
What potential chatbot security threats could AI introduce?

New chatbot AI technology comes with potential security risks. UC expert Jon Arnold explains what to be aware of when deploying new AI-based voice technology.

Innovations around speech AI technology are relatively new, and the implications for chatbot security aren't well understood. Speech technology holds great promise and offers new forms of engagement for end users. However, its use can have unintended consequences.

The big changes for speech AI technology involve digital assistants and conversational interfaces that enable real-time dialog between workers and chatbots. The main security implication is the accuracy of speech recognition, since chatbot AI relies on it to automate tasks and improve productivity. Where the technology isn't fully evolved, inaccurate speech recognition could create chatbot security problems: incorrect messages could be sent, or messages, documents, files and other sensitive information could be delivered to the wrong people.

More problematic, hackers -- inside or outside your organization -- could assume control of digital assistants. This would allow them to eavesdrop on workers or meetings and monitor employee behavior without their knowledge -- or worse, listen in on private, sensitive discussions. Going further, hackers could potentially mimic voiceprints to impersonate key personnel, opening the door to identity theft, fraud, blackmail or extortion.

These are only a few of the potential security problems that can come from using speech AI technology, especially deployments that are not fully evolved or proven in enterprise settings. While chatbot security threats won't be the norm, these scenarios are certainly plausible if you don't carefully vet the offerings.

To minimize chatbot security risks, speech technology should initially be deployed in a controlled manner. Include only workers familiar with the potential issues, and, preferably, roll out speech AI in scenarios involving limited amounts of sensitive information. IT also needs to exercise patience, as the risks in more general use cases will decrease as the applications mature.
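One simple way to keep an initial rollout controlled is to gate the assistant behind an allowlist of vetted pilot users. The sketch below illustrates the idea; all names (`PILOT_USERS`, `handle_voice_command`) are hypothetical and not tied to any particular product.

```python
# Hypothetical sketch: restricting a speech-assistant pilot to an
# allowlisted group of trained users, so early issues surface among
# workers who understand the risks.

PILOT_USERS = {"alice@example.com", "bob@example.com"}  # vetted pilot group

def handle_voice_command(user_id: str, command: str) -> str:
    """Process a voice command only for allowlisted pilot users."""
    if user_id not in PILOT_USERS:
        # Outside the pilot: refuse rather than act on the request.
        return "Speech assistant is not enabled for this account."
    # Placeholder for the real assistant logic, which is out of scope here.
    return f"Processing command: {command}"
```

As the technology proves itself, the allowlist can be widened gradually instead of enabling the assistant for everyone at once.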

IT should also be proactive in educating workers on the new risks that come with AI-driven speech technology. Remind users to use strong passwords and to be mindful of how easily their voices can be monitored by digital assistants.
