Artificial intelligence (AI) has steadily become part of daily life, as Google Maps, face recognition, and autocorrect demonstrate. The latest breakthrough to join their ranks is conversational AI.
Conversational AI covers the technologies that let computers and other machines hold automated text and voice conversations, enabling human-like interactions between people and devices. As more and more platforms adopt AI-based customer service, it’s no surprise that the conversational AI market is projected to reach $18.02 billion by 2027. Even some of the most popular social media platforms, such as Facebook, Telegram, Twitter, and Skype, have deployed conversational AI solutions in recent years – and you may not even have noticed. That in itself demonstrates both the potential and the dangers of the technology.
As we move further into the digital age, you can expect to encounter more conversational AI solutions. That being said, you also need to be aware of their implications for cybersecurity.
Data Integration for the AI System and Overall Data Security
Much of modern AI is built on machine learning, which means an AI system needs vast amounts of data to train on. That is not inherently bad or wrong, but the data can be misused, and individuals are often unaware of how much data they give up simply by being online.
Case in point: a conversational AI chatbot service called Lee-Luda was introduced in South Korea in 2020. ScatterLab, the developer, claimed the service was trained on about 10 billion conversation logs from KakaoTalk, the country’s top messaging app. Lee-Luda skyrocketed to popularity, holding conversations with over 750,000 users in its first two weeks. However, people started to question the integrity of the service when it began using verbally abusive language and making explicit comments; it even exposed people’s names, home addresses, and other personal information in its responses. ScatterLab confirmed that Lee-Luda learned this behavior not from its users but from the original training dataset – meaning the company had failed to filter sensitive data out of it.
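To make that failure concrete, here is a minimal, hypothetical sketch of what filtering sensitive data out of conversation logs before training can look like. The regex patterns and sample logs are illustrative assumptions only; a production pipeline would rely on trained named-entity recognition and human review rather than a couple of patterns.

```typescript
// Naive, illustrative PII scrubbing for conversation logs before they
// are used as training data. Patterns and sample data are hypothetical.

const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g; // crude email matcher
const PHONE = /\+?\d[\d\s-]{7,}\d/g;      // crude phone-number matcher

// Replace anything that looks like PII with a placeholder token.
function redact(utterance: string): string {
  return utterance.replace(EMAIL, "[EMAIL]").replace(PHONE, "[PHONE]");
}

const rawLogs = [
  "call me at +82 10-1234-5678 tonight",
  "my email is jane.doe@example.com",
  "what's the weather like tomorrow?",
];

// Scrub every log line before it can enter the training set.
const trainingSet = rawLogs.map(redact);
console.log(trainingSet);
// -> [ "call me at [PHONE] tonight", "my email is [EMAIL]", ... ]
```

Even a basic step like this would keep raw phone numbers and email addresses out of a model’s training data; skipping it entirely is exactly the gap the Lee-Luda incident exposed.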
This sets a dangerous precedent for AI. Users only realized five years after their conversations had been collected that the data was being used to train an AI system, highlighting how little control people actually have over their personal information once it has been gathered. Here is Dasha’s ML team lead’s take on ethics and AI.
Potential Cyber Attacks and Incidents
While conversational AI will undoubtedly improve business processes, it can also give hackers a new channel for their cyber attacks. AI systems already hold a great deal of information about individuals, telling an attacker which arguments are most effective against each target. Coupled with conversational AI’s uncanny, human-like ability to speak to you, this can easily become a recipe for spam calls, fraud attempts, and phishing attacks.
Some products, like Dasha AI, let their users create conversational AI apps that are indistinguishable from a human on the phone. Since, in theory, conversational AI solutions could be used to extract information from individuals, it is up to technology companies to build in failsafes and vet how their technologies are used.
Workforce Education
It’s important to understand how conversational AI differs from conventional chatbots. A chatbot follows pre-set command-response rules. A conversational AI application, by contrast, converses with the user through a pipeline of deep learning technologies: speech-to-text, natural language processing, natural language understanding, natural language generation, and text-to-speech (the sketch below contrasts the two approaches). These systems have created new demands on how cybersecurity measures are put in place. Plus, the rise of remote work has heightened cybersecurity awareness. For a more in-depth look, have a read here.
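As a rough, hypothetical sketch (all function names and stub outputs are invented for illustration), the contrast looks like this in code: the rule-based bot is a lookup table, while the conversational pipeline chains stages that, in production, would each be a trained model.

```typescript
// 1. A conventional chatbot: fixed command-response rules.
const rules = new Map<string, string>([
  ["hours", "We are open 9am-6pm."],
  ["refund", "Refunds take 5-7 business days."],
]);

function ruleBot(command: string): string {
  return rules.get(command.toLowerCase().trim()) ?? "Sorry, I don't understand.";
}

// 2. A conversational AI pipeline. Each stage below is a stub standing in
// for a trained model; only the flow of data between stages is real.
function speechToText(audio: Uint8Array): string {
  return "when do you open"; // stub STT output
}

function understand(text: string): { intent: string } {
  // stub NLU: a keyword check standing in for an intent classifier
  return { intent: /open|hours/.test(text) ? "hours" : "unknown" };
}

function generate(intent: string): string {
  // stub NLG
  return intent === "hours" ? "We open at 9am, see you then!" : "Could you rephrase that?";
}

function textToSpeech(reply: string): Uint8Array {
  return new TextEncoder().encode(reply); // stub TTS
}

// The pipeline glues the stages together: audio in, audio out.
const audioIn = new Uint8Array(0);
const audioOut = textToSpeech(generate(understand(speechToText(audioIn)).intent));
console.log(ruleBot("hours"));    // the rule bot needs the exact command
console.log(audioOut.length > 0); // the pipeline handled free-form speech
```

The point of the contrast: the rule bot fails on anything outside its table, while the pipeline’s NLU stage maps free-form phrasing onto an intent, which is exactly what makes conversational AI both more useful and harder to secure.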
Fortunately, this shift has also given the workforce more access to information and training on countering these cyber attacks. This is particularly evident in how tertiary education has widened the availability of cybersecurity training by offering online courses, allowing professionals to upskill remotely at the same level as a traditional university program. Those who take an online master’s program in cybersecurity get remote access to virtual labs where they can practice real-world scenarios, and with AI now such a fundamental part of the modern world, that includes both defensive and offensive techniques related to AI cyber threats. Companies that hire these graduates will be able to teach their workforce about the dangers of machine learning, as well as how to get the most out of conversational AI for business. Companies looking to adopt conversational AI need to be more aware of, and responsible for, how they collect and use datasets.
Dasha.AI’s Take on Addressing These Risks
From the get-go, Dasha was built with security in mind. Its baseline security includes HTTPS encryption of all external web traffic, OpenID Connect for API authentication, and encryption of SIP (TLS) and RTP (SRTP) traffic. Sensitive data is encrypted and transmitted securely, with VPN tunnels for voice communications. The platform also avoids storing recordings and transcripts on Dasha servers by writing them directly to your Azure instances.
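For readers unfamiliar with OpenID Connect, here is a hedged sketch of what API authentication with it typically looks like from the client side. The endpoint URL, client credentials, and scope are hypothetical placeholders, not Dasha’s actual API; the pattern shown is the standard OAuth 2.0 client-credentials flow that OpenID Connect builds on.

```typescript
// Hypothetical client-credentials flow: exchange a client ID and secret
// for a short-lived access token, then call the API over HTTPS with it.
// All URLs and parameter values below are placeholders.

async function getAccessToken(): Promise<string> {
  const resp = await fetch("https://auth.example.com/connect/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: process.env.CLIENT_ID ?? "",
      client_secret: process.env.CLIENT_SECRET ?? "",
      scope: "api",
    }),
  });
  if (!resp.ok) throw new Error(`token request failed: ${resp.status}`);
  const { access_token } = (await resp.json()) as { access_token: string };
  return access_token;
}

// Every API call then carries the bearer token; nothing is sent in the clear.
async function callApi(path: string): Promise<unknown> {
  const token = await getAccessToken();
  const resp = await fetch(`https://api.example.com${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return resp.json();
}
```

The appeal of the pattern is that credentials are exchanged once, at a single hardened endpoint, and every subsequent request is authorized with a revocable, expiring token.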
With Dasha.AI, your data stays secure and under your control.