AI and Ethics: The Perils Ahead
Ilya Ovchinnikov, ML Research TL · 7 minute read
My name is Ilya, and as a machine learning researcher, it’s kind of my duty to think about AI and the impact it has on our reality. In my previous article, I tackled questions that you might have tried to wrap your mind around: will AI take our jobs – and if so, which ones would be first? And why might that actually be a good thing?
I also brought up another hot topic, one about security standards for AI and privacy concerns it causes. I’d like to discuss these at length, as well as some other dangers that await us on the path to truly ethical AI.
We have already discussed the positive effects of conversational AI at call centers, for both companies and customers. But AI platforms are available to anyone in the world, which makes one think about negative use cases. How can scammers and other ill-wishers use this technology for fraud and spam?
Phone fraud is common. People lose a lot of money and sensitive personal information to scams. Scammers constantly come up with new ways to steal, and it can be difficult to recognize a caller's intentions. Conversational AI could become a driving force behind such schemes. The same goes for spam calls.
Hopefully we can fight it the same way – that is, with AI. The natural language understanding of a dialog system allows us to identify such scenarios and help users recognize them without direct human intervention. Personal AI assistants, like the one we have developed, can detect suspicious callers and block them. We believe AI vendors have to pay attention to fraud detection and spam prevention, and even block such platform users before a call is ever initiated.
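To make the idea concrete, here is a deliberately toy sketch of scam-call screening over a transcript. The phrase list, weights, and threshold are all invented for illustration; a real assistant would rely on a trained NLU model rather than keyword matching.

```python
# Toy illustration of flagging a suspicious call from its transcript.
# The phrases and weights below are made up for this example; a production
# system would use a trained intent classifier, not string matching.

SCAM_SIGNALS = {
    "gift card": 2.0,
    "wire transfer": 2.0,
    "verify your account": 1.5,
    "social security number": 2.5,
    "one-time code": 1.5,
}

def scam_score(transcript: str) -> float:
    """Sum the weights of scam-associated phrases found in the transcript."""
    text = transcript.lower()
    return sum(w for phrase, w in SCAM_SIGNALS.items() if phrase in text)

def flag_call(transcript: str, threshold: float = 2.0) -> bool:
    """Warn the user (or block the caller) once the score crosses a threshold."""
    return scam_score(transcript) >= threshold

print(flag_call("Please read me the one-time code and verify your account"))  # True
print(flag_call("Hi grandma, just calling to say happy birthday"))  # False
```

The point of the sketch is the pipeline shape: understand the utterance, score the intent, act before the user is harmed. Everything interesting happens inside the scoring step, which is where the actual NLU model would go.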
Detecting fraud is also critical for banks. Identity information plays a crucial role in security management, and AI can be a huge help for the finance industry.
The truth is that data is more secure when a dialog system handles it. The system follows data privacy and protection policies and removes humans from the process, which increases overall security. It can detect fraudulent activity and inform card holders. If personally identifiable information has been stolen, a conversational agent can cross-check additional data and predict fraud. Voiceprint and emotion recognition will add yet another layer of protection against financial and identity fraud.
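As a minimal sketch of the "detect fraudulent activity and inform card holders" step, here is a simple outlier check on transaction amounts. The history, cutoff, and z-score heuristic are assumptions made for this example; real fraud models use far richer features.

```python
# Hypothetical illustration: flag a card transaction whose amount deviates
# strongly from the customer's own history, so a conversational agent can
# reach out to the card holder. The z-score cutoff is an invented example.
from statistics import mean, stdev

def is_suspicious(history: list[float], amount: float, z_cutoff: float = 3.0) -> bool:
    """Return True if the amount is an outlier relative to past transactions."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_cutoff

history = [20.0, 35.0, 18.0, 42.0, 25.0]
print(is_suspicious(history, 30.0))   # a typical purchase
print(is_suspicious(history, 900.0))  # far outside the usual range
```

In practice the flag would not block the card outright; it would trigger the dialog system to contact the card holder and confirm the transaction.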
On the other hand, a technology such as voice cloning could also be dangerous. If you can clone someone's voice – a company CEO's, or a grandmother's beloved grandson's – and pretend to be that person, you can scam people on a completely new level (not a call to action, just saying). The good news is that, compared to other phone scams, the deepfake threat is still low. But we need to keep this possibility in mind and research algorithms to detect such cases.
People build relationships with other people by means of dialog. It’s an essential component of social interaction. Will conversational AI change the way we perceive dialog, and will it affect our attitude toward robots? As we use it to communicate with devices and contact centers, it has already started to change our relationship with technology. We begin to feel differently about it. We humanize technologies in a way we never have before.
Humans tend to anthropomorphize machines. Anthropomorphism is the phenomenon of seeing nonhuman objects as having human-like qualities. The effect is especially strong when users can interact with a machine using words. The purpose of dialog systems is to create a better experience for humans, and this is important to highlight: we’re not playing God and trying to create some sort of virtual human.
When interacting with others, people feel emotions – something machines cannot have by design. So how can we make sure that a conversation with a virtual agent is engaging for the user? Researchers have found (p. 155) that anthropomorphizing robots makes them more social in people's minds. This brings us back to the uncanny valley and the Turing test: to make the best impression, we have to aim for a human-like level of communication. AI agents should be able to imitate personality, emotions, and empathy, and to recognize them, in order to deliver the best engagement.
But can conversational AI truly replace humans? Studies show that conversational agents can have a positive effect in certain areas, such as education, therapy, social interaction and friendship. However, a high degree of anthropomorphism can also lead to disappointment if the system fails to meet user expectations.
Lonely, sick or elderly people may benefit from this technology too. Imagine you have been disappointed by the people around you, or have no one close to talk to. Advanced anthropomorphism creates ground for trust and openness. It is even possible to fall in love with Artificial Intelligence, as people today tend to get romantically attached to nonhuman objects.
Even more questions arise if we create a digital avatar, or go as far as to build a real body. It will look like a human and have human characteristics, such as gender, race, age, eye color... In that case, the implications become much harder to tackle.
We cannot predict exactly how the future will unfold, and such applications remain an ongoing discussion. We believe that machines will not replace such fundamental things as human empathy, friendship and love with a simulation of consciousness. The long-term consequences of replacing human contact with technology can hardly be imagined.
Now we are at the beginning of the rapid spread of conversational AI. It affects every part of our lives, and we can already feel it. Decades ago we didn’t have computers and smartphones; now we can barely imagine life without them. This is the era of conversational AI. How will it change the world? Are we witnessing a new technological revolution? How will people perceive it? How can we make people trust it? What about privacy and security? Are emotions important? There are many ethical challenges, and we need to take heed of their possible implications. So let’s do it together.