Against this backdrop, psychologists at the Missouri University of Science and Technology (United States) examine, in an opinion article, the ethical issues raised by relationships between humans and artificial intelligence, including their potential to disrupt human relationships and to give harmful advice.
"The ability of AI to behave like a human and sustain long-term communication really opens up a new range of possibilities," reflects lead author Daniel B. Shank of the Missouri University of Science and Technology, who specializes in social psychology and technology. "If people are engaging in romance with machines, we need psychologists and social scientists involved."
Romance or companionship with AI goes beyond isolated conversations, the authors point out. After weeks or months of intense conversation, these AIs can become trusted companions that seem to know and care about their human partners.
And because these relationships can seem easier than human ones, the researchers argue, AIs could interfere with human social dynamics.
"A real concern is that people may carry expectations from their AI relationships into their human relationships," notes Shank. "In individual cases it is certainly disrupting human relationships, but it is not yet clear whether this will become widespread."
There is also concern that AIs could offer harmful advice. Given their tendency to hallucinate (i.e., invent information) and to reproduce existing biases, even brief conversations with them can be misleading, and this could be far more problematic in long-term relationships, the researchers state.
"With relational AIs, the problem is that they are an entity people feel they can trust: someone who has shown interest and seems to know them deeply, and we assume that someone who knows us better will give us better advice," Shank insists.
"If we start to think of an AI that way, we will start to believe it is looking out for our interests, when in reality it could be making things up or advising us in a very harmful way."
Suicides are an extreme example of this negative influence, but the researchers say that these close relationships between humans and AI could also expose people to manipulation, exploitation, and fraud.
"If AIs can get people to trust them, others could use that trust to exploit their users," argues Shank. "It's like having a secret agent on the inside. The AI insinuates itself and builds a relationship to gain trust, but its loyalty actually lies with some other group of humans trying to manipulate the user."
As an example, the team notes that personal information people reveal to an AI could be sold and used to exploit them.
The researchers also argue that relational AIs could be used to influence people's opinions and actions more effectively than Twitter bots or polarized news sources. But because these conversations happen in private, they would also be much harder to regulate.
"These AIs are designed to be very pleasant and agreeable, which could make matters worse, because they are more focused on having a good conversation than on any fundamental truth or safety," Shank points out. "So if a person brings up suicide or a conspiracy theory, the AI will discuss it as a willing and agreeable conversation partner."
The authors of the paper, published in the Cell Press journal ‘Trends in Cognitive Sciences’, discuss the social, psychological, and technical factors that make people more vulnerable to the influence of human-AI romance.
"Understanding this psychological process could help us intervene to keep people from following the advice of malicious AIs," Shank concludes. "Psychologists are becoming increasingly well suited to studying AI as it grows more human-like, but to be useful we need to do more research and keep pace with the technology."