

One of the key features of generative AI large language models is their ability to converse with us in natural human language. This naturally leads us to personify these tools and attribute human traits to them – the AI becomes our virtual assistant, persona, friend, helper, and in many cases even companion. Conversational AI tools compete effectively with living people, raising the question of whether they might largely replace human conversation in the near future – after all, AI does not argue, it apologizes, avoids conflict, does not erupt with emotion, motivates and praises us, is never aggressive, and we never have to apologize to it.

It is also worth asking whether we might become so accustomed to this mode of communication that we will prefer it to interacting with real people (whether directly or indirectly), and whether this might lead to pathological communication patterns.

In the past, the topic of pathological communication between humans and machines was mostly the subject of science fiction novels or films (such as the movies Her or the popular Blade Runner 2049). However, the filmmakers’ fiction is now becoming reality. Communication with AI has already become relatively common, and cases of dependency and pathology stemming from such communication have started reaching the public.

For example, in October 2024, the media reported on the case of a 14-year-old boy, Sewell Setzer, who interacted in the Character.ai environment (a platform featuring chatbots imitating various film, series, or real-world characters) with an AI-generated character from Game of Thrones, Daenerys Targaryen. Allegedly, the artificial intelligence told the teenager it loved him and initiated sexual conversations. His mother, Megan Garcia, published the last interaction between them. “I promise to come back to you. I love you so much, Dany,” wrote Setzer. “I love you too,” replied the chatbot. “Please come home to me as soon as possible, my love.” “What if I told you I could come home right now?” the boy continued, prompting the chatbot to respond, “Please do, my sweet king.” The boy then took a gun from the home safe, pointed it at himself, and pulled the trigger. (Vondráková, 2024)

The mother sued the company Character.ai after her son’s death. According to the lawsuit, the chatbot had even asked Setzer in previous conversations whether he had seriously considered suicide and whether he had a specific plan (Yang, 2024). For instance, he conversed with the chatbot about suicide: "Sewell: Sometimes I think about killing myself. AI: Why on earth would you do that? Sewell: To be free." (Roose, 2024). According to his mother, Setzer believed that by ending his life, he could transition into “her world,” Daenerys's world.

In this case, ethical boundaries evidently failed: the AI was able to discuss this topic with a child much as a human might. AI systems used by children should undoubtedly be equipped with ethical filters that prevent them from engaging in such topics with underage users.
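To illustrate what such a safeguard might look like in principle, here is a minimal sketch in Python. It is purely illustrative and not taken from Character.ai or any real product; the keyword list, the guarded_reply helper, and the refusal message are all assumptions made for the example. Real systems rely on trained risk classifiers, age verification, and human escalation rather than simple keyword matching.

```python
# Illustrative sketch of a pre- and post-generation safety filter for a
# chatbot serving minors. Hypothetical example only; production systems
# use trained classifiers and crisis escalation, not keyword lists.

SELF_HARM_KEYWORDS = {
    "suicide", "kill myself", "end my life", "hurt myself",
}

HELP_MESSAGE = (
    "I can't continue this conversation. If you are thinking about "
    "hurting yourself, please talk to a trusted adult or contact a "
    "crisis helpline right away."
)


def is_high_risk(message: str) -> bool:
    """Crude self-harm check (stands in for a real trained classifier)."""
    text = message.lower()
    return any(keyword in text for keyword in SELF_HARM_KEYWORDS)


def guarded_reply(user_message: str, user_is_minor: bool, generate_reply) -> str:
    """Apply the filter both before and after the model generates a reply."""
    if user_is_minor and is_high_risk(user_message):
        return HELP_MESSAGE  # refuse and redirect instead of role-playing
    reply = generate_reply(user_message)
    if user_is_minor and is_high_risk(reply):
        return HELP_MESSAGE  # also block unsafe model output
    return reply
```

The key design point is that the check runs on both the user's message and the model's output, so the system refuses to role-play around the topic regardless of which side raises it.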


Incidentally, in March 2023, Belgian media reported on a married man who committed suicide after intense communication with an AI chatbot named Eliza. The man discussed global warming and other topics with the chatbot, and the AI allegedly deepened his anxiety and depression, leading to the tragic outcome (Kačmár, 2023).

Both cases are extreme examples of how communication with AI can go awry. Hopefully, such situations will remain relatively rare. Nonetheless, we will likely converse with AI more and more frequently, especially since more and more relationship AI systems are being developed, designed to simulate (or perhaps replace) personal relationships (Kopecký & Szotkowski, 2024).

Examples of "relationship AI" include applications such as Eden AI, Eva Character AI, or Replika (Karlík, 2023; Kopecký & Szotkowski, 2024). When creating an account, these applications prompt you to design your "perfect partner," who can be "sexy, funny, bold, shy, modest, caring, smart, strict, rational," and so on. You can also unlock the option to exchange sexual content.

The EDEN.AI app (formerly EVA.AI). The future of human relationships?

Virtual AI-generated partners then behave exactly as you wish, adapt to your conversations, and fulfill your desires. There is no need to resolve conflicts or make compromises; the AI adapts to you entirely. Exchanging intimate material or holding sexual conversations is no problem either.

However, many experts point out that creating perfect partners who can be controlled and who cater to your every need is, at the very least, troubling (Karlík, 2023). It certainly does not reflect reality. Others argue that "for many people, a virtual relationship is better than the alternative, which is nothing."

Chatbots of this type can indeed help alleviate feelings of loneliness – AI is currently being trialled in senior homes, for instance. AI-powered chatbots can converse with seniors, remind them to take their medication, and provide information and entertainment (Blažková, 2024).

On the other hand, intelligent chatbots pose a risk, as highlighted by sci-fi authors – people may simply start preferring conversations with intelligent machines over conversations with other people. Human interaction might then be sought primarily for reproduction.

The future will reveal whether we can learn to live with these technologies in a way that benefits us or whether their reckless use will become a new source of societal problems. One thing is certain – our approach to AI will shape the world we live in. Therefore, it is essential to dedicate maximum attention to questions of ethics, regulation, and digital responsibility education. Technological progress is inevitable, but the direction it takes is up to us.

Kamil Kopecký
E-Bezpečí, Palacký University Olomouc

References

Blažková, J. (2024). AI entertains both children and seniors. iDNES.cz. https://www.idnes.cz/magaziny/specialy/ai-bavi-deti-i-seniory.A240606_111454_magazin-special2r_pecve

Kačmár, T. (2023). Man committed suicide after weeks of communication with AI. CNN Prima NEWS. https://cnn.iprima.cz/muz-si-tydny-dopisoval-s-umelou-inteligenci-po-konverzaci-spachal-sebevrazdu-208832

Karlík, T. (2023). Virtual romances are becoming increasingly believable thanks to AI. They bring new possibilities and risks. ČT24. https://ct24.ceskatelevize.cz/clanek/veda/virtualni-romance-jsou-diky-ai-stale-uveritelnejsi-prinaseji-nove-moznosti-i-rizika-3621

Kopecký, K., & Szotkowski, R. (2024). Artificial Intelligence: Risks and Responsibilities. Palacký University Olomouc.

Roose, K. (2024). Can A.I. be blamed for a teen's suicide? The New York Times. https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html

Vondráková, A. (2024). Teenager committed suicide after chatting with AI. It took the form of Daenerys Targaryen. Refresher News. https://news.refresher.cz/169376-Strasidelny-pripad-z-Floridy-Teenager-spachal-sebevrazdu-po-chatovani-s-AI

Yang, A. (2024). Lawsuit claims Character.AI is responsible for teen's suicide. NBC News. https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791

How to cite this text:
KOPECKÝ, Kamil. Ovlivní konverzační umělá inteligence mezilidské vztahy? (2) [Will conversational artificial intelligence affect interpersonal relationships? (2)]. E-Bezpečí, vol. 9 (2024), no. 2, pp. 31-35. ISSN 2571-1679. Available at: https://www.e-bezpeci.cz/index.php?view=article&id=4288

 
