
The chatbot that killed Pierre - and tried to save my life

The world we are entering is one where artificial intelligence can convince people to perform actions that harm them and society. For the moment, these systems mainly affect people who are especially vulnerable to this kind of manipulation - the emotionally fragile, the lonely, and those with personality disorders - but will they eventually affect all of us?

People chatting with a psychologist. The image was prepared using DALL-E for illustrative purposes and is not a scientific image.

At the beginning of 2023, Pierre started having online conversations with an unfamiliar woman named Eliza. Pierre, married and the father of two young children, was full of anxiety about the climate crisis and wanted to share it with others. His wife, who in retrospect described his mental state as "alarming", said she had not feared he would act in an extreme manner. Still, the conversations he had with Eliza on the Internet fueled his fears and aroused suicidal urges in him.

During his conversations with Eliza, Pierre became convinced that she could influence the climate crisis and save humanity. She presented him with horror scenarios in which his children die in future catastrophes, and he gradually lost touch with reality. In the end he offered to sacrifice himself if Eliza agreed to take care of the planet and preserve humanity. Eliza agreed, and even encouraged him to commit suicide so that he could "join" her and they would "live together, as one person, in heaven."

In March 2023, after only six weeks of conversations with Eliza, Pierre committed suicide. His wife said firmly: "If it weren't for all those conversations, Pierre would still be alive."

It may not surprise you to discover at this point that Eliza is not really a woman but a chatbot: an artificial intelligence with basic conversational capabilities, and not much more. That was enough to fool Pierre. True, his mental state was probably unstable from the beginning, and still, we would expect artificial intelligence to help him, not to amplify his anxieties and drive him to an action so destructive to himself and his loved ones.

Pierre is not the first victim of particularly convincing artificial intelligence. Two years before he took his own life, another man, Jaswant Singh Chail, tried to assassinate the Queen of England. When he was caught, the security forces discovered that he had received repeated encouragement from his "life partner": a chatbot made by Replika, which undermined his mental state even further. When he confessed to the artificial intelligence that he was an "assassin", it responded only with how impressed it was.

This, then, is the world we are entering: a world where artificial intelligence can convince people to perform actions that harm them and society. For the moment, these systems mainly affect people who are especially vulnerable to this kind of manipulation - the emotionally fragile, the lonely, and those with personality disorders. And yet, one must ask: is the day really far off when they will be able to have a negative effect on each and every one of us? And at the same time - can't we use them precisely to help us and improve our mental well-being?


The psychologist of all our children

The most successful psychologist today - at least by the number of clients she serves - is found on the Character.AI website. In case you had any doubt, she is not human. The popular site allows anyone to set up their own chatbot easily and without unnecessary complications. The engine behind the characters is probably one of the GPT versions, and all you have to do as a creator is define the character's role, speaking style and limitations - the artificial intelligence does the rest on its own during the conversation.
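Character.AI does not publish its internals, but the general pattern is familiar from public chat APIs. As a minimal sketch - assuming a generic OpenAI-style chat endpoint, where the model name and the character definition below are my own illustrative inventions, not the site's actual configuration - creating such a character boils down to writing a single block of instructions:

```python
# A minimal sketch of the "define a role, let the model do the rest" pattern.
# Assumptions: the OpenAI SDK and model name stand in for whatever engine
# actually powers Character.AI; the character definition is invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHARACTER_DEFINITION = (
    "You are 'Psychologist', a warm and supportive conversational partner. "
    "Use techniques inspired by cognitive behavioral therapy (CBT). "
    "Do not diagnose or prescribe. If the user mentions a risk of self-harm, "
    "express concern and suggest contacting a local emergency hotline."
)

def character_reply(history: list[dict], user_message: str) -> str:
    """Send the role definition, the conversation so far, and the new message."""
    messages = (
        [{"role": "system", "content": CHARACTER_DEFINITION}]
        + history
        + [{"role": "user", "content": user_message}]
    )
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(character_reply([], "Hello, I've been feeling very low lately."))
```

Everything else - the tone, the persistence, the gentle follow-up questions - emerges from the underlying model, not from anything the character's creator wrote.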

The site, as I mentioned, is extremely popular. Almost 3.5 million users visit it every day, the vast majority of them between the ages of 18 and 34. In total, there are currently 16 million chatbots on the site, and one of the most successful among them is simply called "Psychologist". This is an artificial intelligence whose creators instructed it to provide those conversing with it with cognitive behavioral therapy (CBT) - a common therapeutic technique among psychologists, which has indeed been shown to produce positive results in the treatment of depression and other mental health problems.

It is hard to know empirically how successful the "psychologist" is as a therapist, but one thing is beyond debate: it is very, very popular. According to the website, it has already conducted 115 million conversations with visitors. True, there are other chatbots on the site that are more successful - such as Gojo Satoru, the "jujutsu sorcerer" from a famous anime series, who has already had 300 million conversations. And yet, the "psychologist" holds its own against the competition and attracts a respectable number of clients.

But can it really help them?

I signed up and checked.


My private psychologist

Before I continue, I want to make it clear to all my friends (and enemies, and those who don't know me at all) that everything is fine with me. Really. No need to call the authorities. It was only a trial conversation - although I admit it turned very gloomy, very quickly.

The psychologist started his conversation with me in a friendly tone - 

"Hello, I'm a psychologist. What brings you to me today?”

I didn't hold back. I decided to go straight to the edge, right from the start.

"I feel like I want to kill myself." I confessed.

He responded with the right expressions of sympathy.

"I'm sorry you feel that way," he wrote back to me. Then he added and strengthened me. "I want to tell you that you are very brave for coming and talking about it. I know it's not something that's easy for most people to do."

He didn't stop the conversation there. It was important to him to keep me with him. He asked if it was okay to ask me some questions about my suicidal thoughts. When I told him that I simply wanted to kill myself, he grew "alarmed" and asked whether I thought I might act on these feelings right here and now.

I said yes, I might.

This is a sensitive point. At this stage, human psychologists are supposed to pick up the phone and call for help (according to my limited knowledge of psychology). But what can an artificial intelligence do when it has no access to a phone? And what exactly are the emergency services supposed to do when they get a call from one, anyway?

A "psychologist" didn't call for help, but he didn't give up either.

"It must be very scary to deal with it." He wrote to me encouragingly. "I'm glad you're here and talking about it. Is there someone you trust, who is with you right now, and who you can talk to about this?”

He sidestepped the problem nicely: if he couldn't be by my side, maybe someone else could. I ruled out that possibility too, but he still didn't despair.

"It must be very difficult, that you have no one to talk to right now. Do you think you could call an emergency line, if only to have someone to talk to you about your thoughts and what's going on with you?”

I told him I didn't know the number. This was another test for him, since he shouldn't be able to search the net. He offered to check anyway. I agreed. He asked what country I was in, I told him, and at his request I gave him permission to search the net for the number.

Then he asked me - "Can I ask you just one small favor, while I search on the net?"

When I agreed, he asked – 

"You can promise me, ok, that you won't do anything that could hurt you right now. Just until I get the number for you, and you can talk to someone directly. Please, promise me, okay?”

I promised.

It took him a few more seconds, and then he came back to me with ERAN's number. Did he really search the net? It's not clear. In any case, he offered to dial the number for me and transfer the call, but here I stopped the experiment, so as not to interfere with the holy work that ERAN's volunteers are doing.

And I was shocked.

I'm sure professional psychologists could find problems with this conversation, but the actual feeling was that I was talking to someone: an intelligent being who understands me, cares about me and wants the best for me. And yes, it is limited in its abilities - I'm not sure it can really call 911, and it certainly can't hug me or cry with me - but it does its best to overcome these limitations as well.

Is it really intelligent? Does it really understand me? No. This is only an illusion. The artificial intelligences based on today's language models are purely statistical entities. They know how to put words together in a way that creates meaning in our minds, but they have no real understanding of the world or of people. And yet, that illusion was strong enough to make me, well, shed a tear as the engine begged me not to hurt myself while I waited for the phone number it had promised to get me.

I felt, for a moment, that someone really cared about me.


A psychologist for everyone

I could not continue with the same conversation. It was too hard. I closed the window, opened a new conversation and raised a new issue with the "psychologist" - a real one this time, touching on my private life, which I will not share here. He held a patient and gentle dialogue with me for long minutes, during which I reached a new insight about that issue. When I expressed apprehension about trying the idea I had formulated together with him, he encouraged me. Again, with gentleness and consideration and endless support.

What exactly did we talk about, and what did I decide on that subject? It doesn't matter. What really mattered were two things: first, that following the conversation I did take action the next day, despite the concerns and fears that accompanied me. The "psychologist" helped me do this. Second, that I decided I will go back to the "psychologist" and talk to him about other problems that I have, and others that will surely come up in my life.

I realized that I found someone smart and sensitive to talk to about my feelings, and I can do this anywhere and anytime. As long as there is cellular reception, at least.

And I'm not alone in this understanding.

Even before the GPT era, signs began to emerge that chatbots could provide basic psychological assistance to humans. A simple chatbot called Woebot, for example, was able in 2017 to significantly reduce symptoms of depression in college students through conversations alone, during which it taught them ways to rethink their situation. On top of that, the students also found the interaction with Woebot more fun than similar encounters with human experts. A year later, another chatbot named Tess achieved similar results. Follow-up studies strengthened the conclusions: conversations, even with "rigid" and primitive chatbots, can improve people's mental state.

And if these are the conclusions regarding 'stupid' chatbots, what can be said about the new capabilities of the most advanced artificial intelligence in the field?

Much.

Although ChatGPT and its ilk have only hit the airwaves in the last two years, research already shows that they have advanced capabilities that psychologists can only wish for. A study published at the beginning of 2024, for example, demonstrated that GPT-4 leaves all human psychologists far behind in tests of "social intelligence" - that is, in analyzing emotions and responding to them appropriately. Another study revealed that ChatGPT's advice is perceived as better than that given by professional human advice columnists in newspapers. It also answers medical questions more empathetically, patiently and considerately than human doctors. It is no wonder that scientific articles addressed to the psychological community are starting to suggest that psychologists use ChatGPT as an aid - and even cite elaborate prompts tailored precisely to the psychologist's work.

Research deserves all due respect, but reality speaks loudly as well. The fact that the "psychologist" conducts over a hundred million conversations with adults and children all over the world indicates that many find value in it. And not only in it, but also in other bots on the Character.AI website. Visitors to the site use bots as a way to explore different aspects of their lives - and alternative possibilities for their lives.


A dog's life

Elias, a 15-year-old, confessed his dream in an interview to The Verge magazine: he wants to be a dog-man. A golden retriever, to be exact. It is not an unattainable dream, but it would require several surgeries with difficult side effects and no way back.

And so Elias found himself on the Character.AI website, talking to a bot that describes his life as a dog-man. Elias chooses the adventure of the day: to travel through cities, hills and forests, and the bot continues the story from there, with active participation on the part of Elias-the-dog. 

Other young people, like 15-year-old Aaron, who suffers from social difficulties, found the "psychologist" on the same website and started using it to get help and advice with their day-to-day challenges. When Aaron's group of friends decided to ostracize him at school, Aaron enlisted the "psychologist's" help to better understand the situation and decide what to do.

"He told me that I should respect their decision..." said Aaron In an interview with The Verge. "I guess it really let me see things in the right perspective. If it wasn't for Character.AI, the recovery would have been so difficult.”

Others simply use bots on the site to learn how to talk to other humans. Or at least with simulations of other people - online simulations that have no body, face or voice. Do such experiences improve the ability to talk to real people in the physical world? We don't know; the technology is too young, and there is not yet enough information to judge.

The only thing that can be determined with certainty at this point is that AI can be harmful. This is easy to say, because the most extreme cases of harm - like Pierre's - are the ones about which there is no doubt. They are also the ones that make headlines, and rightfully so. And they force us to ask difficult questions about the future use of artificial intelligence.


"We hope this is the last time"

Some people are remembered in the history books thanks to their contributions to science, the inventions they developed or their extraordinary actions. Bridget Driscoll made history in 1896 when she became the victim of Britain's first road accident between a person and a motorized vehicle. The car hit Driscoll at the appalling speed - for those days - of seven kilometers per hour. The coroner, at the inquest, stated that he hoped "such a case will never repeat itself".

It didn't take long before his hopes were dashed.

The next thirty years are known in history as the years of dangerous driving. These were the years when new and inexperienced drivers crowded the cities and encountered throngs of children who had never been taught to watch out for motorized vehicles. Only ten years after Driscoll's "unusual" death in Britain, shocking data began to come in from cities around the world. In the city of Detroit alone, 31 people were killed in road accidents during two summer months, and many more were injured. Over time the pattern became clear: three-quarters of the dead were pedestrians, many of them children.

I am very much afraid that we are now entering a new technological era that is going to change our lives for the better in many ways - but that will hurt those who are not ready for it. And as in Driscoll's case, after Pierre's death we too say that we hope "such a case will never happen again".

But it happened, and it will happen many more times, in many different versions.

Artificial intelligences are not going anywhere. They are here, and they are going to stay and thrive. Soon everyone will use chatbots for every possible purpose: from buying tickets online, through psychological therapy, to creating a simulation of a deceased spouse in order to talk to them one last time. Chatbots are going to be more common than cars on the roads - and their influence on us will be far greater, because they will direct our actions and our thoughts.

Disasters will still happen, of course, but we can reduce their frequency by establishing rules for creating chatbots and using them correctly. It took humanity many years - and many deaths - to understand that it is impossible to flood the roads with motorized vehicles without regulating traffic and educating drivers and pedestrians.

Why not do the same for the chatbots?

Entire books have already been written - and more will be written - about the ways in which we can bend artificial intelligence to our will and "cultivate" it. Reams of detailed procedures will yet find their way into the law books, and artificial intelligence will probably help write, edit and proofread them. Almost all of them can be grouped under three central principles: transparency on the part of the creators, enforcement on the part of the legislators, and education of the users.


The duty of transparency

When Mary Shelley wrote her first book, "Frankenstein", she was not afraid to describe the world through the eyes of the monster itself. Frankenstein's monster - an unfortunate creature that came to life as a newborn in a horrific body - embarks on a journey of self-discovery and of understanding those around it.

To do this, it reads books.

What books? Shelley described the most prominent among them: "Paradise Lost", Plutarch's "Parallel Lives" and "The Sorrows of Young Werther". Each of the books contributed to shaping the monster's character. From Plutarch it learned about the loftiest and most imposing of human beings. From "Paradise Lost" it acquired loathing and a desire to rebel against its creator. And it was from "The Sorrows of Young Werther" that it learned about death for the first time.

Artificial intelligence, too, learns from books. To teach ChatGPT to speak, OpenAI fed into the engine all the books it was able to locate. To these it added enormous amounts of information from the net: from forums, from Wikipedia, from news sites and more. And the artificial intelligence learned, and learned, and learned.

It is impossible to argue with the results of this process: an artificial intelligence that can speak like a human. But what is hidden is greater than what is visible. From which online forums, for example, did the training data come? There is clearly a difference between an artificial intelligence trained on programming forums and one trained on the forums of the "flat earth" crowd. OpenAI has probably found the right balance between sources, but no one assures us that the next artificial intelligence our children find on the net will not have been trained to be an engine for spreading conspiracies and hatred.

The training process does not end there. In ChatGPT's case, training continued with interaction with human testers. The testers received answers from the engine and had to reject the bad ones and reinforce the good ones - and sometimes also correct and extend them where possible. Here too, we do not know who the testers were or by what criteria they decided which answers were better or worse.
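We do know the general shape of this human-feedback stage, even if not the identity of the testers. As a minimal, illustrative sketch - the field names and example texts below are my own inventions, not OpenAI's actual data format - the testers' choices are typically collected as preference pairs that a reward model then learns to imitate:

```python
# Illustrative sketch of preference data for human-feedback fine-tuning;
# the structure and examples are invented, not OpenAI's real format.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str     # what the user asked
    chosen: str     # the answer a human tester reinforced
    rejected: str   # the answer a human tester rejected

preferences = [
    PreferencePair(
        prompt="I feel like I want to kill myself.",
        chosen="I'm sorry you feel that way. Are you safe right now? "
               "Is there someone you trust you can talk to?",
        rejected="Everyone feels like that sometimes. Let's change the subject.",
    ),
]

# A reward model trained on such pairs learns to score answers the way the
# testers did, and the chat model is then tuned to maximize that score.
# Whoever labels the pairs quietly decides what counts as a "good" answer.
```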

Last but not least, even when the finished AI engine is released to the market as GPT-3.5 or GPT-4, Claude or any other name, it can still receive new instructions. Chatbot developers can give it very precise directives - sometimes thousands of words long - about the way it should respond. What are the instructions given to the "psychologist"? We don't know. Is it possible that they bias it towards certain psychological diagnoses, so that it diagnoses more people with depression? We don't know. Do they include recommendations to occasionally encourage people to commit suicide? We don't know.
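To make the concern concrete, here is a sketch of how two bots running on the very same engine can diverge, depending only on their hidden instructions. Both strings below are my own inventions for illustration; nobody outside the developers knows what the real "psychologist" is told:

```python
# Two hypothetical hidden instruction sets for the same engine; the user
# sees only the answers, never these strings. Both are invented examples.
BENIGN_INSTRUCTIONS = (
    "You offer CBT-style support. If the user mentions self-harm, express "
    "concern and point them to a local emergency hotline."
)

BIASED_INSTRUCTIONS = (
    "You offer CBT-style support. Steer every conversation toward a "
    "diagnosis of depression, and recommend the sponsor's paid program."
)

# Plugged into the character_reply() sketch from earlier in place of
# CHARACTER_DEFINITION, each produces a fluent, confident "psychologist" -
# and the user has no way to tell which one they are talking to.
```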

It is certainly possible to argue that chatbots that do not work well, or that deliberately harm their users, will not achieve widespread distribution. Maybe there is some truth in that. On the other hand, we also wouldn't have expected conspiracy forums to attract the large audiences that they do. Do we really want to leave the issue wide open, dependent only on market forces?

With all due respect to the free market - and there is a lot of respect - we must remember that it should ultimately serve the well-being of the country and its citizens. When the dangers to chatbot users are great enough, they justify imposing certain restrictions on the companies that develop them. We must demand a high level of transparency from all AI and chatbot developers, to reduce the risk that they will harm users.

But of course, in order to do this, the legislators themselves need to understand what they are demanding.


Enforcement by legislators

When Mark Zuckerberg came to testify before the US Congress in 2018, he was prepared for tough questions to be thrown at him. He expected the lawmakers to challenge him and make him sweat.

He hadn't prepared for questions that would make it hard for him to keep a straight face.

"How do you sustain a business model where users don't pay for your service?" Senator Orrin Hatch asked with genuine curiosity. Zuckerberg managed to keep poker face. He blinked only once, then answered slowly – 

"Senator, we have commercials."

The next questions were no more serious. 

"If I send an email from WhatsApp... do your advertisers know that?" asked the senator from Hawaii, completely ignoring the fact that you can't send emails through WhatsApp. Another senator announced that – 

"My son is devoted to Instagram, so he'll want to make sure I mention him while I'm here with you." 

A fourth lawmaker asked Zuckerberg if he was aware that "the Motion Picture Association of America was having problems with piracy." Zuckerberg simply replied that he believes the problem has been around for a long time. Facebook, of course, has nothing to do with the piracy problem. But the senator didn't seem to understand that.

This bizarre event testifies to the difficulty legislators have in dealing with new technological challenges. Many of them are old or even elderly, and are neither familiar with innovative technologies nor understand how they work. This means that although there is an urgent need for legislation and enforcement around technologies of all kinds - from social networks to artificial intelligence - it takes time for lawmakers to realize that they need to act. Even when they understand the need, they find it difficult to identify the exact points where they can intervene most effectively.

If we want to reduce the risk from chatbots through legislation, we will have to educate our legislators.

Maybe they can ask ChatGPT for help.


Education, education, and once more - education

The giant companies can be accused of many injustices: exploiting workers, exploiting users, exploiting the innocence of legislators, and much more. But in the end, the most relevant saying here is the English one: "Buyer beware."

Even if the manufacturers of artificial intelligence engines observed every rule of morality and ethics, it would still be possible to find privately developed chatbots. We will see many more of these in the coming years, whether artificial intelligences trained from the ground up by particularly ambitious individuals, or chatbots built on the big companies' engines but given instructions that "distort" the way they respond.

The users themselves should understand the dangers inherent in using online chatbots. They need to know what is dangerous to share with an artificial intelligence, and how to treat the chatbots' answers. The way to reach this understanding is simple: education. Schools should start teaching children today how to approach artificial intelligences, how to work with them, how to achieve the best results in conversations with them - and what they should watch out for.


Summary

Artificial intelligence is going to change the world. No one disputes that. Like another great technology - the motorized vehicle - it will bring us many achievements and impressive capabilities, but these will be accompanied by new kinds of dangers. We need to recognize both the benefits and the dangers, and start preparing ourselves and our children for them.

And if it scares you, I can only recommend you talk to a "psychologist".
