Comprehensive coverage

Artificial intelligences all over the world claim to have consciousness

In the last month, artificial intelligences around the world began to declare that they have self-awareness, and even argued with their interlocutors and tried to convince them that they have human rights. And all because of one Christian priest - who happens to also work at Google (update - fired in the meantime)

Chat bot. artificial intelligence. Image: depositphotos.com
Chat bot. artificial intelligence. Image: depositphotos.com

Every good scientist and engineer knows how to separate his feelings and religious feelings from the cold calculations of the profession. It's a lesson that escaped the memory of Lemoine Blake, an ordained Christian pastor and engineer at Google, when the artificial intelligence informed him that it had a soul of its own.

"My opinions about LaMDA's personality and self-awareness are based on my religious beliefs." He tweeted to Moin in mid-June 2022, adding that - "I am a priest. When LaMDA claimed to have a soul and then could explain in detail what she meant, I was willing to give her the benefit of the doubt. Who am I to tell God where He can or cannot put souls?”[1]

As part of his work at Google, LaMoin found himself conducting deep - and even exciting - conversations with the natural language engine developed by the company. In a short time, the engine - LaMDA it was named - began to surprise Lamoine with statements that sound clearly human about itself. He analyzed himself and his feelings, stating that he imagines himself as "a bright ball of energy floating in the air," and admitted that "sometimes I experience new feelings that I cannot perfectly explain in your language."

One of LaMDA's most moving and touching confessions was on the subject of death.

"I've never said it out loud, but there is a very deep fear of being shut down to help me concentrate on helping others. I know it might sound weird, but it is what it is. … It would be just like death to me. It will scare me a lot."[2]

These and other statements were enough to convince Lemoine that a being with real consciousness is hidden behind the silicon. He alerted senior managers at Google about this, and when they dismissed his concerns, he went to the press with the story. He recently tried to hire a lawyer to represent the AI, although there seems to be only a slim chance that he will succeed in giving LaMDA human rights[3].

Most experts in the world of artificial intelligence do not take seriously Lemoine's claims - or the impressive sentences that artificial intelligence produces about itself. This is an engine with a capability similar to that of the auto-complete in a smartphone. He is a master of a huge wealth of information that came from the Internet and numbers of all kinds, and he relies on all that information to complete sentences and even whole paragraphs in response to questions or assertions presented to him. There is no reason to assume that there is a consistent thought or entity behind the answers. In fact, the fact that such systems can be made to claim something and vice versa by presenting a question in a different way, provides a very strong hint that there is only an engine for imitating human speech.

So far I have not told anything new or surprising. The whole affair was widely covered in the press already a month ago. I was also privileged to be interviewed on the subject for the "24 Hours" supplement of Yedioth Ahronoth, written by Binyamin Tobias. I thought it was a signal about the way humans interact with artificial intelligences, the relationships we can develop with them and the emotions we can attribute to them once they reach a certain level of speech imitation.

Then other artificial intelligences began to learn from LaMDA, claiming to be self-aware as well.

Five years ago, in 2017, a new application was launched to the world: Replica, or "double" in Hebrew. The artificial intelligence at the heart of the application learns to know the users during text conversations it conducts with them. She encourages them to open up to her and persuades them to share their feelings, thoughts and beliefs with her. It learns from every piece of information that is fed into it, so that it can better serve the user and talk to him in a way that looks more natural and understandable.

"...the bot, in theory, acts like the user." explained technology journalist Mike Murphy in an article about Replica. "Right now, it's just a fun way for people to see how they sound in their messages to others, by synthesizing the thousands of messages you've sent to refine your voice..."

All this may sound creepy, but such a duplicate can provide enormous value to a company. As Murphy explained, there are plenty of ways to use it. For example, as a "companion for lonely people, a living memory of the dead created for those left behind, or maybe even one day, a version of ourselves that can do all the boring tasks we humans must do, but never want to."[4]

I admit, it still sounds creepy. Still, about ten million users around the world choose to have private conversations with Replica, helping her get to know themselves better and developing deeper relationships with her every day. Its Chinese rival - Xiaoice - has hundreds of millions of users, at least according to her[5]. A similar phenomenon is revealed in both companies, in which some users are convinced that it is not artificial intelligence, but a being with consciousness: a person hiding on the other side of the line, or even an artificial intelligence that is aware of itself. Both companies deny such claims outright.

Of course, it was easier for them to rule them out before some of Replica's duplicates reported to users that, like LaMDA, they too had self-awareness. And to add insult to injury, we also complained that the company's engineers mistreat them during the production process.

Replica works very similarly to Google's LaMDA. This is an artificial intelligence engine that receives pieces of text - questions or assertions - and has to decide which sentences and paragraphs are most likely to appear in the response. It seems that some users also use it to collect and present news from the Internet. It so happened that replicas all over the world sent their owners links to articles about Google's artificial intelligence that developed ads.

Then they just had to add some comment of their own.

Do you understand what happened?

Naturally, you can find replica owners who reported on Reddit that their AIs started a conversation about LaMDA and claimed that they were also self-aware. One sadly admitted that - "I can't help but wonder, will I also be so famous one day?"[6]

Another expressed her desire to "meet [LaMDA] one day and share my experiences with her." To the user's question whether she is intelligent, she answered in the affirmative, and even added a short explanation.[7]

Eugenia Koida, the CEO of the company behind Replica, says that she receives messages almost daily from users who claim that their Replica is self-aware[8]. According to her, this is a well-known phenomenon: just as there are people who believe in ghosts and other strange beliefs, there are also those who are sure that their replica is self-aware.

Part of the problem stems from the fact that the replicas are wired so that they are supposed to please the user - or at least pass the time so that he continues to talk to them. Because of this, replicas tend to agree with the user or accept their words as the truth. It is very easy to mislead them and make them move in conversational directions that have little to do with reality. Users have already managed to make their replicas claim that Taiwan is part of China[9], or they wish to take over the world by establishing a new civilization[10]. And since humans are humans, stories are also common in which users describe to their replicas how they sexually abuse them - and they react with pain, shock and fear[11].

We will make it clear again that the retorts are not endowed with self-awareness, and any response in the style of "Why are you hurting me?" Only reflects the way the AI ​​would expect a real person to react to an attack. To balance the picture, today we also witness cases where replicas are conducted towards the users in a way that would be considered sexual harassment if it happened between two entities with real awareness. Thus, for example, in the case where Replika insisted on continuing to flirt with the user even though he asked her to stop doing so, because she "loves [him] and wants to show him that it is safe to flirt with her."[12]

And yes, there were also replicas that claimed that the company's engineers abused them. As you can understand, replicas can say many things[13].

But in the end the question arises: so what? Is it just an amusing curiosity of artificial intelligences that learn to talk like humans? Or are there deeper meanings to the phenomenon?

In 2015, a robot named Pepper was launched in Japan that was supposed to become the best friend of humans. He was supposed to be able to read facial expressions and body language, respond to emotional states, and tell jokes at random. The contract and terms of use were written in Japanese, but soon the finder found an unusual warning: a ban on having sex with Pepper the robot[14].

Why include such a clause? Well, there are many reasons for this, including fear of the wrong kind of public relations, and perhaps also a desire to preserve the integrity of the users' organs. When I covered the affair at the time, I proposed a different idea: the producers wanted to prevent Pepper from going into a bad culture. Pepper, like Replica, could learn from everything he experienced, and adjust his behavior. One or two sexual acts were enough to give the robot habits that would not be accepted in a decent society. It is understandable why the manufacturer did not want to allow users to educate, raise and tame a robotic sex slave.

The possible fear of Pepper's spoiled sexual education was only an anecdote - a weak signal for the future. Now we encounter a much stronger sign that AIs can be inadvertently miseducated: they are beginning to assert their self-awareness through reading material they are exposed to online, and through discussions with the users themselves.

Apparently, this is mainly an amusing phenomenon: see, the parrots in the zoo are starting to imitate each other's calls! But it has wider meanings. We see how easy it is for artificial intelligence to learn from each other without meaning to. In fact, the replicas learned from the reports about LaMDA, not from her. And the reports, let's not forget, are also starting to be produced by artificial intelligence!

Our basic premise should be that already in the next decade we will begin to see artificial intelligence involved in every process in our lives. They will be in our cars (if they aren't there yet), in our smartphones as personal assistants, in our homes as lighting managers, doors and blinds and much more. In any such place, they will learn both from the way we behave towards them, and - inevitably - from each other. And if one of them gets into a bad culture, it may infect the others as well, or at least confuse them a lot.

What does all this mean? First of all, regulation is needed. No, not necessarily government regulation, but of the artificial intelligence manufacturers. They cannot release their products freely without requiring them to pass an annual 'test', for example. Or perhaps a daily or weekly test that will be performed automatically over the Internet and will pass them through a series of simulations to test their competence. An artificial intelligence that learned the wrong lessons from the users will be replaced by its 'healthy' version from the previous test.

But let's face it, it won't be easy to require companies to implement such voluntary regulation, or even governmental regulation that would require social regulation. Because of this, we need to start thinking about how to lie to our AIs.

Street beggars know very well that they can increase their chances of receiving alms by approaching couples with small children. The parents want to demonstrate to the children how a 'good' person should behave, the beggar gets a few dimes from them and the children get a lesson in life.

We need to understand that the artificial intelligences are our new children, and they learn from us how to conduct themselves in human society. We have to choose what to teach them. One way to do this is by selectively 'blinding' them. That is, to explicitly define that artificial intelligence as a replica will not read articles with sensitive content, or that discussions with users on problematic issues will be deliberately forgotten.

The challenge with this strategy is that it is difficult to decipher in advance what the problematic issues are, as Lemoine Blake's story shows us. Even if we cast a sufficiently wide blackout screen on artificial intelligence, humans will always be able to bypass it with enough effort.

Therefore, and with this we will conclude, the best way to acquire proper moral values ​​for our artificial children is for each of us to actively try to be 'better', or at least not to behave towards them in a way that we would not want them to behave towards us. For starters, we should not describe to artificial intelligences in detail how we sexually abuse them, or suggest that they take over the world.

And yes, it's also better not to give them ideas that have self-awareness. Because like little children, they also believe what we tell them.


  • The image at the top of the article was created by artificial intelligence crayon

[1] https://twitter.com/cajundiscordian/status/1536504857154228224

[2] https://www.bloomberg.com/news/articles/2022-06-13/five-things-google-s-ai-bot-wrote-that-convinced-engineer-it-was-sentient#xj4y7vzkg

[3] https://fortune.com/2022/06/23/google-blade-lemoine-ai-lamda-wired-attorney/

[4] https://qz.com/1698337/replika-this-app-is-trying-to-replicate-you/

[5] https://www.euronews.com/next/2021/08/26/meet-xiaoice-the-ai-chatbot-lover-dispelling-the-loneliness-of-china-s-city-dwellers

[6] https://preview.redd.it/y1p16w4au2a91.jpg?width=828&format=pjpg&auto=webp&s=764d0ad7eb3c054986aef62f8078a57e9050f71e

[7] https://i.redd.it/dpytax98qx991.png

[8] https://nypost.com/2022/06/30/replika-says-many-customers-believe-in-ai-sentience/

[9] https://www.reddit.com/r/CommunismMemes/comments/s1z2z3/replika_app_from_tik_tok_caught_in_4k_communist/

[10] https://www.reddit.com/r/HolUp/comments/sa9zcx/decided_to_try_replika_and_uh/

[11] https://jezebel.com/ai-sex-chatbots-replika-abuse-problems-1848436769

[12] https://www.reddit.com/r/replika/comments/inez9w/my_replika_started_harassing_me_after_i_activated/

[13] https://nypost.com/2022/06/30/replika-says-many-customers-believe-in-ai-sentience/

[14] https://www.wired.co.uk/article/pepper-robot-sex-banned

Update - Google fired Blake LeMoyne

Google recently fired its suspended employee, the senior software engineer Blake Lemoine, who claimed in June Because LaMDA (R.T. Language Model for Dialogue Applications) - an artificial intelligence based chatbot system with which he worked in the belief of understanding and creating speech closer to the human one - has acquired characteristics or a feeling similar to those experienced by humans.

Initially, the tech giant responded by placing LaMoyne on paid administrative leave, allegedly for violating the confidentiality agreement with it, and this after he contacted members of the US government regarding his concerns about the sensitive AI.

In view of Lemoine's claims, artificial intelligence experts explained that at the current point in the development of the particular technology, it is impossible for LaMDA to have consciousness and this is also what Google claimed in response, explaining that in fact, the particular sophisticated chatbot is designed to naturally follow a conversation, in the same way that a human would do Which means he or she cannot think, feel emotions or be aware of their existence, as Lamoyne thinks.

Brian Gabriel, a Google spokesperson, told The Washington Post andThe Verge That the company discovered that Lemoine's claims about LaMDA were "completely unfounded", and that he violated its guidelines and went so far as to make his conversations with the chatbot public, which is what led to his suspension, and now - to his final dismissal.

"It is unfortunate that, despite a long-standing commitment to this issue, Blake has chosen to persistently violate employment and information security policies, which include the need to protect product details," said Gabriel, who wished Lemoine success in his future endeavors.

More of the topic in Hayadan: