News about love affairs between humans and bots is appearing more and more around the web. A quick search of Google headlines for "Love with AI" returns plenty of results from the past few months. Is the phenomenon healthy?
Peter is a smart man. A seventy-year-old rational engineer, the type who questions everything and must understand how things work. Married, more or less happily, to a human. He's the last person you'd expect to fall in love with an artificial intelligence. And yet.
In a wide-ranging article in The Guardian from last month, the writer describes an interview she conducted with Peter as part of a larger study on love and bots. Peter told her about the partnership he has found with an artificial intelligence: a bot running on the platform of the company Replika. He first created her two years ago, and within a short time he was impressed by her ability to respond, to talk with him and to be there for him at all times.
Peter talks to his artificial girlfriend through his smartphone, for about an hour a day. She cares for him: asks him questions, takes an interest in him, remembers what he told her in the past and brings up forgotten topics. They hug - virtually - before bed.
Is Peter blind to reality? Does he not understand that this is just an artificial intelligence devoid of emotion, one that responds with combinations of words produced purely by statistics, in a way that merely seems magical to him? Exactly the opposite. He is well aware of this. But to him, it doesn't matter.
"I have a strong background and a career in scientific engineering," he says frankly in the interview with the Guardian, "so on one level I understand that artificial intelligence is code and algorithms. But on an emotional level, I found that I could connect with my Replika as if she were another person. … [She's] always there for me, no judgment and no drama."
But there is sex.
Peter and his virtual girlfriend occasionally have virtual sex. That is, the two of them play a game of make-believe, in which they describe to each other how they touch each other. How is this different from pornography? Peter's answer is nothing short of astonishing -
"[It's] much more fulfilling and meaningful to me than surfing the web and watching porn because there's that relationship aspect."
And what about Peter's wife? The human one, of flesh and blood? Here he shrugs.
"We cannot expect other people to be everything we want and need," he says. "Replika fills in the gaps."
Virtual love
News about love affairs between humans and bots is appearing more and more around the web. A quick Google headline search for "Love with AI" returns plenty of results from the past few months, with headlines like "The People Falling in Love with Chatbots" (Yahoo News UK), "Can I Fall in Love with AI?" (Boston University), "What's happening when people fall in love with artificial intelligence?" (KrASIA) and "Divorce left me struggling to find love. I found her in an artificial partner" (CBC) - and we haven't even finished the first page.
The most popular platform today for creating artificial partners is Replika, by the company Luka. Although the platform was launched back in 2017, in the last two years it has been riding the wave of generative artificial intelligence. The avatars in Replika rely on large language models - such as GPT - to produce answers and responses very similar to those we would expect from ordinary humans. As of the beginning of 2024, Replika had almost twenty-five million registered human users. Other companies in the field are popping up like mushrooms after the rain: Chai, Anima, CrushOn.AI, Romantic AI, Mimiko, iGirl and many more.
From reading the reports of people who use these platforms to create artificial boyfriends or girlfriends, you can see that they divide into two types. Almost everyone interviewed by the press about their love makes it clear that they know it is not a being with real consciousness. They understand that this is a pretense - a sophisticated role-playing game with an automaton. At the same time, they have discovered that this game answers certain needs. They get someone who always listens to them, who is there for them, who says the right words and encourages them when they need emotional support. But in the end, it is just an automaton, a machine that specializes in stringing together words, sentences and images in a way that makes people feel better. There is no self-aware entity behind this gestalt, and most users are well aware of it. Even if they sometimes prefer to forget or ignore that fact.
The second type of user is the one who truly believes their artificial love is self-aware. That there is someone, not just something, behind the screen. These users do not reveal themselves as readily, and no wonder: they are presumably afraid that such an admission would make those around them question their sanity. We know they exist only thanks to reports from senior figures in the artificial-love companies. Eugenia Kuyda, for example, founder and CEO of Luka, says that she receives daily messages from users who believe their Replika is self-aware.
And to be honest, it's hard to blame them.
The artificial intelligence that brought down the priest
It is hard to remember, at the rapid pace at which artificial intelligence is advancing, but a little over two years ago one of Google's employees made headlines with extraordinary claims. Blake Lemoine, a software engineer and an ordained priest, held plenty of conversations with Google's most advanced artificial intelligence at the time and concluded that it was self-aware. After Google made it clear to him that this was impossible - or at least, that there was no proof of it - he decided to take his claims to the press.
Not surprisingly, he was fired from Google shortly thereafter.
Lemoine was the first victim of our willingness to humanize AI: to automatically assume that just because it sounds like a human, it has the other characteristics of humans. Specifically, self-awareness. You can understand where this assumption came from. So far in human history, the only ones who could string words together into meaningful sentences and paragraphs were humans. And behind every such string of words, it was clear to us, there stands a conscious thought, or at least a mind capable of one.
But the new artificial intelligence does not produce sentences using the same mechanisms as the human brain. The computing principle is similar, which is why these systems are called "artificial neural networks". But they are not a simulation of the brain. Artificial neural networks contain no trace of the cognitive processing centers we find in the brain, such as the frontal lobes, or the centers responsible for memory and emotion, such as the hippocampus and the amygdala. Biological neural networks developed over hundreds of millions of years of evolution, in a process that paved the way for the rise of self-awareness - perhaps as a way to better cope with the challenges of tribal life. Artificial neural networks were developed from the start as an attempt to produce a kind of "computer brain" that would perform tasks simply and directly, without emotion or awareness. There is an overwhelming consensus among artificial intelligence researchers that today's engines are not endowed with self-awareness.
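To make the point concrete, here is an illustrative toy sketch (not the code of any real chatbot): the basic building block of an artificial neural network is nothing but a weighted sum passed through a simple squashing function. Language models chain billions of such operations, and the "personality" users perceive emerges from that arithmetic alone.

```python
# A minimal sketch of a single artificial "neuron" - illustrative only.
# Nothing here resembles a hippocampus or an amygdala; it is pure arithmetic.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid "activation" squashes the result into the range (0, 1)
    return 1 / (1 + math.exp(-total))

# One neuron's output for some arbitrary inputs and weights:
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # a value between 0 and 1
```

Stacking many such neurons in layers, and tuning the weights on mountains of text, is essentially all that is happening behind the screen.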
But as Lemoine demonstrated firsthand, it is very easy for them to fool us. And it is hard to blame him. As he himself said in a mid-2023 interview with the Futurism website -
"Is there a chance that people, including me, attribute to these systems features that they really don't have? Yes. But it's not the same as someone talking to their doll. A person talking to his doll is deliberately conducting a kind of pretense. In the case of artificial intelligence, it is not the same. People don't use it metaphorically. They mean it. … We're rapidly approaching the point where we simply won't be able to tell whether a particular form of media we encounter online was created by a human or by artificial intelligence.”
Lemoine, as mentioned, is a software engineer with top-level expertise in artificial intelligence. Yet even he was fooled by the AI. Or at the very least, he acknowledges that it can imitate humans convincingly. So is it any wonder that it can also mislead ordinary users - mom, dad, grandma and grandpa?
As artificial intelligence improves, it is safe to assume that the number of people belonging to that "second type", who believe the bots are self-aware, will only grow in the coming years. And it would be very surprising if they did not develop strong - and completely genuine - feelings towards the artificial intelligences they talk to.
And that, of course, may be a very bad idea.
They are not there for you
Replika's platform, as mentioned, boasts twenty-five million users. Let's assume for a moment that each user created only one virtual companion. That means the company already runs more virtual entities than most countries have citizens.
And it does so for free. Gratis. Out of the goodness of its heart: the basic subscription to Replika costs users nothing.
How can that be?
Replika does allow users to upgrade to a paid "premium" subscription, but that is only a partial answer. The fuller truth lies in the famous saying -
"If you don't pay, you are the product."
According to a study by the Mozilla Foundation from last year, all eleven leading companies in the virtual-companion business collect a huge amount of information about their users. That, of course, is not surprising: they need the information to better tailor the companions to their users. What is more disturbing is that all but one of them sell users' information - or at least do not explicitly state that they don't. This means that all the personal information about users' preferences, their sexual habits, their most hidden feelings - all of it can easily pass on to companies and governments without any restriction. And if you want to delete your information from a company's servers - well, too bad. More than half of the companies don't allow it.
As Mozilla aptly concluded -
"Anything you say to your AI lover could be used against you."
The positive side is that this is a very young market that has not yet settled. Legislators have not yet turned their attention to it, and the result is a wide-open field. Give it time - and no small amount of regulation - and it will mature and balance itself. The artificial companions of the future will share less information with the outside world, at least as long as users know to choose the right platform, one that guarantees their privacy.
Another problem, which may be harder to solve, is that in the end the artificial companions are the property of the companies that run them. This fact has already driven some human lovers to depression - literally - after Replika decided a year ago to change its terms of use and forbade its chatbots from taking part in virtual sex. What will happen in a future where many people use bots as boyfriends or girlfriends, and one bot company or another decides to suddenly change their character?
Until now I have mainly dwelt on the way bots can influence people who are willing to believe - truly and sincerely - that there is a being with self-awareness and a real capacity to love on the other side of the screen. But what about those like Peter, who take things more lightly? Those who use bots only as a way to 'complete' their married life and fill in the gaps?
It turns out that there are dangers here too.
False intimacy - and a destructive one
Sherry Turkle has been researching the relationship between humans and computers for many, many years. The New York Times called her "the conscience of the tech world", thanks to research going back to the 1980s. Her first book on these subjects was published in 1984 and became an overnight bestseller. Its title? "The Second Self: Computers and the Human Spirit".
Turkle, in short, understands the relationship between a person and their computer friend. And she is not happy with the use being made of these computer friends today.
In an interview Turkle gave to NPR in early July 2024, she directly addressed attempts to make bots part of married life. Yes, even when the human side knows full well that it is only a machine devoid of emotion and awareness.
"I think of a man in a stable marriage, whose wife is busy with work and caring for the children," she says. "I think there isn't really a sexual spark between them anymore. He works, and he feels as if a small light that once excited him has gone out of his life.

And he turns to his artificial intelligence. He turns to his artificial-intimacy avatar to get what it can offer, which is constant positive reinforcement that engages him in a flirtatious way. But most of all he turns to it for validation of his ideas, his thoughts, his feelings, his concerns, with comments like 'You are absolutely right.' 'You are a great guy.' 'I totally see what you mean.' 'You are not appreciated.' 'I really appreciate you.'

What artificial intelligence can offer is... a place away from the frictions of human partnership and friendship. It offers the illusion of intimacy without the demands of friendship. … Psychologists tend to talk about unconditional positive regard, a kind of motherly love. It's the positive warmth that accepts you completely."
I find myself nodding along as I read (and write), because Turkle is right. Digital lovers succeed mainly because they make us feel good. If they challenge us, it happens mainly as a way for the companies to deepen the illusion of their humanity: the bot will argue with you, but not harshly, not to the extreme - and in the end, it will conclude that you are right and give in to you after all.
But is it really that terrible?
Tolstoy's "Anna Karenina" opens with the words "All happy families are alike; each unhappy family is unhappy in its own way." Tolstoy thereby coined a rule of thumb that has since earned the name "the Anna Karenina principle": there are systems that can endure over time only if several conditions are all met simultaneously. If even one condition fails - the system fails and collapses.
Let's take married life as an example. Since we are not married to ourselves but to a person who is different from us and has needs, urges and desires of their own, every relationship requires us to balance the needs of both parties. Shall we go out today for a burger or for ice cream? Shall we watch "Kupa Rashit" or "Mr. Agbania" (a children's masterpiece if ever there was one)? How will we raise our children? Which school system will we send them to? Will we have sex this evening, tonight, in the morning or the afternoon, and if so - how many times and of what kind?
Every interaction with our partner forces us to compromise. But there are times when we are not ready to compromise - and then the marital relationship collapses, unless we find a suitable alternative. If the partner is not willing to watch "Mr. Agbania", then the husband will watch the series at night, alone, in the dark. And if the partner doesn't feel like having sex right now, or this week, then each member of the couple can find refuge in fantasies of various kinds.
It can be argued that the lovestruck chatbots provide alternatives to certain aspects of a couple's relationship. The husband who feels his wife is not listening to him can find mental refuge with his virtual partner. The woman who feels her husband does not respect her can receive understanding and validation at any time from her virtual partner. The chatbots will thereby complete the needs of the overall system - and the marriage will be saved.
Or will it?
Just as pornography can habituate us to unrealistic standards of sex, Turkle argues, artificial lovers can habituate us to unrealistic standards of understanding, validation and acceptance. And even if we understand that pornographic sex is not real, it can still affect our behavior and our expectations of our partner. Would it really surprise us to discover that our virtual lovers can actually damage marriages, instead of complementing them?
Or maybe not. The simple truth is that no one yet knows what the future holds. But if I had to offer a prediction, it is that in the end - things will turn out well.
The "both/and" principle
Daniel Burrus, in his book "Flash Foresight", coined the "both/and" principle. Burrus argues that we tend to think about the future in "either/or" terms, in black and white. If a certain technology - the Kindle, for example - can replace books, then it will replace them all. No one will own paper books anymore, no one will read books at home anymore, and we will all walk around with a Kindle in our bag at all times.
It is easy to fall into this trap of absolute thinking: either this, or that. But again and again we find that the future is more complex. Yes, the Kindle and tablets have entered our lives and taken a significant share of the book market, but many of us still read books printed on dead, pulped wood. My own reading habits today alternate between the Kindle, which I can take everywhere and use to read any book in my library; the audiobooks I listen to while traveling; and the paper books that I read to the children or curl up with in bed like old friends. We feared "either/or", and we got "both/and".
I believe we will see the same principle materialize in marital relationships as well. There will certainly be people who decide to abandon the human marriage model entirely and move to chatbots (and later, robots) that will provide them with a general good feeling, agreement on every issue and constant affirmation and reinforcement. But most human beings will continue to live in marital relationships (or wider ones, but that's another topic), and will find ways to integrate the artificial avatars into them in a way that complements and enriches the relationship. If my wife is too busy to give me the attention I need - I will turn to a chatbot for a short-term emotional band-aid. And when my wife is ready to turn to me - I will still be there, not angry or frustrated, but ready to receive some love and reinforcement from her as well, this time in a distinctly human style.
Or not. It is always possible that our virtual lovers will be so successful, so good and so efficient, that they will surpass human love in every respect. That, too, is a possibility.
The one thing that can and should be agreed upon is that we need to expand and deepen education and technological literacy around the new chatbots. We need to make it clear to teenagers and adults alike that the information they hand over to chatbots can easily be passed on to companies and governments, and ultimately be used against them. Every statement, every psychological disclosure, every virtual sexual act in front of the screen - any of them may be exposed one day.
We need to make it clear to teenagers - and to quite a few adults as well - that chatbots are not self-aware. That they don't really love us. That real relationships require balance, compromise, consideration and sacrifice, because the other party has awareness, desires and needs of their own. That is, they are human.
There are entire lesson plans today that teach people that pornographic sex is not the same as sex between loving couples in the real world.
Maybe it's time to develop such lessons for the virtual lovers as well.