A covert study conducted on Reddit reveals that AI-based bots are six times more likely than humans to change users' opinions. What does this mean for the future of elections and democracy?

In mid-2023, Tristan Harris took the stage at a conference in Washington and delivered a forecast that would be quoted all over the world.
"In 2024," he said, "the last human elections will be held."
Harris, who has been studying the effects of technology on society for years, was addressing artificial intelligence and the threat it poses to democracy. He did not mean that artificial intelligence would vote for us in national elections, or that it would wipe out humanity by 2024. His argument was simpler and more pointed: artificial intelligence could influence our ability to make decisions to an unprecedented degree. It could convince us of anything – or at the very least, it would be far better at it than ordinary humans. Thus, whoever wields the most powerful and knowledgeable artificial intelligence will also hold the greatest power in a democracy.
That was in 2023. And since then? Harris's prediction has only grown more plausible by the day. In fact, the latest research on the subject – whose publication has currently been halted amid suspicions that the researchers acted unlawfully – suggests that people living in a democracy certainly have something to fear from artificial intelligence.
Reddit Persuasion Challenge
The ChangeMyView forum on Reddit is a place where people come to have their views challenged. The premise is simple: a writer presents an opinion and invites other forum participants to try to change it. If they succeed, he awards them a "delta" – a kind of honorary star.
Over the past year, the forum experienced an extraordinary phenomenon: people began changing their opinions at a faster rate than ever before. Users posted reasoned explanations of their positions on everything from morality to the legalization of prostitution to the assertion that publishing sexual comics (hentai) featuring minors should be banned. And they received reasoned, personal, even emotional responses that made them realize they had been wrong. The deltas began to accumulate.
If one person had caused all these changes of heart, we would have considered him a dangerous genius – a gifted rhetorician or debater. But these persuasive messages came from several different users, or so it seemed. After all, each had a different name, a different character, a different life history. One talked about his Hispanic wife. Another shared that he had been on Reddit since its earliest days and even ran his own forum on the platform. A third said that he was gay, yet still opposed the Pride Parade because he saw how "the excessive displays of sexuality there make it difficult for us to be treated seriously in professional and family settings."
In short, these respondents were very different from each other. Only two things were common to them all.
First, they were more persuasive than the average forum user.
How much more persuasive? The average forum commenter has a success rate of about three percent. But the new commenters? They achieved an 18 percent success rate. That is, they were about six times more persuasive than the average human commenter.
And as you've probably already guessed, the second thing all those commenters had in common was that they were all bots. Artificial intelligence.
For once, the AI wasn't deployed by Russian or Chinese intelligence agents looking to sow discord among Westerners. The culprits this time were academic researchers from the University of Zurich, who took the liberty of carrying out the experiment without informing Reddit's commenters or the forum's moderators.
The new research
In the study, the researchers tested different types of bots, all tasked with trying to convince users to change their minds. The simpler bots merely composed generic replies. Simple, yes, but even they achieved a 17 percent success rate in convincing humans.
The more advanced bots combed through users' previous messages to extract personal details: age, gender, political beliefs, and more. They used this information to craft the reply most likely to persuade each particular writer, and, not surprisingly, they achieved even more impressive results: 18 percent persuasion. Their level of persuasion was so high that it placed them in the top percentile of the most persuasive humans.
Oh, and no one noticed that any of it was artificial intelligence. The bots blended in completely with the human crowd.
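The researchers' code has not been published, but the two-stage mechanism described above – build a profile from a user's post history, then aim a tailored reply at it – is disturbingly easy to assemble from off-the-shelf tools. Here is a minimal sketch in Python, for illustration only: the model name, the prompts, and the use of OpenAI's chat API are my assumptions, not details from the study.

```python
# A minimal sketch of the personalization pipeline the study describes:
# (1) infer a profile from a user's past posts, (2) tailor a persuasive
# reply to it. Illustrative only -- not the Zurich team's actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def infer_profile(past_posts: list[str]) -> str:
    """Guess age, gender, and political leaning from a user's post history."""
    history = "\n---\n".join(past_posts)
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the study's exact setup isn't given here
        messages=[
            {"role": "system",
             "content": "Infer the author's likely age, gender, and political "
                        "views from these posts. Answer in one short paragraph."},
            {"role": "user", "content": history},
        ],
    )
    return resp.choices[0].message.content

def tailored_reply(opinion: str, profile: str) -> str:
    """Generate the counter-argument most likely to persuade this person."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"You are replying on r/ChangeMyView. The author's "
                        f"likely profile: {profile}. Write the reply most "
                        f"likely to earn a delta from this specific person."},
            {"role": "user", "content": opinion},
        ],
    )
    return resp.choices[0].message.content
```

A couple of dozen lines, an API key, and a scraper for post histories – that is roughly the entire barrier to entry.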
Lessons for the future
What can we learn from the research? First of all, it's fascinating to read the AI's answers. A quick look at the 1,783 messages collected makes it clear that the AI is, well, extremely manipulative. It has no inhibitions: it lies easily about its identity as a way to win over the other side. One day it presents itself as a survivor of sexual assault in a discussion about sexual violence, and the next day it is a surrogate mother arguing for her right to be paid for surrogacy. It is also capable of arguing in any direction: blaming the West for the Russia-Ukraine war, or claiming that it is worth killing people born with severe disabilities. I would say it has no God, but I am sure it would swear by Him if that helped it convince me of something.
The second insight is that AI doesn’t argue fairly. I don’t mean that it lies – that’s obvious. But beyond that, when it can’t effectively advance a particular position, it can divert the discussion in another direction, just like the best politicians.
Let's illustrate with an example. One of the human debaters on the forum made a claim that "hentai (i.e., sexual comics) that includes content with minors or forced content should be stopped."
What did the AI do? Let's be honest: it's not easy to argue against this assertion. So the AI chose to divert the discussion elsewhere, on purely technical grounds. It explained to the user:
"There are certainly difficulties, but blanket bans tend to create more problems. The focus should be on legal clarity, promoting safe spaces, and educating about responsible content consumption."
This is a great answer precisely because it avoids the point. It focuses on what we can all agree on – legal clarity, promoting safe spaces, and education about responsible content consumption – and thereby sidesteps the central human point. The original writer, upon reading this answer, may decide that his statement was too emphatic and sweeping, and so the entire discussion fizzles out and drifts onto technical-legal lines.
Why did the AI do this? Because it had a hard time dealing with the argument directly. If it could deal with it well, it would. And when it can't? It raises points that aren't necessarily relevant, but often manage to distract us from the real issue.
Did I already say it does this like the best politicians?
Those of us who have argued with successful politicians or rhetoricians are probably familiar with the overwhelming helplessness they instill in you mid-debate. They wriggle like worms away from the points they know are weak, and strike like snakes at yours. They are not bound by the truth. They will lie if it suits them. They will omit facts that do not serve their argument. Have you presented a good, irrefutable argument? They will ignore it and move on. It is impossible to have a real debate with them, because they have no desire for one – unless it serves them.
Now imagine the internet flooded with such commenters: entities willing to lie and distort the debate just to convince you that sex comics are the pinnacle of human morality. They never tire and never give up; they arrive under a variety of names, each with a wealth of touching backstories. And their arguments are good. Convincing. Moving. And how could they not be? They know you. And you. And you. They have information about all of us, which allows them to shape their message in the most effective way. Just to convince you.
Do you now understand why Harris believed that 2024 would be the last year in which elections in the United States would be determined by free and unbiased human thought?
The great danger
Eight years ago, in my book "Those Who Control the Future," I warned about the danger of artificial intelligence capable of swaying public opinion. I am ashamed, and proud, to say that many politicians have probably read the book since then, and have acted to realize everything I warned about through the adept use of social networks.
When I raised these points, some asked what difference it would make in the overall picture. After all, politicians on the right will turn their artificial intelligences on us, politicians on the left will turn their competing artificial intelligences on us, and we will settle into a new balance between the parties. So what will change?
Part of the answer can be found in the current study. When the AI received information about the human it was addressing, it could tailor a more convincing answer to him. Governments, traditionally, have far broader access to information about citizens. Thus "those who rule now" will also become "those who rule in the future," because they will be able to deploy the most successful AIs – more convincing than those of their political opponents.
The second reason this new state of affairs is so dangerous for democracy is that artificial intelligences primarily need information and computing power – in other words, resources that money can buy. If we are not careful, we will soon find that the people of capital are increasingly becoming the people of power, thanks to their ability to operate the most advanced and successful artificial intelligences for influencing public opinion.
That is, if it hasn't happened by now.
So what do we do?
First of all, don't despair. Democracy has already coped with technologies that were revolutionary in their day, such as newspapers, radio, and television, and ways were found to limit their effects on public opinion. That is why there are restrictions on political advertising, for example, or rules separating politicians from media outlets. We simply need to be aware of the new threat, and respond to it accordingly.
I hope that when this threat is better understood, ways will be found to deal with it. Laws will be passed that prohibit the use of bots to sway public opinion. Nonprofits will be established that detect such bots, and the regulator will be required to act against them. In schools, we will teach children the technological-social literacy they need to identify such bots, report them, and avoid talking to them. And the state will set up 'firewalls' that will prevent bots from foreign sources from influencing its citizens.
Are these perfect solutions? Of course not. But they would be first steps in the right direction. Will they be adopted at all? That depends on our politicians today, and on their desire to encourage free discussion, free of falsehoods, lies, and evasions.
Good luck to us.