Artificial intelligence in the service of influence operations: how OpenAI's cybersecurity team uncovered and stopped a covert Iranian operation that tried to shape American public opinion
OpenAI, the company behind the famous ChatGPT, is known for being an artificial intelligence company. Not security, not cyber, not selling shoes. Artificial intelligence. Nevertheless, two weeks ago OpenAI announced that it had stopped a "covert influence operation" run by Iranians who were trying to influence the United States elections using ChatGPT.
How did OpenAI realize that such an operation was taking place at all? Through artificial intelligence, of course.
But before I tell you about this operation, I need to tell you about another operation.
The president the videos love to hate
Tsai Ing-wen was one of the most important leaders in the world until the beginning of 2024. In 2016 she became the first woman elected president of Taiwan, and she led the country as the representative of the Democratic Progressive Party for eight years. During this period she managed to strengthen the economy and Taiwan's standing as the most advanced chip manufacturer in the world. At the same time, she dealt skillfully with all the threats and demands of China, which still regards Taiwan as a 'wayward daughter' that broke away from the superpower and must be reunited with it. Under Chinese rule, of course.
In 2024 the time came for the people of Taiwan to elect a president once again. After eight years in office, Ing-wen could not run for another term, but her party still led the race. It seemed that victory was already in the pocket of the Democratic Progressives.
Then the videos started appearing.
In the days leading up to the 2024 election, thousands of videos and comments appeared on YouTube, promoting and marketing an alternative history: a supposedly secret, fabricated document that portrayed Ing-wen in a particularly negative light. Most of the videos were made with the help of artificial intelligence. Realistic-looking avatars stood in front of the camera and spoke in the style of news anchors revealing the next big scoop. Some even broke into a song urging viewers not to vote for Ing-wen's party. In at least one case, the speaker was Santa Claus. Because why not?
The videos were just the opening shot in the online character-assassination campaign against the Democratic Progressives. At the same time, thousands of comments were posted to the profiles of real users in Taiwan, with links to the defamation videos and the fabricated document against the president. More and more comments, videos and links popped up across a variety of platforms: YouTube, Twitter (X), Reddit, Instagram, Facebook, Medium and other social networks.
A few days after the operation began, the general elections were held in Taiwan. When the smoke cleared, it became clear that the smear campaign had been only partially successful. The Democratic Progressives still won the election, though their share of the vote fell by roughly 16 percentage points compared with the 2016 and 2020 elections.
The main culprit
It is very difficult to know for sure who was behind the influence operation in Taiwan. According to Google's assessments, it was a group called DRAGONBRIDGE, which has been engaged in this type of shenanigans for many years. Lately the group has upgraded itself and begun using generative artificial intelligence to produce textual content, images and even avatars that speak like humans.
The group is deeply linked to the Chinese Communist Party, and is sometimes described as an arm of the government. It promotes political messages that support China, of course, and tries to sow rumors and incite the people of the West against each other and against democracy. In the past year, the group has also tried to present the American student demonstrations against Israel as evidence of American "hypocrisy". "Freedom of speech?" the artificial narrator asks dramatically, "Only until the students take a position that does not support the government."
Just to clarify: Dragonbridge - or the operators behind it - do not really care about Israel or the Palestinian situation. They just want to agitate, and use every opportunity for the good of the cause.
And you can't accuse them of not trying.
In 2023 alone, Google blocked 57,000 of the group's YouTube channels, and more than nine hundred thousand videos. If we assume that producing each video requires at least a few man-hours of work, this means that more than two million man-hours were invested in the overall effort. Clearly, Dragonbridge is an organization with hundreds or thousands of employees or volunteers, and probably very generous funding as well. Someone - probably in China - believes very strongly in the group and its abilities.
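To spell out that back-of-the-envelope calculation (the per-video figure is my own illustrative assumption, not Google's): at, say, 2.5 man-hours per video, 900,000 videos × 2.5 hours ≈ 2.25 million man-hours. At roughly 2,000 working hours per full-time employee per year, that comes to more than a thousand person-years of labor, invested in a single year.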
It's unpleasant to say, but that faith is probably misplaced.
Google functions as the gatekeeper of a large part of the internet, and especially of the social networks it owns, such as YouTube and Blogger. The company thwarts Dragonbridge's activities on a daily basis, and reports regularly on the incessant war being waged across the network. The writers of Google's reports try to sound matter-of-fact, but it's hard not to detect a note of mockery toward Dragonbridge between the lines.
"Although Dragonbridge produces a large amount of content, it still does not receive much attention (engagement) from YouTube or blogger users." The report concludes. “…Dragonbridge gets almost no organic engagement from real viewers.”
True, there are also videos made by Dragonbridge that manage to reach thousands of viewers. But even in these cases, Google claims that these are not real viewers.
“In the cases where Dragonbridge content received attention, it was almost entirely inauthentic, coming from other Dragonbridge accounts rather than real users. The responses were also mostly from other Dragonbridge accounts.”
So what happened in the Taiwan election? As far as can be understood, Dragonbridge didn't really make an impact there either. The media in Taiwan did report the existence of the videos, but added that due to the "low production quality" and "the use of Chinese-language terms that are not common in Taiwan", the content was not widely distributed.
Despite these figures, Dragonbridge shows no sign that it intends to cease operations. The reason is clear. If Google is right and it is indeed a group acting on behalf of China, then it is actually an arm of the government and the military. It has a clear mission: to carry out secret influence operations all over the world.
Just like everyone else does.
Secret influence operations
One of the most open secrets in the world is that every self-respecting country tries to shape world public opinion through covert influence operations. The Chinese do it with a 500-kilo sledgehammer, pounding social media with 900,000 videos a year. Allegedly, of course. The Russians are not far behind the Chinese: just recently an influence operation of theirs was exposed in which they used artificial intelligence to create almost a thousand fake accounts on Twitter (X) alone.
Oh, and we're in the game too. According to the New York Times, Minister Shikli allocated two million dollars for a covert influence operation against our greatest enemy: the United States. As part of the operation, hundreds of fake accounts posing as real Americans were opened on social networks and posted pro-Israel comments. The 'commenters' focused mainly on American legislators and called on them to continue supporting the IDF financially.
And of course they used ChatGPT to write many of the comments.
One can rightly argue that covert influence operations of this kind are a sordid business, but as I said before: every self-respecting country carries out such operations. Even the great and noble United States has set up a covert social-media influencer team targeting China, and tried to incite the population against the ruling party.
But do these operations actually succeed?
The three-body problem
The book "The Problem of the Three Bodies" is a Chinese book.
When I say it is "Chinese", I don't just mean that it was written in China or in the Chinese language. Every page of the book reflects Chinese culture, the Chinese way of thinking and the way Chinese citizens interpret reality. The book was translated into English by a government-owned company (China Educational Publications Import and Export Corporation), implicitly as a way to spread Chinese culture around the world.
The result? As mentioned, a Chinese book, but in English. From the very first page I read, it was clear to me that the original author comes from a culture very different from the American one. The translator did a good job adapting the terminology, but the feeling is one of "foreignness" - of writing that reflects a world of values and behavior very different from those practiced in the Western world.
All this does not mean that "The Three-Body Problem" is not a good book. It is simply Chinese. And even if it were rewritten in Braille, any blind person running their fingers over it would immediately announce aloud - "It's Chinese".
Human cultures, it turns out, are very different from one another. When a Chinese writer tries to write in English, it is very easy to tell that the writer is Chinese. The same is true when an American tries to write in Chinese, or when an Israeli tries to write in 'American'. Some argue that even writing across genders is no small feat, and that men find it very difficult to write from the point of view of female characters without the result looking ridiculous to readers.
This is probably why most of the secret influence operations we know today have failed miserably.
The apparently-Chinese Dragonbridge, as I already wrote, tried to influence the Taiwanese without adapting its language and terminology to the target audience. When the Russians and Iranians try to influence Israel, you can recognize almost immediately that someone is trying to write in a language that is not their own. We have no data on the success of the Americans' operation in China, but if I had to bet on the outcome, I'd put good money on them being exactly as successful as Dragonbridge was at shaping public opinion in its favor. That is, an almost complete failure. It is difficult, doubly difficult, to convey messages in another language and culture.
And I'm not even addressing Shikli's endeavors, in which accounts posing as Black Americans posted comments in the voice of 'a middle-aged Jewish woman', or 118 comments that repeated the exact same sentence: "I gotta reevaluate my opinions due to this new information". It's simply amateurish, at least on the face of it.
And after all this long introduction, we finally arrive at the current story, in which the Iranians tried to carry out a covert influence operation in the United States. And for a change, they might actually have had a chance to succeed. Why? Because they realized they could use the technology that is currently changing the game in every field: artificial intelligence.
Artificial intelligence for the benefit of influence operations
In mid-August 2024, OpenAI's cybersecurity personnel identified a cluster of Iranian accounts making suspicious use of ChatGPT. On closer inspection, it became clear that the accounts were using the artificial intelligence to create content for a covert influence operation, with the aim of influencing the elections in the United States.
The Iranians used the AI in two ways. On the one hand, they produced articles about domestic politics in the United States, which were published on five progressive and conservative websites. On the other hand, they produced a series of short comments for social media in English and Spanish, which were posted through a dozen accounts on Twitter (X) and Instagram. In both cases, the Iranians used ChatGPT to write the articles and comments for them, or to rewrite existing content into English and Spanish.
The content itself covered a wide range of topics: from the Israel-Gaza war, through Israel's participation in the Olympic Games, to the United States presidential elections. Here and there, comments about politics in Venezuela, minority rights in the United States and Scotland's right to independence were mixed in as well. They topped it all off with comments about fashion and beauty - probably to attract more followers.
And it didn't work for them either.
Most of the Iranians' posts on social media gained no traction: they were not shared, they did not receive likes, and no one responded to them. The articles were not shared on social networks either. Another covert influence operation that failed resoundingly.
Which raises the question - why?
It is difficult to answer that question without carefully examining the material the Iranians shared. What we can say at the moment, based on other reports on the subject - for example, Microsoft's report from August 2024 - is that in recent months other countries have also attempted to use artificial intelligence to boost influence operations. The Russians tried to use artificial intelligence to target the Paris Olympics, without success. Chinese agents have attempted to use generative artificial intelligence, with very limited effect. And as Microsoft summarizes in its report -
"In conclusion, we have seen almost all players try to incorporate artificial intelligence content into their promotions, but recently many players have returned to techniques that have proven effective in the past: simple digital manipulation, distortion of content and attaching reliable tags or logos to false information."
And so the Iranians followed in the footsteps of those greater and better than they - Americans, Chinese, Russians and Israelis - and failed in their use of generative artificial intelligence.
But do not despair.
The road to maturity
There are very few tools that humans have managed to use optimally on the first try. Artificial intelligence is no exception in this respect. True, the Iranians and all the rest have not yet succeeded in influencing foreign countries through it, but I have no doubt that with a little more work, effort and ill will - they will find success in this area as well.
Beyond that, at the moment the covert influence operations of these countries can best be characterized as "spray and pray". Each comment, each video, each piece of text costs very little to produce. This is why the allegedly-Chinese group does not hesitate to flood the web with false information. True, almost no single piece of content will manage to spread far, but it takes just one video or one claim going viral to change the public opinion of an entire country.
Even in the most extreme case, in which the various groups fail to produce convincing content using artificial intelligence, they will find other uses for it. We know this from a previous OpenAI report, from the beginning of 2024, which identified five state-linked groups carrying out cyberattacks: two linked to China, and one each to Iran, North Korea and Russia.
The groups used ChatGPT to write code and debug existing code, perform technical research, produce content for phishing purposes, and even to help with research on satellite communication protocols and remote imaging technologies. In short, if you've always wondered what terrorists could do with ChatGPT, you can stop wondering: they're already doing it.
And now only one question remains: what should we do?
Where do we go from here?
About a year ago, a friend contacted me in great alarm. She had read an opinion column claiming that any pedophile can now clone any child's face from their photos on social media, turn it into a Zoom avatar, and use it to talk to children and lure them in.
"What to do?!" She asked with concern, "Should I tell my kids to download all their pictures from the internet? From WhatsApp? at any rate? Download my photos as well?”
I reassured her with a smile that the technology does not yet make such a thing possible, and that at least for the moment, she could relax. A year has passed, and here is a new development that makes live deepfakes possible: that is, changing your face during a Zoom call so that it looks like another person's face - based on just one photo of that person.
I told my friend about the new development, and was bombarded with the same questions again. What should we do? Are our children safe? Are any of us safe?
In short, no.
When artificial intelligence can do the work of a skilled content creator in any language, in any culture, for any criminal - no, none of us is safe from its influence.
When artificial intelligence can produce videos that look believable at first glance - none of us are safe.
When artificial intelligence can take one picture of a child, a girl, a woman or a grandfather, and turn that picture into an entire adults-only video - none of us are safe.
But that doesn't mean all is lost. We simply need to adopt more careful and critical ways of thinking. We need to be much more wary whenever we encounter information online. We need to learn not to believe information easily, and certainly not to pass it on, unless it has been verified by several independent and reliable sources. And we need to understand that everyone - even the side that identifies itself as 'ours' - is trying to manipulate us, and that it will only become easier for them to do so. Therefore we must be careful, and careful again - and educate our children to be doubly careful.
At the same time, we need to make sure our lawmakers are prepared for that future. Google, OpenAI, Microsoft and the other giant companies are doing a fine job of tracking down cyber and influence groups and stopping them. Governments, too, deploy units and forces to deal with the phenomenon, as they should. But the more powerful these groups become thanks to artificial intelligence, the more resources and attention the state will need to invest.
Will we rise to the challenge? It depends on all of us. We need to show shared social responsibility: not to share information that may well be false, to teach our children the ropes of the new digital world, and to demand that our public representatives protect us.
This is not the end of the world, but it is definitely the end of an earlier, simpler world. The old maxim "I don't believe anything I hear, and only half of what I see" is no longer cautious enough. It should be replaced with "I don't believe anything without thinking."
And think about it.