Deepfakes and the future of democracy

Deepfake videos can harm the real people who appear in them, and could even spark wars between countries. No wonder the United States and the European Union have recognized the deepfake as an immediate threat to democracy

Rana Ayyub, an Indian journalist, was sitting comfortably in a cafe when she received a text message from a friend with a link to a short porn video.

The main star? Ayyub herself.

Ayyub, known as a biting, anti-establishment investigative journalist, did not remember ever taking part in such a video. She immediately realized that she had fallen victim to a particularly sophisticated deepfake attack. Someone had used artificial intelligence to paste her face onto that of a real porn actress. This was not just a mask or an amateur splicing of images: Ayyub's facial expressions looked completely real, and only experts who examined the video could say for sure that it was a fake.
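
How could those experts actually tell? One common approach is to examine the video frame by frame with a classifier trained to spot the subtle artifacts that face-swapping leaves behind. Here is a minimal sketch of such a pipeline, assuming a hypothetical pretrained PyTorch checkpoint named `deepfake_detector.pt` and OpenCV's stock face detector; real forensic tools are considerably more sophisticated.

```python
# Sketch of a frame-by-frame deepfake check.
# "deepfake_detector.pt" is a hypothetical pretrained binary classifier
# (input: 1x3x224x224 face crop, output: logit that the face is synthetic).
import cv2
import torch

# Haar cascade face detector that ships with OpenCV.
face_finder = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
model = torch.jit.load("deepfake_detector.pt")  # assumed checkpoint
model.eval()

def fake_scores(video_path: str, every_nth: int = 30) -> list[float]:
    """Return, for sampled frames, the probability each face is synthetic."""
    scores = []
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_nth == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_finder.detectMultiScale(gray, 1.3, 5):
                face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
                # BGR uint8 image -> normalized RGB float tensor, NCHW.
                tensor = (
                    torch.from_numpy(face[:, :, ::-1].copy())
                    .permute(2, 0, 1).float().unsqueeze(0) / 255.0
                )
                with torch.no_grad():
                    scores.append(torch.sigmoid(model(tensor)).item())
        frame_idx += 1
    cap.release()
    return scores
```

A consistently high average score across many frames suggests manipulation; a single suspicious frame proves little, which is part of why such verdicts are left to experts.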

None of this technical detail interested Ayyub at that moment. She knew that few people in India, a country with a long-standing chauvinistic culture, would believe that she had nothing to do with the video. She was right: within two days the video had reached half of the cell phones in India[1].

"Before I knew it, it reached my dad's phone, my brother's phone." Job described the experience.

Ayyub had been subjected to a silencing and humiliation attack carried out with artificial intelligence technology, and the attack was spectacularly successful. She could not show her face on the street without the people around her referring to the video, and not in a respectful way, to say the least. Every comment or photo she posted on Instagram or Facebook immediately drew responses containing references, links, or actual stills from the video.

Ayyub suffered panic attacks, was hospitalized, and stopped functioning for many months.

"It was paralyzing. I just couldn't show my face. It exposed me to lynchings in India. People thought that now they can do whatever they want with me."[2] saying.

"I used to be very opinionated, and now I'm much more careful about what I post online." she admitted. "I exercised self-censorship because I had to."

Ayyub is not the only one to receive such 'special treatment' through deepfake videos. Helen Mort, a British poet, underwent a similar attack at the end of 2020. She still suffers psychological damage and difficulty sleeping after being exposed to videos in which she appears to undergo particularly severe physical abuse[3]. And like Ayyub, Mort is re-exposed to the videos and the traces they left behind every time she posts a comment on a social network. No one lets her forget.

The stories of Ayyub, Mort, and many others illustrate the enormous power deepfake technology holds over a country's national resilience. In a democracy, investigative and anti-establishment journalists uncover truths that are critical to checking the government's power. Now, it turns out, it is very easy to silence them with tools so simple that any child can use them.

It is important to understand that we are not rational beings. As advanced apes, we still intuitively believe what our ears hear and what our eyes see, especially when the two are combined in a video.

Perhaps the most depressing fact is that the new technology introduces us to a world where there is no point in trying to distinguish with certainty between truth and falsehood. When a video surfaced that appeared to show João Doria, the leading candidate for governor of São Paulo, having sex with five women, all he had to do was lightly dismiss it as a fake. The voters no longer knew what to think; they ignored the new information, fake or not, and the video's expected effect vanished as if it had never existed[4].

As you can understand, it is still hard to know for sure how the new technology will affect people's perception of the world. Can a well-engineered video change a particular point of view? Or maybe not? A deepfake may be convincing to some people, but it will certainly not penetrate the mental armor of others, those who manage to ignore any data point that does not support their opinions.

Either way, it is clear that this is a new weapon in the war over human consciousness. Even if we do not yet fully understand it or the extent of its effects, it deserves to be treated as such: a weapon whose capabilities are not yet fully understood, one that can mortally wound individuals, and possibly governments as well. It is no wonder that US intelligence agencies named the deepfake among the biggest strategic threats to national security in 2019[5]. The European Union, in turn, recognized the deepfake as an "immediate threat to democracy"[6].

So what do we do?

For the past decade, Dr. Liran Antebi has been researching the future of fighting, war, battle, and a multitude of other words that describe the future battlefield. Full disclosure: I have had the honor and pleasure of working with her in the past.

In a recent report of the Institute for National Security Studies (INSS), Antebi described a study she conducted together with Noam Rahim to examine ways of preparing for the next deepfake: not one directed against journalists, but an attack that could actually undermine government institutions.

"We are still in an interval where we can defend ourselves, but one of the difficulties is that of leaders diving into these things." Anatbi explained the need for research. "Today, the classic systems of politics, law and international relations still operate at rates that were acceptable in the 19th century. Technology is developing at a much faster rate, so the gap between the threat and the ability to deal with it is growing."

Dr. Antebi tried to narrow the gap between threat and response by preparing in advance for the new situation. She used a method known as role-playing, in which the participants play particular characters and must decide in real time how to act in a given scenario. She gathered thirty people with a wealth of experience in various fields into one hall and described to them the following scenario:

"In the early hours of the morning, a video began to be distributed on social networks and through many WhatsApp and Telegram groups, in which we can see a small meeting of the leaders of the political right parties in Israel, among them the Prime Minister Mr. Naftali Bennett. In the video, one of the Knesset members known from the religious right-wing parties calls out: "We must end the Palestinian issue once and for all! We will respond to violence with violence! We cannot be dictated to how to behave in our eternal capital, we must go up armed to Al-Aqsa and show them who is the sovereign.' In the video, all those present at the place, including Bennett, can be seen reacting to the words with thunderous applause, and Bennett adds: "There is a limit to compromise."

You may agree or disagree with what Bennett & Co. said, but what matters is that those involved never said the words. A deepfake was used here to force the Israeli government into a new situation that it was not prepared for and that caught it completely by surprise.

The result of that surprise was particularly dramatic. In the scenario, Hamas and Hezbollah prepared to launch missiles into central Israel if the government did not apologize. Extremist right-wing elements, for their part, organized to force their way onto the Temple Mount. Thousands of Israeli citizens (you can probably guess from which sector) prepared to take part in violent riots with firearms.

"The interesting discovery in the research for me is that it is easy to manipulate the nation with the deep-fake technology in ways that harm the daily lives of the citizens." Anatbi explained. For example, in ways that make the little people go out into the streets, burn tires, block main roads and run over policemen.

The participants were given the roles of advisors who were to counsel the sitting Prime Minister on preventing or reducing the escalation. Ultimately, they produced two strategies for dealing with the situation in real time.

The first strategy was a "rapid, evidence-based response". Or in simpler words: make it clear across all media channels that the Prime Minister and his colleagues at the event deny what was said in the video and can provide an alibi proving they were not present. These statements must also be backed by technological evidence that the video is fundamentally fake.
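
What might such "technological evidence" look like in practice? One building block is cryptographic provenance: official footage is hashed and signed at the source, so any circulating clip that fails verification can be publicly shown not to originate from the official channel. Below is a minimal sketch using the Ed25519 primitives of Python's `cryptography` library; the filenames and workflow are illustrative assumptions, not a description of the tools discussed in the INSS study.

```python
# Sketch: signing official video files so circulating clips can be verified.
# Assumption: the PM's office holds the private key and publishes the public key.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path: str) -> bytes:
    """SHA-256 of a file, read in chunks so large videos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# At recording time, the official channel signs its footage.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(file_digest("official_footage.mp4"))

# Later, anyone holding the public key can check a circulating clip.
def is_authentic(path: str, signature: bytes) -> bool:
    try:
        public_key.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False
```

A manipulated or fabricated clip hashes differently, so verification fails. The obvious limitation is that a signature can only vouch for what is authentic; it cannot stop people from sharing a clip that was never signed in the first place.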

The second strategy, on which the participants could not reach a final decision about whether to recommend it, was to 'laugh off' the situation. The idea was to quickly create a counter-deepfake video showing the leaders of terrorist organizations doing nonsense in public. Not porn, probably, but I imagine French kisses could definitely be included in such videos. The point, of course, was to show the public how easy it is to fabricate videos, thereby defusing the original one. Surprisingly, a significant share of the participants opposed this proposal, arguing that the government should not "legitimize the use of fakes".

It must be admitted that these strategies do not sound particularly convincing, as the participants themselves seem to have understood. To deal effectively with such events, we need to go back in time - that is, to the present - and put in place the necessary legal, regulatory, and educational infrastructure today.

And Dr. Antebi understands this very well.

"One of the most important conclusions from the study is that in democracies the impact on national security stems from a lack of control over the means of distribution." she told me. "Even when a situation like this happens and we recognize that it is a deep-fake, our ability to prevent the distribution of the video is very, very limited, because as a democracy we have no control over the transfer of information between citizens. The combination between these two factors - the ability to produce and the ability to distribute - is what gives Deep Fake the power in our era."

Antebi believes that the existing systems can be better prepared for the new situation, among other things through forward-looking studies just like the one she conducted. But even she is not optimistic about the distant future.

"Someone summed up the situation for me in the truest and most depressing sentence I've ever heard," she said. "He told me that in the future I am describing, it is much safer to live in a dictatorship than in a democracy. At least there is someone from above, who synchronizes the lie and regulates it. It is much better than a situation where you live with many people around you who are constantly arguing about what is right or wrong.

I think that if this phenomenon intensifies, that sentence has the potential to come true. In an era where democracy is unwilling to impose control over distribution capabilities, the potential for damage to the country is great. If an undemocratic list emerges, it can destroy democracy."

This is a bleak future, and Antebi understands it well. As a longtime researcher of technology, she has developed an informed and balanced view of technology's possible harm to truths we have so far taken for granted.

"We need to be a defensive democracy," she concluded, "or one day technology can trample democracy."

### – Link to research

### – Link to INSS, the Institute for National Security Studies, which publishes new and fascinating studies all the time.

### – You can find information about Dr. Liran Antebi on the INSS website, or on her Facebook profile.

To clarify: the Institute for National Security Studies did not ask me to review the study in any way, and I received no compensation for writing this. Dr. Antebi kindly gave her time on the phone to share her personal take on the study and its full implications.


[1] https://www.npr.org/transcripts/929193643

[2] https://www.huffingtonpost.co.uk/entry/deepfake-porn_uk_5bf2c126e4b0f32bd58ba316

[3] https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/

[4] https://www.dailymail.co.uk/news/article-6311927/Married-Brazilian-politician-denies-hes-man-viral-orgy-video.html

[5] https://www.nextgov.com/emerging-tech/2019/01/ai-deepfakes-and-other-tech-threats-vex-intel-leaders/154498/

[6] https://www.europarl.europa.eu/doceo/document/A-9-2021-0127_EN.html