Four ways artificial intelligence could be misused in the 2024 election

From deepfakes to foreign interference - artificial intelligence will facilitate and streamline operations that were carried out even before the technology existed

By Barbara Trish, Professor of Politics, Grinnell College 

US elections in the shadow of artificial intelligence. Illustration: depositphotos.com

The American public is wary of artificial intelligence and the 2024 election. 

A September 2024 survey by the Pew Research Center found that more than half of Americans are concerned that artificial intelligence, a technology that mimics the thought processes of humans, will be used to create misleading and fabricated information during the campaign. 

My academic research on artificial intelligence may assuage some concerns. Although this innovative technology can certainly influence voters or spread lies on a large scale, in most cases the uses of AI in the current election cycle are not new at all. 

I've identified four roles that AI is playing or could play in the 2024 campaign – all of them updated versions of traditional election activities. 

1. Information for voters 

The launch of ChatGPT in 2022 brought the promises and risks of AI into the public consciousness. The technology is "generative" because it generates text in response to user prompts: it can write poetry, answer history questions - and provide information about the 2024 elections.

Instead of Googling how to vote, people may put questions to generative AI. For example, "How much has inflation changed since 2020?" or "Who is running for US Senate in Texas?"

Some AI platforms, such as Google's Gemini chatbot, refuse to answer questions about candidates and voting. Others, like Meta's AI tool Llama, do answer - and answer accurately.

AI can also generate wrong information. In extreme cases, AI can "hallucinate," offering results that are completely false.

A CBS News report from June 2024 indicated that ChatGPT gave incorrect or incomplete responses to queries in some key states, and did not consistently refer to CanIVote.org, a respected site for voting information. 

As with internet searches, people need to verify AI results. And note: Google now automatically displays AI-generated answers at the top of its search results pages, so you may encounter an AI tool when you thought you were simply searching the web.

2. Deepfakes

Deepfakes are images, audio or video files produced by generative artificial intelligence and designed to mimic reality. In effect, they are highly convincing versions of what have come to be called "cheap fakes" - photos edited with basic tools such as Photoshop and video-editing software.

The potential for deepfakes to deceive voters became clear when an AI-generated robocall impersonated Joe Biden ahead of the January 2024 New Hampshire primary, urging Democrats to save their votes for November.

The Federal Communications Commission subsequently ruled that AI-generated voice calls are subject to the same regulations as all robocalls: they cannot be autodialed to cell phones or landlines without prior consent.

The agency also fined the consultant who created the fake Biden call $6 million - not for election fraud but for transmitting inaccurate caller ID information.

While synthetic media can be used to spread disinformation, deepfakes are now part of the creative toolbox of political advertisers. 

One of the earliest uses of a deepfake aimed at persuasion rather than outright deception was an AI ad in the 2022 mayoral race in Shreveport, Louisiana, that portrayed the incumbent mayor as a student called to the principal's office. 

The ad included a brief statement that it was a deepfake - a disclosure not required by the federal government, and one that was easy to miss.

Wired magazine's AI Elections Project, which tracked uses of AI in the 2024 campaign, suggests that deepfakes have not flooded the ads voters see. But they have been used by candidates from across the political spectrum for many purposes - including deception.

Former President Donald Trump alludes to a Democratic deepfake when he questions the size of the crowds at Vice President Kamala Harris' events. By making such claims, Trump is trying to collect the "liar's dividend" - the opportunity to plant the idea that genuine content is fake.

Smearing a political opponent this way is not new. Trump has been branding inconvenient truths as essentially "fake news" since at least the "birther" conspiracy of 2008, in which he helped spread rumors that presidential candidate Barack Obama's birth certificate was fake.

3. Strategic distraction 

Some worry that election deniers could use AI this cycle to distract election administrators by flooding them with bogus public-records requests.

For example, the group True the Vote has mounted hundreds of thousands of voter registration challenges over the past decade using only volunteers and a web-based application. Imagine its capacity if it were equipped with AI to automate the work.

Rapid, large-scale challenges to voter rolls like these can divert election administrators from other critical tasks, disenfranchise legitimate voters and disrupt elections.

Currently, there is no evidence of this happening. 

4. Foreign election interference

Russian meddling revealed in the 2016 election underscored that the threat of foreign interference in American politics - whether by Russia or another country seeking to weaken Western democracy - remains a pressing concern.

In July 2024, the US Department of Justice seized two domain names and searched nearly 1,000 accounts that Russians had used for what it described as "social media bot farms," similar to the operation Russia used to influence the opinions of hundreds of millions of Facebook users in the 2020 campaign. Artificial intelligence could give such efforts a real boost.

There is evidence that China is using artificial intelligence in the current election to spread malicious information about the United States. One social media post incorrectly transcribed a Biden speech to imply that he made sexual references.

Artificial intelligence may help election meddlers do their dirty work, but new technology is not necessary for foreign interference in American politics.

In 1940, the United Kingdom - an ally of the United States - was so intent on drawing the US into World War II that British intelligence officers worked to help congressional candidates who supported intervention and to smear the isolationists.

One target was the isolationist Republican Rep. Hamilton Fish. The British circulated an out-of-context photo of Fish with the leader of an American pro-Nazi group, trying to paint him as a supporter of Nazi elements at home and abroad.

Can artificial intelligence be controlled? 

While it is important to recognize that new technology is not needed to cause harm, bad actors can exploit the efficiencies built into artificial intelligence to pose a serious challenge to the operation and integrity of elections.

Federal efforts to regulate the use of artificial intelligence in politics face the same obstacles as most proposals to regulate political campaigns. The states have been more active: 19 of them ban or limit deepfakes in political campaigns.

Some platforms engage in modest self-regulation. Google's Gemini, for example, responds to questions about basic election information by saying: "Currently I cannot help with answers about elections and political figures."

Campaign professionals may self-regulate as well. At a May 2024 campaign-technology conference, several speakers voiced concern about voter backlash if a campaign were found to be using AI. In this sense, public anxiety about artificial intelligence may be helpful, serving as a kind of guardrail.

But the flip side of that public concern - what Stanford University's Nate Persily calls "AI panic" - could further erode trust in elections.
