U.S. Officials Warn Russia Is Using AI to Ramp Up Election Interference


U.S. officials say Russia is using AI to produce divisive content aimed at influencing the 2024 presidential election, deploying fake photos, videos, and social media profiles in an effort to sway American voters.

U.S. intelligence officials said Monday that Russia is the most active foreign actor using AI to generate content targeting the 2024 presidential race.

A representative from the Office of the Director of National Intelligence, speaking to reporters on the condition of anonymity, said that cutting-edge technology is making it easier for both Russia and Iran to quickly and convincingly tailor often-divisive content intended to sway American voters.

Intelligence officials have previously said they saw AI being used in elections elsewhere in the world. “Our report today makes it clear that this is now taking place here,” the ODNI official said.

Officials say Russian propaganda efforts have relied on fake images, videos, audio, and text online, including AI-generated content “of and about famous people in the United States” and material intended to inflame debate on divisive issues such as immigration. That fits, officials said, with the Kremlin’s broader goal of boosting former President Donald Trump and denigrating Vice President Kamala Harris.

Russia’s Use of AI in Social Media Manipulation

The Justice Department has alleged that an employee of RT, the Russian state-run broadcaster, was behind an AI-powered scheme to create fake social media profiles of Americans and use them to spread Russian disinformation in the United States.

The United States has accused a Russian bot farm of using AI to impersonate Americans, but Russia is also relying on lower-tech methods. The ODNI official said Russian operatives staged a video in which a woman claimed Harris struck her in a 2011 hit-and-run and then fled the scene; there is no evidence the incident ever occurred. Microsoft also said last week that Russia was behind the video, which was spread by a website posing as a legitimate San Francisco TV station.

The ODNI official also attributed to Russia manipulated videos of Harris’s speeches, which may have been altered with AI or conventional editing software and were distributed through a variety of channels, including social media.

The official from the ODNI said, “When Russians create this media, one of the things they do is try to get it to spread.”

The official said the Harris videos had been altered in several ways to “paint her in a bad light both personally and in comparison to her opponent” and to keep attention on issues Russia views as divisive.

Officials say Iran has also used AI to generate social media posts and write fake stories for websites posing as legitimate news outlets. The intelligence community has said Iran is seeking to defeat Trump in the 2024 race.

Officials say Iran is targeting Americans “across the political spectrum on polarizing issues” such as the war in Gaza and the presidential candidates, and has used AI to produce such content in both English and Spanish.

Growing Concerns Over AI-Powered Election Manipulation in 2024

China is the third most significant foreign threat to U.S. elections, officials say. Beijing is using AI in a broader set of operations to shape global perceptions of China and to amplify divisive U.S. issues such as drug use, immigration, and abortion.

But officials said they had not identified any AI-powered operations aimed at influencing the outcome of the U.S. presidential election. The intelligence community has assessed that Beijing is trying to influence U.S. races other than the presidential contest.

U.S. officials, lawmakers, tech companies, and researchers have worried that AI-powered manipulation could disrupt this year’s election campaign, for example through deepfake videos or audio depicting candidates doing or saying things they never did, or content that misleads voters about how to vote.

Those fears could still materialize as Election Day approaches, but so far AI has been used more often in other ways: by foreign adversaries to increase the volume and efficiency of their output, and by political partisans to generate memes and satire.

On Monday, the ODNI official said foreign actors have been slow to overcome three main hurdles that have kept AI-generated content from becoming a greater threat to American elections: evading the built-in safeguards of many AI tools without detection, building their own sophisticated models, and strategically targeting and distributing AI-generated content.

