Iran Targeting Trump with AI Deepfakes? Cyber Tensions Rise Ahead of 2024 Election
AI deepfakes and traditional hacking, some of it attributed to Iran, are shaping the 2024 election. Donald Trump and Kamala Harris are reportedly the targets of both hacking and disinformation campaigns.
Artificial intelligence is already influencing this year's election race. On Sunday, Donald Trump claimed that Kamala Harris, the Democratic candidate, had used AI tools to inflate the apparent size of crowds at her events. News outlets debunked the former president's social media post, including the local Fox TV affiliate that live-streamed a large rally at the Detroit airport.
The year 2024 isn't unfolding the way experts predicted. Candidates are accusing opponents of doctoring videos even when the accusation is easily disproved, and fake political content made with AI has turned out to be only partially effective. AI-driven influence campaigns were expected to be more sophisticated and harder to detect than what has actually surfaced in elections from Indonesia to the US.
Iranian Hackers and the Use of Traditional Cyber Tactics
Bad actors are still meddling online, but they appear to be relying more on traditional tactics than on AI. In the most recent case, Iranian hackers may have stolen information from the Trump campaign. This week's false claim about Ms. Harris's rally, and how quickly it spread on social media, underscore what some cybersecurity experts have long argued: even as AI deepfakes become more common, the best defense against malicious cyber influence in elections is limiting how widely such material circulates.
When tech billionaire Elon Musk shared a video on his social media platform two weeks ago, it drew widespread attention because it used an AI voice-cloning tool to make Vice President Harris appear to say things she never said. He later said he assumed people would recognize it as a joke. AI can make producing such content far easier and faster, but that doesn't guarantee it will have the intended effect.
In April, the Microsoft Threat Analysis Centre (MTAC) released a report finding that Storm-1679, an operation with Russian ties, had repeatedly used generative AI in attempts to discredit the Paris Olympics, without success.
OpenAI reached a similar conclusion in a May report, finding that influence operations tied to Russia, China, and Iran had not "meaningfully increased audience engagement or reach," even though they used its tools to write articles in multiple languages, generate names and bios for social media accounts, and debug computer code.
Recently, Mr. Trump said Iranian hackers may have used a common type of cyberattack known as "spear-phishing" to steal internal campaign documents. He was apparently referring to Friday's MTAC report, which identified an Islamic Revolutionary Guard Corps unit that had recently emailed a senior official of a presidential campaign from the compromised account of a former political adviser.
According to the report, the email contained a spoofed forwarding address and a link to a site the unit controls. In July, an anonymous source sent internal Trump campaign documents to the political news website Politico, including a 271-page dossier of public information on the GOP vice presidential candidate, Ohio Sen. JD Vance.
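The "spoofed forwarding address" described in the report is a standard spear-phishing trick: the visible sender looks legitimate while replies are quietly routed to an address the attacker controls. As a minimal sketch of how such a mismatch can be flagged (assuming a Python environment; the message, names, domains, and link below are hypothetical and not taken from the MTAC report):

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical message for illustration only; the sender, domains, and
# link are invented and do not come from the reported incident.
raw = """From: Former Adviser <adviser@campaign-consulting.com>
Reply-To: adviser@campaign-consulting-mail.net
Subject: Strategy memo
Content-Type: text/plain

Please review: https://attacker-controlled.example/login
"""

msg = message_from_string(raw)
_, from_addr = parseaddr(msg.get("From", ""))
_, reply_addr = parseaddr(msg.get("Reply-To", from_addr))

from_domain = from_addr.rsplit("@", 1)[-1].lower()
reply_domain = reply_addr.rsplit("@", 1)[-1].lower()

# A Reply-To domain that differs from the From domain means replies are
# silently routed somewhere else, a classic spear-phishing tell.
if from_domain != reply_domain:
    print(f"Warning: replies go to {reply_domain}, not {from_domain}")
```

Real mail filters weigh many more signals (SPF/DKIM results, link reputation, sender history), but a From/Reply-To mismatch is one of the simplest red flags to check.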
The FBI is now investigating both the attack on the GOP campaign and an alleged Iranian attack on the Democratic presidential campaign.
Global Impact of AI Deepfakes in Politics
When a video surfaced in India this spring falsely showing the home minister announcing the end of a jobs program for people from disadvantaged castes, police arrested members of two opposition parties.
Deepfakes aren't always used maliciously. In Pakistan, supporters of an opposition party circulated deepfakes of their jailed leader to rally voters, and the party's candidates won the largest share of seats. It was a historic result, even though the military-backed party ultimately formed a government with a rival party.
In the US, too, the motives have varied widely. Before the New Hampshire primary in January, a political consultant paid for a deepfake of President Joe Biden designed to discourage people from voting. The consultant, a Democrat, said he did it to warn his party about the risks of AI. Even so, the Federal Communications Commission wants to fine him $6 million, and New Hampshire has charged him with 32 counts of election interference.
Last month, a deepfake appeared to show Mr. Biden cursing at viewers during his televised address ending his bid for reelection. The clip spread widely on the social media site X. One motive for circulating such material is to damage a candidate's image; another, subtler goal is to deepen divisions among people who are already polarized.