Elon Musk’s AI-generated video mimicking Kamala Harris raises major political alarm
Elon Musk shared an AI-manipulated video of Kamala Harris without noting that it had been altered, raising concerns about how AI could be misused.
A video manipulated to make it sound as if Vice President Kamala Harris said things she never said is fueling concerns about how AI can be used to mislead voters, with three months to go until Election Day.
The video drew widespread attention after tech billionaire Elon Musk shared it on his social media platform X on Friday night without disclosing that it was a spoof.
Many of the visuals in the video are taken from a real ad that Harris, the likely Democratic presidential nominee, released last week to launch her campaign. But in place of Harris’s voice-over, the video uses a different voice that closely imitates her.
As the US presidential election approaches, lifelike AI-generated images, videos, and audio clips have been used to mock or mislead the public about politics. The episode highlights that, even as high-quality AI tools have become far easier to obtain, the federal government has done little to regulate their use; so far, states and social media platforms have largely set the rules for AI in politics.
The video also raises questions about how best to handle content that blurs the line around acceptable uses of AI, particularly satire. The video’s original poster, a YouTuber known as Mr. Reagan, has stated on both YouTube and X that the altered video is a spoof. Musk’s post, however, carries only the caption “This is amazing” and a smiling emoji, and according to the platform it has been viewed more than 123 million times.
Calls for Stricter AI Regulations
Two experts who study AI-generated media reviewed the fake ad’s audio and agreed that much of it was generated by AI. Hany Farid, a digital forensics expert at the University of California, Berkeley, said the video demonstrates how powerful generative AI and deepfakes have become.
According to Farid, companies that make voice-cloning software and other AI tools available to the public should do more to ensure their services are not misused in ways that harm people or the democratic process. Rob Weissman, co-president of the public interest group Public Citizen, disagreed with Farid, saying the video would fool a lot of people.
Weissman’s group has been pushing for Congress, federal agencies, and states to regulate generative AI, and he said the video is exactly the kind of thing it has been warning about. Other generative AI deepfakes in the US and elsewhere have used fake news, humor, or both in attempts to sway voters.
Congress has not yet passed legislation on AI in politics, and federal agencies have taken only limited steps, leaving most existing U.S. regulation to the states. According to the National Conference of State Legislatures, more than one-third of states have enacted their own rules to protect campaigns and elections.