OpenAI Halts Election Manipulation Scheme Powered by ChatGPT

OpenAI has shut down a cluster of ChatGPT accounts linked to an Iranian group that used AI to spread false information about the U.S. presidential election, though the accounts appear to have attracted little engagement.

In a blog post on Friday, OpenAI reported banning a group of ChatGPT accounts associated with an Iranian operation producing content about the U.S. presidential election.

The company says the operation used AI to generate articles and social media posts, though its reach appears to have been limited.

OpenAI has previously banned accounts associated with state-affiliated actors who misused ChatGPT. In May, the company reported disrupting five covert influence campaigns that were using ChatGPT to try to sway public opinion.

These events echo earlier efforts by state-backed actors to use social media platforms such as Facebook and Twitter to influence election outcomes. Similar groups, or possibly the same ones, are now using generative AI to flood social media with false information.

Similar to social media companies, OpenAI appears to be proactively blocking accounts associated with these activities as they emerge.

OpenAI claims that a Microsoft Threat Intelligence report released last week aided its investigation into this group of accounts.

The report said the group, which it calls Storm-2035, is part of a larger campaign to influence U.S. elections that has been operating since 2020.

According to Microsoft, Storm-2035 is an Iranian network with many fake news websites that “actively engage US voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the conflict between Israel and Hamas.”

As other operations have demonstrated, the playbook does not consistently endorse one policy or candidate over another. Its purpose is to incite disagreement and conflict.

OpenAI identified five websites that Storm-2035 used as fronts. The sites had convincing domain names such as “evenpolitics.com” and posed as both liberal and conservative news outlets.

The group used ChatGPT to write several long-form pieces, one of which claimed that “X censors Trump’s tweets,” something Elon Musk’s platform has not done. In fact, Musk has been encouraging former President Donald Trump to post on X more often.

OpenAI Identifies Social Media Manipulation

On social media, OpenAI found a dozen X accounts and one Instagram account tied to the operation. The company says the operation used ChatGPT to rewrite political comments posted on those platforms.

One of the tweets falsely and confusingly claimed that Kamala Harris blames climate change for “increased immigration costs,” closing with “#DumpKamala.”

OpenAI says it found no evidence that Storm-2035’s articles were widely shared. According to the company, most of the group’s social media posts received few or no likes, shares, or comments.

That outcome is common, because AI tools like ChatGPT make operations of this kind cheap and simple to set up. As the election approaches and partisan fighting online intensifies, expect to see many more notices like this one.

 
