OpenAI Warns of Ongoing Hacker Attempts to Exploit Its Services

The ChatGPT maker has disrupted more than 20 attempts to misuse its tools this year, but says the operations have had little real-world effect so far.

OpenAI has disrupted more than 20 attempts to misuse its models since the start of the year. It said that efforts to use its tools to interfere with elections or create malware largely failed, as did a targeted phishing attack against its own staff.

In a report, OpenAI said that threat actors had used ChatGPT to debug malware, write posts for fake social media accounts, and generate articles spreading disinformation.

“Activities ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts,” OpenAI said. “They even included a hoax about the use of AI.”

Concern is growing that AI could be used to spread disinformation during elections, and that hackers could use AI-generated material to make their spam and malware campaigns more effective or faster to run. Last month, the US Department of Commerce asked AI companies to demonstrate that hackers cannot exploit their systems.

OpenAI has called for continued industry-wide collaboration to counter such attempts, though it has repeatedly said that AI is not yet making the threat landscape worse.

The report said that threat actors continue to evolve and experiment with its models, but have not demonstrated that this has produced meaningful breakthroughs in their ability to create new malware or distribute it at scale.

OpenAI said it disrupted several networks that were using its technology to generate social media posts about elections in the US, Rwanda, India, and the EU.

One was an Iranian operation that OpenAI had previously reported on, which used ChatGPT to generate long-form articles and social media posts published on websites posing as news outlets. The output was mostly political material, but it also included fashion and beauty posts, which OpenAI said appeared intended to make the accounts look more authentic or to attract followers.

In another case, ChatGPT accounts in Rwanda were generating election-related posts for X, but OpenAI said that most posts identified as coming from its models attracted little attention.

Overall, the report concluded that even where OpenAI stopped questionable activity, these networks did not attract viral engagement or build lasting audiences.
