Microsoft Sues 10 Unknown Hackers for Misusing Azure OpenAI Service


Microsoft is suing 10 unknown individuals for misusing its Azure OpenAI Service, alleging they used stolen customer credentials and custom software to evade security measures and generate harmful content.

Microsoft has announced that it is suing 10 unidentified individuals for misusing its Azure OpenAI Service. In December 2024, the company filed a complaint in a Virginia federal court alleging that the defendants used stolen customer credentials and custom-built software to circumvent security measures and generate harmful content on the platform.

The company’s Digital Crimes Unit (DCU), which has fought cybercrime for almost 20 years, led the investigation. The lawsuit accuses three main individuals of orchestrating the scheme, and others of sharing stolen login credentials and assisting in unauthorized access.

The group used tools such as the “de3u” software and a reverse proxy service to manipulate Microsoft’s generative AI systems, which include OpenAI’s DALL-E.

“Every day, people use generative AI tools to improve their creativity and productivity,” said Steven Masada, assistant general counsel at Microsoft, in a company blog post. “Sadly, like with other new technologies, some people misuse these tools for harmful reasons.”

The company acknowledged that it is responsible for preventing the misuse of its tools as it and others in the industry introduce new features.

Microsoft’s complaint explains that the defendants used stolen API keys to gain illegal access to the Azure OpenAI Service, bypassing safeguards designed to prevent such misuse. These keys, typically obtained through security breaches or unauthorized access, allowed the defendants to get around content safety filters.

This meant they could create and distribute harmful material. Other malicious users purchased the tools, including de3u, and received detailed instructions on how to use them.
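For context on why stolen keys are so powerful: Azure OpenAI authenticates REST requests with a per-resource key passed in an `api-key` header, so whoever holds a leaked key can call the service as its legitimate owner. The sketch below assembles (but does not send) such a request; the resource name, deployment, and key are hypothetical placeholders, not values from the case.

```python
def build_azure_openai_request(resource: str, deployment: str,
                               api_key: str, api_version: str) -> dict:
    """Assemble (but do not send) an Azure OpenAI chat-completions request.

    Illustrates that the per-resource key in the "api-key" header is the
    only credential the endpoint requires.
    """
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    headers = {
        "api-key": api_key,  # a leaked key here grants full access
        "Content-Type": "application/json",
    }
    return {"url": url, "headers": headers}

# Placeholder values for illustration only
req = build_azure_openai_request("example-resource", "gpt-4o",
                                 "LEAKED_KEY_PLACEHOLDER", "2024-02-01")
print(req["url"])
```

Because the key travels in a plain request header, any service that logs or leaks it exposes the owner's entire quota, which is why key rotation and scoped credentials matter.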

Microsoft detected suspicious activity in July 2024 while investigating anomalous API usage. The investigation traced the stolen credentials to customers in the U.S., including businesses in Pennsylvania and New Jersey.
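Microsoft has not disclosed how the anomalous API usage was detected. As a purely hypothetical illustration of the general idea, a simple volume-based check can flag API keys whose request counts dwarf those of a typical key:

```python
from collections import Counter

def flag_anomalous_keys(request_log, threshold_factor=10):
    """Flag API keys whose request count exceeds threshold_factor times
    the median count across all keys -- a crude stand-in for the kind of
    usage monitoring that can surface stolen credentials."""
    counts = Counter(entry["api_key"] for entry in request_log)
    ordered = sorted(counts.values())
    median = ordered[len(ordered) // 2]
    return {key for key, n in counts.items() if n > threshold_factor * median}

# Synthetic log: two normal keys and one heavily abused key
log = ([{"api_key": "key-normal-1"}] * 20 +
       [{"api_key": "key-normal-2"}] * 25 +
       [{"api_key": "key-stolen"}] * 800)
print(flag_anomalous_keys(log))  # → {'key-stolen'}
```

Real-world detection would weigh many more signals (geography, prompt content, time of day), but even a threshold this crude separates an 800-request key from its 20-request peers.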

The complaint accuses the defendants of operating a hacking-as-a-service scheme that let other individuals use their tools via websites such as “rentry.org/de3u” and “aitism.net.”

The defendants’ tools allegedly included features that circumvented Microsoft’s safety mechanisms, such as content filtering systems designed to detect and block harmful prompts. The reverse proxy service sent harmful traffic through Cloudflare tunnels, making it harder to trace where the unauthorized activity came from.
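Microsoft's actual content filtering systems are not publicly specified. As a toy sketch of the general technique the paragraph describes, a pre-generation filter scans each prompt against blocked patterns before the request reaches the model; the patterns below are hypothetical placeholders, not real policy rules.

```python
import re

# Hypothetical placeholder patterns -- not Microsoft's actual policy rules
BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE)
                    for p in [r"\bmalware\b", r"\bphishing kit\b"]]

def is_prompt_allowed(prompt: str) -> bool:
    """Reject any prompt matching a blocked pattern before generation."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

print(is_prompt_allowed("Write a poem about the sea"))  # → True
print(is_prompt_allowed("Generate a phishing kit"))     # → False
```

Production filters are far more sophisticated (classifier models rather than keyword lists), which is precisely why circumventing them required the custom tooling described in the complaint.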

Microsoft Stops Breach

Microsoft says it responded quickly to the breach by revoking the compromised accounts and adding protections to harden its systems going forward. The company also took control of websites and servers linked to the operation, helping it gather evidence and prevent further abuse.

The lawsuit claims that the defendants broke several laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and the Racketeer Influenced and Corrupt Organizations Act.

Under Virginia law, the suit adds claims for trespass to chattels and tortious interference. Microsoft is seeking damages and injunctive relief to hold the defendants accountable and deter similar schemes.

The Azure OpenAI Service includes tools to filter content and detect abuse, helping to reduce the risks of generative AI misuse. The defendants’ bypassing of these safeguards highlights the growing risks facing platforms that offer advanced AI capabilities.

In late 2024, the Capgemini Research Institute published a report revealing that 97% of surveyed organizations had experienced at least one generative AI-related security issue in the previous year. The study, which surveyed 1,000 organizations across 13 countries, found a significant increase in cybersecurity breaches.

More than 90% of respondents reported at least one security breach in the past year, up from 51% in 2021, and nearly half of the organizations said they had lost over $50 million over the previous three years. The figures underscore the growing risks tied to generative AI technologies and their misuse by malicious actors.

