Microsoft bans US police from using face recognition enterprise AI tool

Microsoft has made it clear again that U.S. police departments can’t use generative AI for facial recognition through Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI’s technology.

Language added to Azure OpenAI Service’s terms of service on Wednesday makes clear that U.S. police departments cannot use its integrations for facial recognition. This covers integrations with OpenAI’s current image-analyzing models and possibly future ones.

A new bullet point extends the restriction to “any law enforcement globally,” barring the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” settings.

The policy changes come a week after Axon, a maker of technology and weapons for the military and police, announced a new product that uses OpenAI’s GPT-4 generative text model to summarise audio from body cameras. Critics promptly highlighted potential issues, including hallucinations (even the most advanced generative AI models today fabricate facts) and racial biases introduced through the training data, which is particularly concerning given that people of colour are far more likely than their white counterparts to be stopped by police.

It’s unclear whether Axon was using GPT-4 through Azure OpenAI Service, or whether the new policy was a response to Axon’s product launch. OpenAI had previously restricted facial recognition uses of its models through its APIs. We have contacted Axon, Microsoft, and OpenAI for comment.

The new terms do leave Microsoft some wiggle room.

The complete ban on Azure OpenAI Service applies only to police in the United States, not to law enforcement elsewhere. It also doesn’t cover facial recognition performed with stationary cameras in controlled settings, such as a back office (though the terms prohibit U.S. police from any use of facial recognition).
This is in line with how Microsoft and its close partner OpenAI have recently approached contracts for AI-related work in law enforcement and defence.

Reports surfaced in January that OpenAI is collaborating with the Pentagon on several projects, including cybersecurity initiatives, a departure from the startup’s earlier stance against providing its AI to militaries. The Intercept reports that Microsoft has pitched OpenAI’s DALL-E image generation tool to the Department of Defense (DoD) as a way to help build software for military operations.

In February, Azure OpenAI Service was added to Microsoft’s Azure Government product, bringing additional compliance and management features designed for government agencies, including law enforcement. In a blog post, Candice Ling, Microsoft’s SVP of government business, pledged that Azure OpenAI Service would be “submitted for additional authorization” to the DoD for workloads supporting DoD missions.
