Microsoft Launches New 'Correction' Tool to Fix Inaccurate AI Content

Microsoft has introduced a new tool called “Correction” that can automatically detect and fix false information generated by AI. The tool targets the problem of AI hallucinations and is meant to make AI-generated content more accurate.

AI chatbots like Gemini, Copilot, and ChatGPT make it easy to get information quickly. However, a major weakness of generative AI language models has been in the spotlight ever since ChatGPT launched in late 2022: these models sometimes hallucinate, confidently producing information that is simply false. Microsoft has now announced a feature called “Correction” that, according to the company, automatically detects and fixes false information generated by AI.

The new tool is part of Microsoft’s Azure AI Content Safety API. Microsoft says the feature is designed to detect and revise AI-generated content that is factually wrong or misleading. In a blog post, Microsoft explained that Correction builds on the Groundedness detection feature of Azure AI Content Safety and helps fix hallucinations in real time, before users ever notice them.
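
For developers curious what this looks like in practice, the sketch below shows roughly how a groundedness check with correction enabled might be called over the REST API. The endpoint path, API version, and field names are assumptions based on the publicly documented preview shape, not details confirmed in Microsoft’s announcement, so verify them against the current Azure AI Content Safety documentation before relying on them.

```python
# Minimal sketch of a groundedness detection + correction request.
# Endpoint path, api-version, and payload/response field names are
# assumptions based on the preview API shape -- check them against
# the current Azure AI Content Safety documentation before use.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-content-safety-key>"                             # placeholder

payload = {
    "domain": "Generic",
    "task": "Summarization",
    "text": "The report was published in 2021.",          # AI output to check
    "groundingSources": [
        "The annual report was published in March 2023."  # trusted source text
    ],
    "correction": True,  # ask the service to propose a corrected rewrite
}

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # assumed preview version
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
result = resp.json()

# If ungrounded text was detected, the response may include a corrected version.
if result.get("ungroundedDetected"):
    print("Suggested correction:", result.get("correctionText"))
```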

An AI hallucination occurs when a large language model produces text or other content that sounds plausible but is not grounded in facts or relevant sources. This happens because language models statistically predict the next word in a sequence using patterns learned from huge amounts of training data. A model can only draw on the data it was trained on; it has no inherent understanding of facts, so it can produce answers that sound right but are not based in reality.
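
As a toy illustration of that statistical guessing (not Microsoft’s code, and with made-up probabilities), the snippet below picks a next word purely from learned likelihoods. Nothing in it checks against a source of truth, which is exactly why a fluent but false continuation can sometimes win out.

```python
import random

# Toy next-word model: probabilities "learned" from training data.
# The model has no notion of truth -- it only knows which continuation
# is statistically likely, so a fluent falsehood can be sampled too.
next_word_probs = {
    "Paris": 0.55,     # plausible and true for "The capital of France is ..."
    "Lyon": 0.30,      # plausible but false -- a potential hallucination
    "Toulouse": 0.15,
}

words, weights = zip(*next_word_probs.items())
print("The capital of France is", random.choices(words, weights=weights)[0])
```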

How Microsoft’s Correction Tool Fixes AI Mistakes

Microsoft’s new Correction tool is designed to address hallucinations and the false information they can spread. The feature uses a classifier model to flag passages of AI-generated text that may be wrong or fabricated. When potential hallucinations are found, a second model, combining small and large language models, tries to correct the mistakes by aligning the text with verified reference material known as “grounding documents.”
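
Microsoft has not published the internals of this pipeline, but the hypothetical sketch below conveys the two-stage idea with a deliberately crude stand-in: a “classifier” that flags numbers missing from the grounding documents, and a “rewriter” that swaps them for grounded ones. Real systems use language models for both stages; every function here is illustrative only.

```python
import re

# Toy two-stage correction, loosely mirroring the described pipeline.
# Stage 1 "classifier": flag numbers in the output that appear in no
# grounding document. Stage 2 "rewriter": replace a flagged number
# with one found in the sources. Purely illustrative -- not Microsoft's
# actual models or method.

def flag_ungrounded_numbers(output: str, sources: list[str]) -> list[str]:
    """Stage 1: return numbers in `output` that no grounding source contains."""
    grounded = {n for src in sources for n in re.findall(r"\d+", src)}
    return [n for n in re.findall(r"\d+", output) if n not in grounded]

def correct(output: str, sources: list[str]) -> str:
    """Stage 2: rewrite each flagged number using the grounding documents."""
    for bad in flag_ungrounded_numbers(output, sources):
        replacement = re.findall(r"\d+", sources[0])
        if replacement:
            output = output.replace(bad, replacement[0])
    return output

sources = ["The annual report was published in March 2023."]
draft = "The report was published in 2021."
print(correct(draft, sources))  # -> "The report was published in 2023."
```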

This pairing of small and large language models to keep outputs aligned with supporting documents is the core of Microsoft’s new Correction tool. A Microsoft representative told TechCrunch, “We hope this new feature helps developers and users of generative AI in fields like medicine, where developers must ensure the accuracy of responses.”

Companies can use the new Correction tool with any text-generating AI model, including Meta’s Llama and OpenAI’s GPT-4. Alongside Correction, Microsoft has also released a set of updates intended to make AI systems safer, more private, and more secure. The company has expanded its Secure Future Initiative (SFI), which is built on three principles: secure by design, secure by default, and secure operations. The updates include new Evaluations in Azure AI Studio for proactive risk assessments, as well as changes to Microsoft 365 Copilot that make web queries more transparent so users can see how search data shapes Copilot’s answers.

To address privacy concerns, Microsoft is also adding confidential inferencing to the Whisper model in its Azure OpenAI Service. The feature is designed to keep private and sensitive customer data secure during the inferencing process.
