
Google’s AI Overview gives you wrong answers

At last week’s Google I/O, the company showcased AI-generated summaries of search results. The feature has stirred controversy on social media for surfacing misleading and false information, from right-wing conspiracy theories to basic factual errors. Despite Google’s efforts to address the issue, questions remain about the accuracy and reliability of the rollout and about the feature’s future.

One of the most prominent new features Google introduced at I/O last week was AI-generated summaries of search results. Since then, users have scrutinized and mocked the feature on social media, claiming it serves up misleading answers and even dangerously false information.

Many Google users, including journalists, have shared numerous instances of the “AI Overview” summary drawing on dubious sources, such as joke Reddit posts, or failing to recognize that articles from The Onion are satire.

Computer scientist Melanie Mitchell shared an example of the feature repeating the right-wing conspiracy theory that President Barack Obama is Muslim, apparently the result of a failed attempt to summarize a book on Oxford University Press’s research platform. In other cases, the AI summary appears to copy text directly from blogs without removing or altering references to the author’s children.

People on social media have shared many other examples of the feature getting basic facts wrong, such as claiming that no African country’s name starts with the letter K (overlooking Kenya) or that pythons are mammals, errors Forbes was able to confirm. Other incorrect results that went viral, such as those about Obama or about using glue on pizza, no longer display an AI summary; instead, they surface news stories about the problems with AI search. A company representative said the mistakes were happening on “generally very uncommon queries” and were not representative of most users’ experience.

AI Feature Rollout Faces Criticism

It is unclear what exactly is going wrong, how widespread the problem is, or whether Google will have to pause the rollout of an AI feature once again. Language models such as OpenAI’s GPT-4 and Google’s Gemini, which powers the AI summary feature in search, sometimes “hallucinate”: the model fabricates entirely false information without any warning, sometimes in the middle of otherwise accurate text. But the feature may also be struggling because of the sources Google draws on, including satirical stories from The Onion and trolling posts on networks like Reddit. In an interview published this week in The Verge, Google CEO Sundar Pichai called hallucinations an “unsolved problem” and declined to give a timeline for solving it.

This is Google’s second major AI launch of the year to face criticism over inaccurate results. Earlier this year, the company released Gemini to the public as a competitor to OpenAI’s ChatGPT and DALL-E image generators, and its image generation feature immediately drew criticism for producing historically inaccurate images, such as Black Vikings, racially diverse Nazi soldiers, and a female pope. Google temporarily disabled Gemini’s ability to generate images of people. After the backlash, Pichai wrote in an internal note that he knew some of Gemini’s responses “have offended our users and shown bias.” He added, “To be clear, that’s completely unacceptable, and we got it wrong.”
