Elon Musk’s Grok AI struggles with accuracy in news reports
During a major news event, Grok, Elon Musk’s AI model on the X platform, ran into repeated accuracy problems. It spread false headlines because it could not distinguish sarcasm from genuine claims, underscoring that AI-generated news needs human oversight.
Grok, Elon Musk’s AI model available on the X platform, had serious accuracy problems following the attempted assassination of former President Donald Trump.
The AI model posted false headlines, including one claiming that Vice President Kamala Harris had been shot and another identifying the shooter as an Antifa member; neither was true.
These mistakes happened because Grok could not distinguish sarcastic posts on X from genuine false claims.
Since announcing that he would work on TruthGPT, Elon Musk has promoted Grok as a revolutionary way to get news, drawing on real-time posts from millions of users.
Despite that potential, the event exposed Grok’s flaws, especially in handling breaking news.
The model’s humorous design can also be a liability, as jokes and irony can be repeated as fact, spreading misinformation and confusion.
Using AI to summarise news stories raises concerns about accuracy and lost context, especially during major events.
Katie Harbath, a former director of public policy at Facebook, stressed the need for human oversight to provide context and check facts.
Grok’s problems mirror those of other AI models, such as OpenAI’s ChatGPT, which carries disclaimers to temper users’ expectations about its reliability.