OpenAI’s ChatGPT Sparks Controversy in Legal Filing with Fabricated Citations
A Stanford misinformation expert admits using the AI tool to help prepare a court filing but denies knowing it could generate false details.
OpenAI’s ChatGPT is transforming how content is created and presented, streamlining tasks and boosting efficiency. Its growing adoption, however, comes with challenges, chief among them the risk of “hallucinations,” in which the AI produces inaccurate or fabricated information.
In an ironic twist, Jeff Hancock, a Stanford professor and expert on misinformation, recently came under fire for submitting a legal declaration that contained fabricated citations. The filing was meant to support Minnesota’s law barring the use of deepfake technology to influence elections, but the AI-generated errors undermined its credibility.
Hancock has since admitted that he used OpenAI’s ChatGPT-4 to organize the document’s citations, but said he did not know the tool could invent details. He clarified that parts of the filing were written without AI assistance and maintained that the errors were unintentional.
In a follow-up declaration, Hancock stated, “I did not mean to mislead the Court or counsel,” adding that he was “truly sorry for any confusion this may have caused” and that he nevertheless stands firmly behind every substantive point in the statement.
The controversy underscores the risks of relying on AI tools like ChatGPT in legal contexts, where precision is paramount. Hancock’s case shows why strict verification is essential whenever AI is used in professional, high-stakes settings.
The episode is a stark reminder of AI’s limitations and of the need for human oversight to ensure reliability as these tools are entrusted with increasingly important tasks.