
Misinformation researcher admits AI errors in court filing

Misinformation expert Jeff Hancock admits to AI-generated errors in a court filing, defends his core arguments, and expresses regret for citation mistakes caused by ChatGPT.

A leading misinformation expert has admitted that he used ChatGPT to assist with drafting a legal document, which led to errors that critics say undermined the filing’s reliability. Jeff Hancock, founder of the Stanford Social Media Lab, acknowledged the mistakes but insisted they did not affect the document’s core arguments.

The case and the controversy

Hancock’s affidavit was submitted in support of Minnesota’s “Use of Deep Fake Technology to Influence an Election” law, which is currently being challenged in federal court. The law is contested by Christopher Kohls, a conservative YouTuber known as Mr. Reagan, and Minnesota state Representative Mary Franson. Their legal team flagged the filing, alleging that some of its citations did not exist and calling the document “unreliable.”

In response, Hancock filed a follow-up declaration admitting to using ChatGPT to help organise his sources. While he denies using the AI tool to write the document itself, he conceded that errors in the citation process were introduced due to the AI’s so-called “hallucinations.”

Hancock’s defence

In his latest statement, Hancock defended the overall integrity of his filing. “I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it,” he said. He emphasised that his arguments were based on the most up-to-date academic research and reflected his expert opinion on how artificial intelligence influences misinformation.

Hancock explained that he used Google Scholar and GPT-4 to identify relevant research articles. While the process was intended to combine his existing knowledge with new insights, it inadvertently introduced two citations to non-existent sources and a third with incorrect authors.

Regret but no intent to mislead

Hancock expressed remorse for the errors, stating, “I did not intend to mislead the Court or counsel. I express my sincere regret for any confusion this may have caused.” However, he firmly stood by the document’s main points, asserting that the errors do not diminish the substance of his expert opinion.

The incident highlights ongoing concerns about the risks of relying on AI tools in sensitive contexts. Although such tools can speed up research and drafting, they can also generate errors that compromise the credibility of the work they support.

As the legal challenge progresses, it remains unclear how the court will view Hancock’s affidavit and whether the acknowledged errors will affect the case.

