A leading misinformation expert has admitted to using ChatGPT while preparing a legal document, a choice that introduced errors critics say undermined the filing’s reliability. Jeff Hancock, founder of the Stanford Social Media Lab, acknowledged the mistakes but insisted they did not affect the document’s core arguments.
The case and the controversy
Hancock’s affidavit was submitted to support Minnesota’s “Use of Deep Fake Technology to Influence an Election” law, which is currently under challenge in federal court. The law is being contested by Christopher Kohls, a conservative YouTuber known as Mr. Reagan, and Minnesota state Representative Mary Franson. Their legal team flagged the filing, alleging that some of its citations didn’t exist and calling the document “unreliable.”
In response, Hancock filed a follow-up declaration admitting he had used ChatGPT to help organise his sources. While he denies using the AI tool to write the document itself, he conceded that the faulty citations stemmed from the AI’s so-called “hallucinations.”
Hancock’s defence
In his latest statement, Hancock defended the overall integrity of his filing. “I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it,” he said. He emphasised that his arguments were based on the most up-to-date academic research and reflected his expert opinion on how artificial intelligence influences misinformation.
Hancock explained that he used Google Scholar and GPT-4 to identify relevant research articles. The process was intended to combine his existing knowledge with newer scholarship, but it inadvertently produced two citations to non-existent articles and a third with incorrect authors.
Regret but no intent to mislead
Hancock expressed remorse for the errors, stating, “I did not intend to mislead the Court or counsel. I express my sincere regret for any confusion this may have caused.” However, he firmly stood by the document’s main points, asserting that the errors do not diminish the substance of his expert opinion.
The incident highlights ongoing concerns about relying on AI tools in high-stakes settings such as legal proceedings. Although such tools can speed up research and drafting, they can also generate errors that compromise the credibility of the work they are meant to support.
As the legal challenge progresses, it remains unclear how the Court will view Hancock’s affidavit and whether the acknowledged errors will impact the case.