A legal battle over Minnesota’s “Use of Deep Fake Technology to Influence an Election” law has taken an unexpected turn, raising questions about the role of artificial intelligence (AI) in its own proceedings. Lawyers challenging the law have pointed out that an affidavit submitted in support of the legislation appears to contain AI-generated text. The discovery, first reported by the Minnesota Reformer, suggests that a tool such as ChatGPT or another large language model (LLM) may have been used to draft parts of the document.
Evidence under scrutiny
The affidavit in question was submitted by Jeff Hancock, founding director of Stanford University’s Social Media Lab, at the request of Minnesota Attorney General Keith Ellison. However, the content of Hancock’s declaration has raised eyebrows, particularly its references to two studies that seem to be entirely fictitious.
One of the cited studies, “The Influence of Deepfake Videos on Political Attitudes and Behavior,” was supposedly published in 2023 in the Journal of Information Technology & Politics. Searches of that journal and of broader academic indexes have turned up no trace of it. A second cited work, “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance,” likewise appears not to exist. These phantom citations suggest that an AI tool may have fabricated the sources.
The lawyers representing state Representative Mary Franson and conservative YouTuber Christopher Kohls (known online as Mr Reagan) expressed their concerns in a legal filing. They stated, “The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT.”
Implications for the affidavit’s credibility
The suspicious citations cast doubt on the reliability of Hancock’s affidavit. The filing from Franson and Kohls’ legal team argued that the apparent AI-generated sources undermine the credibility of the entire document. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever,” the filing noted.
This revelation has added complexity to an already contentious case, which centers on regulating the use of deepfake technology in elections. Deepfakes, realistic but fabricated videos created with AI, are a growing concern due to their potential to spread misinformation and manipulate public opinion.
Broader questions about AI in legal processes
This case highlights the challenges posed by the increasing reliance on AI in various fields, including legal and academic work. While AI tools like ChatGPT can assist with drafting documents and generating ideas, they are prone to “hallucinations” and may produce inaccurate or entirely fictional information, including plausible-looking citations to sources that do not exist. Such errors can have serious consequences, particularly in legal proceedings, where accuracy is paramount.
The Minnesota case demonstrates the importance of verifying AI-generated information before it is used in critical contexts. As the legal challenge progresses, the role of AI in creating Hancock’s affidavit will likely remain a point of contention, potentially influencing the court’s perception of the evidence.
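As a rough illustration of what such verification might look like in practice, the sketch below queries the Crossref REST API, a public index of scholarly publications, for a cited title and reports whether any plausibly matching work is found. The overlap heuristic is a simplifying assumption for demonstration purposes; a missing match flags a citation for human review rather than proving it was fabricated.

```python
import requests


def citation_exists(title: str, rows: int = 5) -> bool:
    """Check whether a cited title appears in the Crossref index.

    A False result does not prove fabrication, but an unverifiable
    citation is a classic warning sign of an LLM "hallucination".
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("message", {}).get("items", [])
    query = title.lower()
    # Crude heuristic: count a hit only if the indexed title and the
    # queried title substantially overlap, to filter loose matches.
    return any(
        query in t.lower() or t.lower() in query
        for item in items
        for t in item.get("title", [])
    )


if __name__ == "__main__":
    suspect = ("The Influence of Deepfake Videos on "
               "Political Attitudes and Behavior")
    print("Found in Crossref:", citation_exists(suspect))
```

The overlap test is deliberately crude: fabricated titles typically return either nothing or only loosely related works, so any negative result simply signals that a human should verify the source before it is cited in a court filing.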