Stanford University professor Jeff Hancock accused of using AI to cite fake study

A Stanford University “misinformation expert” has been accused of using artificial intelligence (AI) to craft testimony later used by Minnesota Attorney General Keith Ellison in a politically charged case.

Jeff Hancock, a professor of communication and founder of the vaunted school’s Social Media Lab, provided an expert declaration in a case involving a satirical conservative YouTuber named Christopher Kohls. The case concerns Minnesota’s recent ban on political deepfakes, which the plaintiffs argue is an attack on free speech.

Hancock’s testimony was submitted to the court by Ellison, who is arguing in favor of the law. Hancock is “well-known for his research on how people use deception with technology, from sending texts and emails to detecting fake online reviews,” according to Stanford’s website.

But the plaintiffs’ lawyers have asked the Minnesota federal judge hearing the case to dismiss the testimony, charging that Hancock cited a fake study.

“[The] Declaration of Prof. Jeff Hancock cites a study that does not exist,” lawyers argued in a recent 36-page memo. “No article by the title exists.”

The “study” was called “The Influence of Deepfake Videos on Political Attitudes and Behavior” and was purportedly published in the Journal of Information Technology & Politics. The Nov. 16 filing notes that the journal is authentic but has never published a study by that name.

“The publication exists, but the cited pages belong to unrelated articles,” the lawyers argued. “Likely, the study was a ‘hallucination’ generated by an AI large language model like ChatGPT.”

“Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever.”

The document also calls out Ellison, arguing that “the conclusions that Ellison most relies on have no methodology behind them and consist entirely…
