In a recent legal case involving the airline Avianca, attorney Steven A. Schwartz acknowledged in a sworn statement that he had used OpenAI's chatbot, ChatGPT, for his research. The citations ChatGPT provided were later revealed to be fabricated when opposing counsel pointed out that the referenced cases did not exist. US District Judge Kevin Castel confirmed that six of the submitted cases contained fabricated judicial decisions, quotes, and internal citations, and he has scheduled a hearing to consider potential sanctions against the plaintiff's legal team.
Schwartz said he had questioned the chatbot about the accuracy of its information. When he asked for a source, ChatGPT apologized for the earlier confusion and insisted that the cited case was authentic, adding that the other cases it had referenced were genuine as well. Schwartz said he had been unaware that the chatbot could produce false content and deeply regretted relying on generative artificial intelligence without verifying its output.
He pledged never to use such AI tools again without thorough verification. In a separate incident, ChatGPT mistakenly included the name of a law professor, whose identity has been withheld, in a list of legal scholars accused of past sexual harassment compiled for a research study. The professor, taken aback by the false accusation, took to Twitter to express his shock and to clarify that the information ChatGPT had provided was incorrect.