Lawyer faces sanctions for using ChatGPT to write a brief

The artificial intelligence invented legal precedents for the brief the lawyer filed; he could now even be expelled from the profession.

An American lawyer is facing possible sanctions after he used the popular ChatGPT to draft a brief, only to discover that the artificial intelligence (AI) application had invented a whole series of supposed legal precedents.

As The New York Times reported this Saturday, the lawyer in trouble is Steven Schwartz, counsel in a case being heard in a New York court: a lawsuit against the airline Avianca filed by a passenger who claims he was injured when he was struck by a service cart during a flight.

Schwartz represents the plaintiff and used ChatGPT to prepare a brief opposing a defense motion to dismiss the case.

The chatbot invented everything

In the ten-page document, the lawyer cited several judicial decisions to support his arguments, but it soon emerged that OpenAI's well-known chatbot had invented them.

“The Court is faced with an unprecedented situation. A submission filed by the plaintiff’s attorney in opposition to a motion to dismiss (the case) is replete with citations to non-existent cases,” Judge Kevin Castel wrote this month.

This Friday, Castel issued an order setting a hearing for June 8 at which Schwartz must explain why he should not be sanctioned for attempting to rely on entirely false precedents.

He did so one day after Schwartz himself submitted an affidavit admitting that he had used ChatGPT to prepare the brief and acknowledging that his only verification had been to ask the application whether the cases it cited were real.

Schwartz defended himself by stating that he had never before used a tool of this kind and was therefore “not aware of the possibility that its content could be false.”

The lawyer stressed that he had no intention of misleading the court and fully exonerated another lawyer at his firm who also faces possible sanctions.

The document closes with an apology in which Schwartz expresses deep regret for using artificial intelligence to support his research and vows never to do so again without fully verifying the authenticity of its output.