This story just came across my desk. Apparently, the lawyers for the Chicago Housing Authority, Goldberg Segalla, have been very naughty boys and girls. The story is reported by Lizzie Kane at the Chicago Tribune:
Lawyers hired by the Chicago Housing Authority recently cited Illinois Supreme Court case Mack v. Anderson in an effort to persuade a judge to reconsider a jury’s $24 million verdict against the agency in a case involving the alleged poisoning of two children by lead paint in CHA-owned property.
The problem?
The case doesn’t exist.
In the latest headache for CHA, law firm Goldberg Segalla used artificial intelligence, specifically ChatGPT, in a post-trial motion and neglected to check its work, court records show. A jury decided in January, after a roughly seven-week trial, that CHA must pay more than $24 million to two residents who sued on behalf of their children, finding the agency responsible for the children’s injuries, including past and future damages.
The firm apologized for the error in a June 18 court filing, calling it a “serious lapse in professionalism.”
The Goldberg Segalla partner who used ChatGPT has since been terminated for violating a company policy barring the use of gAI. IMO the problem is greater than a violation of company policy—it was unethical. Although the Chicago Bar Association does not have a standalone policy on the use of gAI, its policies are based on American Bar Association Formal Opinion 512 which says, in part:
Because GAI tools are subject to mistakes, lawyers’ uncritical reliance on content created by a GAI tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties. Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation as required by Model Rule 1.1.
and
Competent representation presupposes that lawyers will exercise the requisite level of skill and judgment regarding all legal work. In short, regardless of the level of review the lawyer selects, the lawyer is fully responsible for the work on behalf of the client.
In short: using ChatGPT uncritically, failing to inform the CHA of its use, and billing for any time spent prompting ChatGPT or eliciting information from it were all unethical.
I think that every attorney whose name was on any briefs filed with the court that used ChatGPT without disclosing that should be disbarred. Terminating a single attorney for violating company policy is insufficient.
I also think that ABA FO 512 is far too lenient: I do not believe that lawyers are capable of assessing the reliability of gAI tools without professional advice, any more than computer scientists are qualified to prepare legal opinions without professional legal advice.
I don’t oppose the use of gAI tools by attorneys, but I do think that they should have guidance and assistance from qualified professionals, they should disclose the use to their clients, and they should not bill clients at their prevailing rates for using gAI tools.