Tsk, Tsk

This story just came across my desk. Apparently, the lawyers for the Chicago Housing Authority, Goldberg Segalla, have been very naughty boys and girls. The story is reported by Lizzie Kane at the Chicago Tribune:

Lawyers hired by the Chicago Housing Authority recently cited Illinois Supreme Court case Mack v. Anderson in an effort to persuade a judge to reconsider a jury’s $24 million verdict against the agency in a case involving the alleged poisoning of two children by lead paint in CHA-owned property.

The problem?

The case doesn’t exist.

In the latest headache for CHA, law firm Goldberg Segalla used artificial intelligence, specifically ChatGPT, in a post-trial motion and neglected to check its work, court records show. A jury decided in January, after a roughly seven-week trial, that CHA must pay more than $24 million to two residents who sued on behalf of their children, finding the agency responsible for the children’s injuries, including past and future damages.

The firm apologized for the error in a June 18 court filing, calling it a “serious lapse in professionalism.”

The Goldberg Segalla partner who used ChatGPT has since been terminated for violating a company policy barring the use of gAI. IMO the problem is greater than a violation of company policy—it was unethical. Although the Chicago Bar Association does not have a standalone policy on the use of gAI, its policies are based on American Bar Association Formal Opinion 512, which says, in part:

Because GAI tools are subject to mistakes, lawyers’ uncritical reliance on content created by a GAI tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties. Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation as required by Model Rule 1.1.

and

Competent representation presupposes that lawyers will exercise the requisite level of skill and judgment regarding all legal work. In short, regardless of the level of review the lawyer selects, the lawyer is fully responsible for the work on behalf of the client.

or, in short, using ChatGPT uncritically, not informing the CHA of its use, and billing for any time spent in training or eliciting information from ChatGPT were all unethical.

I think that every attorney whose name was on any brief filed with the court that used ChatGPT without disclosure should be disbarred. Terminating a single attorney for violating company policy is insufficient.

I also think that ABA FO 512 is far too lenient in that I do not believe lawyers are capable of assessing the reliability of gAI tools without professional technical advice, any more than computer scientists are qualified to prepare legal opinions without professional legal advice.

I don’t oppose the use of gAI tools by attorneys, but I do think that they should have guidance and assistance from qualified professionals, they should disclose the use to their clients, and they should not bill clients at their prevailing rates for time spent using gAI tools.

13 comments
  • Drew Link

    Boy… agree. Would love PD’s thoughts.

  • PD Shaw Link

    There have been some Big Law cases like these lately, and my main question is: what did the lawyer bill the client? Did he bill the actual hours utilizing the GAI tools, or did he bill the time he believes the work would have taken to perform without the GAI tools? Because lawyers in these types of firms have hourly billing quotas or similar “incentives,” they don’t necessarily have incentives towards labor-saving tools. IOW, did the time-savings from the technology accrue to the benefit of the lawyer or the client? That ABA opinion talks about only billing “direct costs” to the client, or getting a client’s agreement in advance about such charges.

    I think some of the larger issues are about client confidentiality. Post-trial motions probably wouldn’t necessarily have that issue, but there is not a lot of reliable information about what information given to a GAI meets the ethical rules. (I’ve seen a similar discussion from a lawyer advising hospitals. The technology is particularly valuable to ER docs being able to access more specialized knowledge in time-constrained settings, but can their input violate HIPAA?)

  • bob sykes Link

    This has happened before to other lawyers. But I wonder whether AI programs would produce fiction in other kinds of applications, like medical diagnosis, structural analysis and design, logistics, tactical or strategical operations… How does one check for that?

    Civil engineers have long used very sophisticated CAD programs to analyze structures that cannot be analyzed any other way. But there is no way to completely check the computer program’s results other than designer intuition and some rough-and-ready paper-and-pencil calculations. Designers normally account for uncertainty by allowing for much larger than expected loads. This results in more material being used and heavier connections between parts, but the motivation to use CAD is exactly to minimize that extra material.
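
    To make that safety-margin idea concrete, here is a minimal sketch of the kind of factored-load check the practice implies. The 1.2 and 1.6 load factors and the 0.9 resistance factor are typical code values cited only for illustration, and the member numbers are invented; none of it comes from the comment above.

```python
# Illustrative sketch only: a simplified load-and-resistance-factor check.
# Expected loads are amplified and nominal strength is reduced, so the design
# carries a margin against the uncertainty the CAD analysis cannot remove.
# The factors (1.2, 1.6, 0.9) are typical code values; the numbers are made up.

def factored_demand(dead_load_kn: float, live_load_kn: float) -> float:
    """Amplify the expected loads to cover load uncertainty."""
    return 1.2 * dead_load_kn + 1.6 * live_load_kn

def design_capacity(nominal_capacity_kn: float, resistance_factor: float = 0.9) -> float:
    """Reduce the member's nominal strength to cover material and model uncertainty."""
    return resistance_factor * nominal_capacity_kn

demand = factored_demand(dead_load_kn=100.0, live_load_kn=50.0)   # 200 kN
capacity = design_capacity(nominal_capacity_kn=250.0)             # 225 kN
print(f"demand {demand:.0f} kN vs. capacity {capacity:.0f} kN ->",
      "OK" if capacity >= demand else "resize the member")
```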

  • But I wonder whether AI programs would produce fiction in other kinds of applications, like medical diagnosis, structural analysis and design, logistics, tactical or strategical operations…

    I’m confident there is a risk of that. That’s why the training of AI for these purposes should not be left in the hands of single individuals; it should be reviewed, verified, and tested thoroughly.
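
    To make “reviewed and verified” concrete for the citation problem at hand, here is a minimal sketch of an automated cross-check that could run before anything is filed. The lookup URL, its query parameter, and the response field are hypothetical placeholders, not any real citator’s API; a firm would point this at whatever trusted case-law database it already subscribes to.

```python
# Illustrative sketch only: confirm that every case a gAI draft cites actually
# resolves in a trusted case-law database before the filing goes out the door.
# The URL and the "found" response field are hypothetical placeholders.
import json
import re
import urllib.parse
import urllib.request

# Very rough reporter-citation pattern (e.g., "123 Ill. 2d 456", "5 U.S. 137").
CITATION_PATTERN = re.compile(r"\b\d+\s+(?:Ill\.\s*2d|N\.E\.2d|U\.S\.)\s+\d+\b")

def extract_citations(draft_text: str) -> list[str]:
    """Pull reporter citations out of a draft brief."""
    return CITATION_PATTERN.findall(draft_text)

def citation_exists(citation: str) -> bool:
    """Ask the (hypothetical) citator whether the citation resolves to a real opinion."""
    url = "https://citator.example.com/lookup?" + urllib.parse.urlencode({"cite": citation})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("found", False)

def verify_draft(draft_text: str) -> list[str]:
    """Return citations that could NOT be confirmed; an empty list means all checked out."""
    return [c for c in extract_citations(draft_text) if not citation_exists(c)]
```

    Any citation that comes back unconfirmed is a prompt to pull the case by hand, which is exactly the step that was skipped here.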

  • PD Shaw Link

    The ABA opinion isn’t an ethical standard; it’s more of a rumination on how existing rules and laws can apply to the technology. The rule at issue here is that the attorneys’ signature on a court filing constitutes a certification that the legal contentions are based upon existing law or nonfrivolous extensions thereof. Citing non-existent legal authority is a clear violation of that obligation, and there is no excuse that a third party was the source (an associate, paralegal, or AI) because the certification imposes the obligation on the signers.

    Violating this rule makes the lawyer subject to sanctions. Eugene Volokh has posted on a lot of these, and I think generally what happens is that the judge issues a rule to show cause why sanctions should not be imposed. Lawyers then explain themselves, and the judge will probably strike the filing with leave to refile (not wanting to harm the innocent client) and award opposing counsel’s costs in bringing the motion. (If it was discovered by the judge’s office, the judge might impose a comparable fine.) I don’t think it’s likely that the judge refers the issue to the attorney licensing authority unless there is an issue of candor to the judge in the explanation. The judge might require notice to the client and may even send a transcript to the client if there are possible issues with improper billing.

    If the lawyers in these cases had simply done what opposing counsel and possibly also the judge’s clerks would foreseeably do and looked up the case, none of this would have happened.

  • If the lawyers in these cases had simply done what opposing counsel and possibly also the judge’s clerks would foreseeably do and looked up the case, none of this would have happened.

    Agreed. I presume that the reason that was not done was that the attorney involved merely assumed that ChatGPT wouldn’t lie (which sort of substantiates the argument I’m making—the decision can’t be left to a single individual’s discretion).

    But something else that concerned me was the possibility that the practice’s policy against gAI may actually have deterred the partner who used ChatGPT from verifying the cases independently. If she had used the practice’s subscription, could that search have been recorded and, possibly, made available to the senior partners?

  • PD Shaw Link

    There was another lawyer sanctioned in Chicago (bankruptcy court) this week. The judge’s order:

    “At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud. This has been a hot topic in the legal profession since at least 2023, exemplified by the fact that Chief Justice John G. Roberts, Jr. devoted his 2023 annual Year-End Report on the Federal Judiciary (in which he “speak[s] to a major issue relevant to the whole federal court system”) to the risks of using AI in the legal profession, including hallucinated case citations.

    “Counsel’s professed ignorance of the dangers of using ChatGPT for legal research without checking the results is in some sense irrelevant. Lawyers have ethical obligations not only to review whatever cases they cite (regardless of where they pulled them from), but to understand developments in technology germane to their practice. And there are plenty of opportunities to learn . . .”

    https://reason.com/volokh/2025/07/19/any-lawyer-unaware-that-generative-ai-research-is-playing-with-fire-is-living-in-a-cloud/#more-8342105

  • PD Shaw Link

    Since 2023, judges all over the country have been issuing standing orders prohibiting the use of GAI in their cases. Some orders require the attorney to certify that citations were checked for authenticity against traditional sources. Some require certification that the use of GAI was consistent with rules protecting client confidence. Some orders require attorneys to inform the court of any use of GAI in cases before them. Here is a list of standing orders in state and federal courts in Cook County:

    https://www.ropesgray.com/en/sites/artificial-intelligence-court-order-tracker/states/illinois

  • TastyBits Link

    Basically, they could have easily verified the AI output, but they refused to do so. What happens when AI is verifying AI output?

    A lot of what is called AI is aggregated Google searches. There is no thinking involved. The other thing called AI is spell-check, grammar-check, math-check, and the like. It simply compares the human’s output against a set of rules.

    Hence, AI can never explain Finnegans Wake.

  • steve Link

    I am sure something similar will happen in medicine if it hasn’t already. From talking with my colleagues still in practice, there is still a lot of concern about turning stuff over to AI, though they consider it a useful aid.

    Steve

  • Hence, AI can never explain Finnegans Wake.

    I doubt that James Joyce could explain Finnegans Wake.

  • TastyBits Link

    If I am not mistaken, his editor did not know which parts were mistakes and which were intentional, and I agree that Joyce probably had no idea either. I do like how the last sentence wraps around to the first. I have been able to get through about a fourth of it, and I had to read it with an Irish brogue. Also, I used the song as a guide.

    Back on-topic:
    I found this interesting, and it mostly mirrors my thinking. AI is a regression algorithm with no exit condition. It is a snake eating its tail.
    Grappling With Existential Panic Over AI

  • My mom spoke with a very faint brogue—I suspect it was the consequence of having a father who was a professional Irishman (and was born in the third quarter of the nineteenth century).
