A close reading of the “AI” fake cases judgment

7 thoughts on “A close reading of the “AI” fake cases judgment”

  1. I agree with all of this.

    As a specialist in the field, I am driven to say that there are real cases supporting all of the claimant’s points except for one – the claimant argued that provision of interim accommodation pending review under section 188(3) Housing Act 1996 was a mandatory duty, and cited the fake case of R (on the application of El Gendi) v Camden London Borough Council in support. This is simply wrong: it is a discretion, not a duty, and there are no cases to support that point.

    My account of this judgment, with even more housing law, is here – https://nearlylegal.co.uk/2025/05/the-cases-that-werent/

  2. Even legal textbooks can wrongly describe what a case decided, so this case is also a warning to any advocate against stacking a document with unnecessary authorities said to support fairly run-of-the-mill propositions without checking each authority first.

    I suspect that the essential irrelevance of the cases was why nobody bothered tracking them down for months before the hearing, and that if they had been vital to the case, or cited for any challengeable proposition, this would either not have happened or would have played out differently.

    The case of Roberto Mata v. Avianca decided in the Southern District of New York in 2023 was a pretty egregious example of citation of fake AI authorities – https://storage.courtlistener.com/recap/gov.uscourts.nysd.575368/gov.uscourts.nysd.575368.54.0_8.pdf

  3. Is there a danger that non-existent “cases” created by AI/LLMs, cited in litigation but not detected at the time, may then begin to be findable in case law through normal research/search methods?

  4. “This case is now being used as morality story – to warn lawyers not to rely on AI in their legal research.”

    What I say to this emphatically is “of course”. Of course lawyers need to be careful, more so than any other profession, not to rely on generated content. They are officers of the court, and litigation involves findings of fact. Lawyers need to be meticulous to the nth degree.

    What you suggest about the likelihood of other erroneous citations going unnoticed ought to be alarming. Those citations are apt to be recycled, and their presence in existing authority would give them weight. There’s no excuse. Lawyers have access to LexisNexis. Use it. That’s what it’s for.

    I notice that the LexisNexis site has something on it about AI. Let’s hope nothing there is a hallucination – though if it were, a lawyer might at least have a reasonable excuse. Surely, though… surely they have carefully designed it to guard against this sort of thing when searching for citations.

  5. I use ChatGPT as a research tool. You can give it standing instructions. My standing instructions are that it must provide a link to each site it quotes. You may not be surprised to hear that as well as creating fake case names, it can create fake links. Yesterday it gave me links to a couple of cases on BAILII that did indeed take me to BAILII, but to a page saying Not Found.

    As DAG says, ChatGPT is *superficially* like a search engine. But it’s not a search engine. I still, however, find it a really useful tool. I can give it a vague description of a half-remembered case and it will usually make a better shot at finding it than I would with Google. The key thing is that you have to check its results. If you have asked it to provide the address of each site it references, it takes very little time to click on each one and make sure that (1) the case is real and (2) it has interpreted it correctly.
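
    For anyone who wants a first automated pass at that check, here is a minimal sketch (the links below are invented examples, and a URL that resolves is no proof that the case says what the chatbot claims – that still needs a human reading the judgment):

    ```python
    # A rough sketch only: check whether each link a chatbot supplied actually
    # resolves to a page that is not a "Not Found" placeholder.
    import urllib.request
    import urllib.error

    # Hypothetical links of the kind a chatbot might supply alongside its answer.
    cited_links = [
        "https://www.bailii.org/ew/cases/EWHC/Admin/2020/9999.html",  # invented
        "https://www.bailii.org/uk/cases/UKSC/2019/1.html",           # plausible format, still to be checked
    ]

    def link_resolves(url: str) -> bool:
        """Return True if the URL fetches successfully and the start of the page
        does not look like a 'Not Found' placeholder."""
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read(4096).decode("utf-8", errors="ignore")
                return "not found" not in body.lower()
        except (urllib.error.HTTPError, urllib.error.URLError):
            return False

    for url in cited_links:
        verdict = "resolves" if link_resolves(url) else "DOES NOT resolve - check it by hand"
        print(f"{url}: {verdict}")
    ```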

  6. It seems to me that the term “AI” is a bit of a misnomer. A large language model may be artificial, but it has no intelligence – it knows nothing about the world in general or the law in particular. These systems are essentially statistical models designed to analyse how language works and to produce new text according to their algorithmic rules (the toy sketch at the end of this comment gives a flavour of what that means).

    It is nothing short of remarkable that large language models can generate text that appears so plausible in the first place. Like Samuel Johnson’s quip about a dog walking on its hind legs – you may be surprised to see it done well, but you should be shocked to find it done at all.

    The likelihood is that today’s models are as bad as they will ever be, and they will only get better in future. For some tasks, they are already as good as, if not better than, people. Who wants to read and summarise a million pages of due diligence materials or disclosed evidence? Even with an unlimited budget, unlimited staff and unlimited time, people make mistakes too. As always, trust, but verify.
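
    And the toy sketch promised above – a tiny word-level bigram model trained on a handful of invented case names, nothing remotely like a real large language model in scale. Even something this trivial will happily emit new “citations” that look like the real thing but refer to nothing at all:

    ```python
    # A toy illustration, not how real LLMs are built: a word-level bigram model
    # that only records which word tends to follow which, then samples new text.
    import random
    from collections import defaultdict

    # A handful of invented "case names" as training text.
    training_text = (
        "R v Smith . R v Jones . Smith v Camden London Borough Council . "
        "Jones v Westminster City Council . R v Westminster City Council ."
    )

    # Count which word follows which.
    follows = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    # Generate a new "citation" by repeatedly sampling a plausible next word.
    random.seed(1)
    word, output = "R", ["R"]
    while word != "." and len(output) < 12:
        word = random.choice(follows[word])
        output.append(word)

    # Prints something that looks like a citation but was never a real case.
    print(" ".join(output))
    ```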
