The Upper Tribunal has ruled that uploading client documents to ChatGPT destroys legal privilege and breaches confidentiality. Munir v SSHD [2026] UKUT 81 (IAC) is the first UK tribunal decision to draw a clear line between public AI platforms and enterprise AI tools — and the consequences for practitioners who end up on the wrong side of that line are severe.
Disclosure: This article is published by Search the Law, a commercial legal research platform. Where the article references legal research tools, readers should be aware of that connection. The legal analysis in this article has been independently verified against the primary sources cited.
What Happened
On 17 November 2025, a three-judge panel of the Upper Tribunal (Immigration and Asylum Chamber) handed down a decision in two consolidated cases that will reshape how every law firm in England and Wales thinks about artificial intelligence. The decision in R (on the application of Munir) v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 81 (IAC) addresses two separate failures — one involving an adviser who self-reported, and another involving a solicitor who was referred to the SRA — but its broader guidance on AI, privilege, and supervision applies to every regulated legal professional in the country.
The cases arose under the Tribunal’s Hamid jurisdiction, the inherent power of a superior court of record to investigate whether representatives appearing before it have maintained proper professional standards. Both involved fictitious case citations placed before the Upper Tribunal. Both involved the actual or suspected use of AI large language models. But the Tribunal’s conclusions went far beyond hallucinated citations — into territory that affects any practitioner who has ever pasted client material into a chatbot.
The First Case: TMF Immigration Lawyers
Mr Tahir Mohammed, a level 3 IAA-accredited adviser at TMF Immigration Lawyers, drafted grounds of appeal to the Upper Tribunal that cited a case called Horleston v SSHD [2007] EWCA Civ 654. The case does not exist. The citation actually belongs to South Tyneside Metropolitan Borough Council v Anderson & Others, an equal pay case with no connection to immigration law.
When the Upper Tribunal issued a show cause notice, Mr Mohammed initially stated that no AI had been used and that the error was the result of “human error.” He later filed a witness statement acknowledging that, having reviewed his browsing history, he could not explain how the fictitious case appeared and could not dismiss the possibility that it was AI-generated. His conclusion: it “occurred unknowingly.”
The Tribunal investigated further at the oral hearing. Mr Mohammed confirmed that while he had not used ChatGPT to draft grounds of appeal, he had used it for other tasks at the firm — uploading Home Office decision letters to summarise them for clients, and pasting client emails into ChatGPT to improve the drafting. He told the Tribunal that he had only understood the risks of this since receiving the show cause notice.
Critically, Mr Mohammed had already self-reported to both the IAA and the SRA before the hearing. The Tribunal confirmed that no referral was necessary given this self-reporting, but made clear that a referral would have been made had he not done so himself. The Panel also noted that the danger is not confined to generative AI models like ChatGPT — by posing the same question to Google AI in slightly different ways, the judges were able to elicit various different compositions of the bench supposedly deciding the fictitious Horleston case. Each suggested judge had been sitting in the Court of Appeal at the relevant time, but none could have sat on a case that never existed.
The Second Case: CLP Solicitors
The second case was far more serious. Zubair Rasheed, the senior solicitor and Compliance Officer for Legal Practice (COLP) at City Law Practice Solicitors and Advocates, signed the UTIAC1 claim form and statement of truth in judicial review proceedings brought on behalf of Fiza Munir. The grounds of review ran to twenty pages and contained four false citations, identified by Upper Tribunal Judge Blundell when refusing permission.
The fictitious authorities included R (Dzineku-Liggison) v SSHD with an incorrect High Court citation (the real case is an Upper Tribunal decision — in fact, one of Judge Blundell’s own), Patel (mandatory refusal – fairness) attributed to the Court of Appeal when it is a UT decision, R (Muhandiramge) v SSHD with a non-existent Administrative Court citation, and OE (Nigeria) with a citation that simply does not exist.
Mr Rasheed stated that the grounds had been drafted by Waheed Malik, described initially as a “part-time trainee lawyer” and later revealed to be a “very junior caseworker” — and Mr Rasheed’s brother. The explanation offered was that Mr Malik had relied on “an outdated precedent on our system, practitioner blogs and personal notes” and had not verified references against official sources.
The Tribunal found Mr Rasheed’s evidence deeply unsatisfactory. He was hesitant to confirm Mr Malik’s whereabouts, initially reluctant to reveal the family relationship, unable to produce case management records (the spreadsheet entry for the case had been deleted, with shifting explanations for when and why), and unable to identify how many other cases Mr Malik had worked on or whether those files contained similar errors. The Tribunal noted that Mr Rasheed appeared to be “tailoring his answers to minimise our concerns.”
The Panel referred Mr Rasheed to the SRA.
The Two Cases Compared
| | TMF Immigration (Mr Mohammed) | CLP Solicitors (Mr Rasheed) |
|---|---|---|
| Role | Level 3 IAA-accredited adviser | Senior solicitor and COLP |
| False citations | 1 (Horleston v SSHD) | 4 (Dzineku-Liggison, Patel, Muhandiramge, OE Nigeria) |
| Explanation | Probable inadvertent AI use; could not explain from browsing history | “Outdated precedent” and junior caseworker drafted grounds |
| AI use admitted | Yes — ChatGPT for client emails and summarising decision letters | Denied any AI use at the firm |
| Cooperation | Full — self-reported to IAA and SRA before hearing | Evasive evidence; shifting explanations; deleted records |
| Supervision issue | Solo practitioner — own work | Brother (“very junior caseworker”) drafted JR grounds unsupervised |
| Outcome | No referral (already self-reported) | Referred to SRA |
The Privilege Ruling
The Tribunal’s most consequential finding had nothing to do with hallucinated citations. At paragraphs 21 and 60, the judges stated in terms that placing client correspondence and Home Office decision letters into “an open-source AI tool, such as ChatGPT” is to place that information “on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege.” Any regulated professional or firm that does so would need to notify their regulator and should consult with the Information Commissioner’s Office.
A note on terminology: the Tribunal used the phrase “open-source” to describe public-facing consumer AI platforms such as ChatGPT. This is technically imprecise — in software engineering, “open-source” means the source code is publicly available (as with Meta’s LLaMA or Mistral), whereas ChatGPT is a proprietary product operated by OpenAI on its own servers. The distinction the Tribunal drew is between tools that process user data on external servers outside the user’s control and tools that process data within the user’s enterprise environment. This article uses the Tribunal’s language in direct quotations but refers to “public AI platforms” and “enterprise AI tools” in its own analysis.
This is a finding of extraordinary reach. Legal professional privilege — which protects communications between lawyer and client made for the purpose of giving or receiving legal advice — is a fundamental right. The Supreme Court in R (Prudential plc) v Special Commissioner of Income Tax [2013] UKSC 1 described it, in the context of considering whether the privilege extends beyond lawyers, as a fundamental condition on which the administration of justice rests. Once waived, it cannot be reclaimed. A document that has entered the public domain has lost its privilege permanently.
The Tribunal’s reasoning turns on the architecture of the AI tool. Public AI platforms like ChatGPT process user inputs on external servers. Data submitted to them may be used to train future models, may be accessible to the platform operator, and — in the Tribunal’s analysis — is placed in the public domain. The Courts and Tribunals Judiciary guidance issued in October 2025 used comparable language, stating that information entered into a public AI chatbot should be treated as “published to the world.”
But the Tribunal drew an explicit distinction. Enterprise AI tools “which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks.” This is the first time a UK court or tribunal has drawn a clear architectural line between AI tools that are acceptable for use with privileged material and those that are not.
The Supervision Principle
The Tribunal was emphatic that the cases were not merely about AI. At paragraph 37: “it is principally about supervision and the obligation to ensure that the tribunal is not misled. It matters not how such citation errors come about. Whether they are inserted by a hapless trainee or by ChatGPT is really neither here nor there.”
The critical principle is at paragraph 38: a supervisor who fails to ensure that the work of a more junior fee-earner is free from false citations “is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such hallucinations.” The reasoning is that the supervisor fails not only the tribunal, the public, and the client, but also fails to aid the development of more junior lawyers.
This has direct implications for every firm that delegates legal research or drafting to trainees, paralegals, or caseworkers. The COLP or supervising solicitor cannot plead ignorance of how their staff produced the work. They are expected to know that anyone with access to Google has access to AI, and to have put appropriate checks in place. Mr Rasheed’s claim that he was not a “tech savvy person” was dismissed as no explanation at all.
The New Statement of Truth
In direct response to the increase in fictitious citations, the Upper Tribunal has amended the UTIAC1 claim form. Legal representatives must now confirm by statement of truth that every authority cited in the form or accompanying documents (a) exists, (b) may be located using the citation provided, and (c) supports the proposition of law for which it is cited. Other forms and directions are to be similarly amended.
A legal representative who signs such a statement in a case containing false authorities “should ordinarily expect to be referred to their regulatory body.” This is a significant shift. Previously, filing false citations was a matter of professional misconduct that the tribunal might investigate. Now, it is a matter of verified personal attestation — closer in character to a false statement of truth, with all the regulatory consequences that follow.
The Regulatory Landscape
The SRA’s thematic review of AI in legal services, published in December 2025, found that only 1 out of 36 Compliance Officers for Legal Practice could accurately describe their firm’s obligations when using AI tools. The review highlighted that most firms had no formal AI usage policy, and that many COLPs were unaware of the data protection and confidentiality implications of submitting client material to external AI platforms.
This is a concerning gap in regulatory understanding, particularly when placed alongside the Tribunal’s findings in Munir. The ruling makes clear that ignorance is not a defence. A COLP who cannot describe their firm’s AI obligations is a COLP who is failing in their supervisory duties. The SRA’s Risk Outlook report has identified AI as a priority area for regulatory attention, and Munir gives the SRA a clear precedent for enforcement action.
The IAA, which regulates immigration advisers, has not yet issued specific guidance on AI. The Tribunal noted this gap and suggested the organisation “may wish to consider whether it may be prudent to do so.”
International Context
The UK is not alone in grappling with these questions. In the United States, Mata v Avianca, No. 22-cv-1461 (S.D.N.Y. 2023, Judge Castel), was the first widely reported case in which attorneys submitted ChatGPT-generated fictitious citations; the court imposed a $5,000 penalty on the lawyers and their firm. More recently, in United States v Heppner, No. 25-cr-00503 (S.D.N.Y., February 2026, Judge Rakoff), a federal court held that communications with the AI assistant Claude could not attract attorney-client privilege. Judge Rakoff’s primary ground was that Claude is not an attorney, so the first element of privilege — a communication between client and lawyer — was not satisfied. As an additional ground, the court held that submitting documents to a public AI platform constituted voluntary disclosure to a third party, waiving any privilege that might otherwise have attached.
The analytical distinction between Heppner and Munir is worth noting. In Heppner, the US court’s primary analysis was that privilege never properly arose in the first place. In Munir, the Tribunal’s framework is different: privilege existed in the client communications, but was lost through disclosure to a public platform. These are different routes to the same practical outcome, but they matter for firms assessing risk across jurisdictions — the UK framework turns on what happens to data after it enters the AI tool, while the US framework questions whether the communication was privileged to begin with.
In England, the Divisional Court’s decision in R (Ayinde) v London Borough of Haringey [2025] EWHC 1383 (Admin) — which the Munir panel cited extensively — set out the framework for dealing with AI-generated false citations, including the principle that referral to a regulator is “likely to be appropriate” where proper checks have not been carried out. Munir extends that framework by adding the privilege dimension: it is no longer just about hallucinated citations, but about what happens to confidential data when it enters a public AI system.
The emerging international consensus is clear: public AI platforms and privileged material do not mix. The question is no longer whether courts will act, but how quickly regulatory frameworks will catch up with the pace of adoption.
Practical Implications for Firms
The Munir decision creates immediate obligations for every regulated legal professional and firm. The Tribunal was careful to state, at paragraph 18, that it does “not suggest for a moment that the use of legal AI programmes by properly trained professionals is anything other than a step forward in legal practice.” The issue is not AI itself — it is which AI, used how, and with what safeguards.
Firms should consider the following steps as a minimum:
Audit your AI tools. Every platform used by any fee-earner must be classified as public (data leaves the firm’s control and is processed on external servers) or enterprise (data stays within a controlled environment). Public AI tools — including free-tier ChatGPT, Google Gemini, and any platform that processes inputs on external servers — must not be used with any client-confidential material. The distinction is architectural, not commercial: a paid subscription to ChatGPT does not change the underlying data processing model.
Implement a verification protocol. Every citation in every document filed with a court or tribunal must be independently verified against an official source. The National Archives case law database, BAILII, and specialist legal research platforms provide verifiable citations. No citation should be filed that has not been confirmed to exist, to carry the stated citation, and to support the proposition for which it is cited.
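The first step of any such protocol is knowing which citations a document actually contains, so that each one can be checked against an official source. The sketch below, a minimal illustration rather than a complete solution, extracts UK neutral citations from draft text with a regular expression; the court codes listed are a sample, not an exhaustive set, and matching a citation's format says nothing about whether the case exists or supports the proposition cited — that check must still be done by a person against an official database.

```python
import re

# Illustrative pattern for UK neutral citations such as
# "[2026] UKUT 81 (IAC)" or "[2025] EWHC 1383 (Admin)".
# The court codes here are a sample only, not an exhaustive list.
NEUTRAL_CITATION = re.compile(
    r"\[(?P<year>\d{4})\]\s+"
    r"(?P<court>UKSC|UKPC|UKUT|UKFTT|EWCA|EWHC|EWCOP|EWFC)\s+"
    r"(?:Civ\s+|Crim\s+)?"            # EWCA divisions appear before the number
    r"(?P<number>\d+)"
    r"(?:\s+\((?P<division>[A-Za-z]+)\))?"  # e.g. (IAC), (Admin)
)

def extract_citations(text: str) -> list[str]:
    """Return each neutral citation found, with whitespace normalised."""
    return [" ".join(m.group(0).split()) for m in NEUTRAL_CITATION.finditer(text)]

sample = (
    "See Munir v SSHD [2026] UKUT 81 (IAC) and "
    "R (Ayinde) v Haringey [2025] EWHC 1383 (Admin)."
)
for citation in extract_citations(sample):
    # Each extracted citation still requires manual verification
    # against the National Archives database or BAILII.
    print(citation)
```

A list produced this way is only a checklist for human verification: a hallucinated citation can be perfectly well-formed, as the fictitious Horleston citation in Munir was.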
Train all staff. The Tribunal made clear that COLPs must ensure that fee-earners are aware of the dangers of using non-specialist AI for legal research and drafting. Training must cover the distinction between specialist legal AI tools and general-purpose large language models, the privilege implications of uploading client material to external platforms, and the verification steps required before any AI-assisted work is filed.
Update supervision procedures. The principle that supervisors bear greater culpability than the individuals they supervise means that delegation of research or drafting to junior staff must be accompanied by documented review. A supervisor who signs a statement of truth without checking citations is at greater regulatory risk than the person who drafted the document.
Choose the right research tools. The Tribunal explicitly endorsed the use of enterprise AI tools that do not place data in the public domain. Specialist legal research platforms that search official databases and return verifiable citations operate within the framework the Tribunal approved. The key characteristics to look for are: data processed within a controlled environment, results drawn from verified legal sources, and citations that can be independently checked. Platforms designed specifically for legal research — including tools like Search the Law, which searches 15 official UK legal databases using closed-source AI models and never requires users to upload client documents — represent the approach the Tribunal distinguished from the practices it condemned.
What the Tribunal Got Right
The Munir decision is notable for what it did not do. It did not ban AI. It did not suggest that legal technology is inherently dangerous. It drew a careful, architecturally informed distinction between tools that place data in the public domain and tools that do not. It placed the responsibility where it belongs — on the qualified professional with conduct of the matter, not on the technology.
The Tribunal also resisted the temptation to treat the two cases identically. Mr Mohammed, who self-reported promptly and cooperated fully, was not referred to any regulator. Mr Rasheed, whose evidence was evasive and whose supervisory failures were systemic, was referred to the SRA. The graduated response sends a clear message: the profession will be held to standards, but honesty and accountability will be recognised.
A note on authority: Munir is an Upper Tribunal decision. The Upper Tribunal is a superior court of record and its decisions are highly persuasive, but it does not formally bind the High Court or above. That said, the decision builds directly on the Divisional Court’s reasoning in Ayinde, and no higher court has expressed a contrary view on any of these principles. For practical purposes, any firm that ignores this guidance does so at considerable regulatory risk. For practitioners, the practical question is straightforward: if you cannot explain to a tribunal exactly how your AI tools handle client data, you are not ready to use them.
Researching the Authorities
The case law on AI and professional conduct is developing rapidly. Key search terms for monitoring this area include:
For AI hallucinations and false citations: “Munir v SSHD [2026] UKUT 81”, “Ayinde v Haringey [2025] EWHC 1383”, “Hamid jurisdiction AI”, “fictitious citations tribunal”, “AI hallucination legal research.”
For privilege and AI: “legal privilege AI waiver”, “open source AI confidentiality”, “ChatGPT legal professional privilege”, “closed source AI legal practice”, “client confidentiality artificial intelligence.”
For supervision obligations: “COLP AI supervision”, “SRA AI obligations solicitors”, “supervisor culpability junior fee-earner”, “delegation legal research AI checks.”
For regulatory developments: “SRA thematic review AI 2025”, “SRA Risk Outlook artificial intelligence”, “IAA AI guidance immigration”, “statement of truth cited authorities UTIAC1.”
Search the Law indexes decisions from the Upper Tribunal, High Court, Court of Appeal, and Supreme Court alongside 15 other official databases, with citation network analysis showing how subsequent decisions have treated each authority. As the regulatory response to AI in legal practice accelerates through 2026, tracking how courts apply the principles established in Munir and Ayinde will be essential for every firm’s compliance framework.
Munir v Secretary of State for the Home Department
[2026] UKUT 81 (IAC) — Upper Tribunal (Immigration and Asylum Chamber)
Search the Law is not a law firm and does not provide legal advice. The information in this article is for legal research purposes only. If you need advice about professional conduct or regulatory obligations, contact the Solicitors Regulation Authority (0370 606 2555), the Immigration Advice Authority, or the Law Society.