The courts and tribunals of England and Wales continue to grapple with the risks of using AI in the preparation of court documents. Several recent decisions focus on the risk of relying on fictitious authorities generated by AI tools, a particular concern in common law jurisdictions, which depend heavily on case precedent.
The growing case law in this jurisdiction demonstrates how seriously the courts take false or misleading material in court submissions, and their willingness to impose tangible sanctions on legal professionals who proffer such material. The courts do not treat the unverified use of AI-generated material as a mere mistake. Legal professionals in England and Wales may be exposed to a wide range of potential penalties, including criminal liability for perverting the course of justice or being held in contempt of court. They may also be subject to regulatory sanctions.
The use of AI by litigants in person (LiPs) raises different considerations, particularly where the litigant cannot be expected to know whether authorities cited by AI are false or to know how to locate or check the underlying case law.
Examples of Recent Decisions
R. (on the application of Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin) has quickly become the leading decision in this area. This decision illustrates the risks of freely available generative AI tools creating false authorities, and therefore the need to verify any AI-generated material against authoritative sources. In this case, counsel included fake case citations in written materials. The court made a wasted costs order and referrals to relevant regulatory bodies for the lawyers involved. The judgment now serves as valuable guidance on the use of AI and the potentially severe sanctions that may be imposed as a result of false material being placed before the court.
The more recent decision in Elden v Revenue and Customs Commissioners [2026] UKFTT 41 (TC) also addressed inaccuracies in court documents arising from the use of AI, specifically in relation to the production of case summaries included in the appellant’s skeleton argument. In Elden, the court imposed specific requirements on the skeleton argument to be submitted for the future substantive hearing. These included the need to annex full judgments of any cases referenced, to use direct quotations from the judgments, and to confirm in the statement of truth that each statement of fact, case summary, or reference has been checked. Requirements of this kind may well be adopted more broadly should such issues persist.
Other specific challenges may arise where LiPs use AI. For example, in Zzaman v The Commissioners for His Majesty’s Revenue and Customs [2025] UKFTT 00539 (TC), a LiP had prepared his written statement of case with AI assistance. The issue in that case was not the citation of fictitious authorities; rather, the LiP relied on inaccurate case citations and failed to provide authority for the propositions advanced. The tribunal offered suggestions for LiPs to mitigate the risks of using AI in preparing court documents, including: (i) instructing the AI tool not to answer the question if it is uncertain of the answer; (ii) asking the AI tool to cite specific paragraphs of authorities so those can be checked; and (iii) asking the AI tool to provide information about any shortcomings in the case being put forward.
The generation of fictitious authorities and inaccurate citations represents only some of the issues arising from the use of AI in litigation. Issues around privilege have also become more prevalent, as illustrated by the recent U.S. decision in United States of America v Bradley Heppner, Case 1:25-cr-00503-JSR. See the eDiscovery Watch blog post here. That case concerned whether counsel could assert attorney-client privilege over a party’s communications with a publicly available AI platform in connection with a pending criminal investigation. In light of the well-known lack of confidentiality of the freely available AI platform, the court held that privilege did not attach.
The CJC Consultation
In February 2026, the Civil Justice Council (CJC) launched a consultation paper considering potential requirements to govern legal representatives’ use of AI to prepare court documents.
More specifically, the consultation considers whether AI-use declarations should apply to specific categories of litigation documents. The CJC’s proposals include:
a) Statements of case, skeleton arguments, and other advocacy documents: whilst a rule could be introduced requiring a specific declaration stating whether AI was used to prepare such documents, the CJC noted that it would favour no new rules for these types of documents, provided that they bear the name of the legal representative taking professional responsibility for them.
b) Disclosure: the CJC noted that there does not appear to be a pressing need to introduce a requirement for disclosure lists/statements to include a section addressing the extent to which AI tools/software have been used.
c) Witness statements: the CJC’s proposals differ according to the nature of the witness statement. For trial witness statements under PD57AC, for example, the CJC proposes that a declaration should be included confirming that AI was not used to generate any of the content.
d) Expert reports: the CJC proposes including a confirmation in the statement of truth that the report identifies and explains any use of AI, other than for administrative purposes.
Outlook
The CJC’s proposals will remain open for consultation until 14 April 2026, so the final scope of any new requirements remains to be seen. Still, appropriate oversight and verification in relation to the use of AI in litigation remain important considerations, and, as demonstrated by recent decisions, the potential sanctions for improper AI usage may be severe.
The courts of England and Wales are not alone in considering the need for new rules around the use of AI in preparing court documents. Other jurisdictions and institutions, including a number of arbitral bodies in England and Wales and beyond, are considering new requirements and dealing with similar issues. Even subtle variations in approach across jurisdictions may give rise to differing outcomes. Going forward, a party’s choice of jurisdiction may be influenced by how AI may (or may not) be used before the relevant court or tribunal.