Guidance on AI confidentiality and a critique plus more on adjournments
A look at new guidance on confidentiality and AI from the United Kingdom and yet another case on adjournment requests
Getting attention in the online world can be a challenge. Subscriptions to this newsletter have been growing more slowly as of late and there are now just over 250 of you (with an “open” rate hovering around 50%). Please consider sharing this newsletter with your colleagues to spread the word.
I am starting to use Bluesky more regularly - and not just for highlighting this newsletter. If you are on Bluesky, you might want to consider following me there: @adjudicatethis.bsky.social.
The Clawbies (Canadian Law Blog Awards) have been recognizing Canadian law blogs for 20 years. They have now expanded their reach, celebrating free online Canadian legal content: blogs, podcasts, videos, social accounts, legal newsletters, platform commentary, CanLII Connects, white papers, and more. There are only two rules:
#1 “The Humble Canadian”: Please don’t nominate your own publication or project for a Clawbie. The only reliable way to bring your work to our attention is to nominate other candidates — follow this rule, and we’ll take a look at your work too!
#2 “Three and Free”: Nominate up to three digital publications or authors (remember to use the #clawbies2025 hashtag) that are freely available, at no monetary cost. Include a brief explanation of why you think those publications deserve an award.
Nominations will be accepted until the end of day on Monday, December 15th, 2025.
To nominate, all you need to do is list up to three digital publications using the hashtag #clawbies2025 on either Bluesky or LinkedIn.
Winners will be announced on New Year’s Eve.
More guidance on use of AI in the courts
The UK judiciary has released new guidance on the use of AI for judicial office holders. It provides the following guidance on confidentiality and public AI chatbots:
Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world.
The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users. As a result, anything you type into it could become publicly known.
You should disable the chat history in public AI chatbots if this option is available, as it should prevent your data from being used to train the chatbot and after 30 days, the conversations will be permanently deleted. This option is currently available in ChatGPT and Google Gemini but not in some other chatbots. Even with history turned off, though, it should be assumed that data entered is being disclosed.
Be aware that some AI platforms, particularly if used as an App on a smartphone, may request various permissions which give them access to information on your device. In those circumstances you should refuse all such permissions.
In the event of unintentional disclosure of confidential or private information you should contact your leadership judge and the Judicial Office. If the disclosed information includes personal data, the disclosure should be reported as a data incident. …
You should treat all public AI tools as being capable of making public anything entered into them. [emphasis added]
I asked ChatGPT for a critique of the guidance on confidentiality - straight from the horse’s mouth, so to speak. I’ve lightly edited the response.
✅ What the guidance does well
Clear prohibition of confidential inputs to public AI tools — The guidance explicitly warns judicial office holders that they must not enter “private or confidential information … into a public AI chatbot,” because anything input should be “seen as being published to all the world.” (Courts and Tribunals Judiciary) This is a strong, simple principle: privacy and confidentiality are treated as non-negotiable.
Promotion of a secure, private alternative — By making an internal tool, Microsoft Copilot Chat (via the eJudiciary platform), available to judicial office holders, the guidance recognises institutional data-security needs and gives a safer path for any AI-assisted work. (Courts and Tribunals Judiciary)
Risk-awareness and accountability — The document acknowledges that publicly available “chatbots” may store or reuse inputs, that AI outputs can be “inaccurate, incomplete, or misleading” and that any use of AI must be followed by human verification. (ein.org.uk) It also makes clear: anything issued under a judge’s name remains their responsibility. (Courts and Tribunals Judiciary)
In short: the guidance recognises both the convenience and the hazards of AI — and draws a clear line around confidentiality and human responsibility.
⚠️ Where the guidance is weak, vague or under-inclusive
But the “confidentiality” protections come with a number of shortcomings or open questions. These leave potential gaps and risks.
• Over-reliance on user discipline and discretion
The rule “do not put confidential stuff into public bots” depends entirely on individuals always remembering and adhering to it. There is no technical enforcement or automated barrier (e.g., blocking pasting of certain document types or restricting file uploads) for public tools. The guidance essentially relies on trust and good discipline.
There’s also no detailed process described for verifying that users are abiding by the rule — no audits, logging, or oversight mechanisms. It is not obvious how compliance will be monitored or what consequences would follow from a breach (beyond the generic “report a data incident”).
Given how easy it is to make a mistake (or misunderstand what counts as “confidential”), this seems fragile.
• Narrow focus — only “public chatbots” covered; ambiguity for other AI tools
The guidance warns against “public AI chatbots.” (Courts and Tribunals Judiciary) But what counts as a “public chatbot” is left vague. Would a private, but third-party, AI-powered tool — perhaps offered by a commercial legal-tech vendor — count? If a court uses third-party document-review or summarisation tools (cloud-based but subscription or enterprise-grade), do the same confidentiality prohibitions apply? The guidance doesn’t clearly address this.
Relatedly: while the private Copilot Chat tool is sanctioned, it’s unclear whether there is any technical guarantee that data submitted there is completely safe against leaks, oversight, or misuse. The guidance claims that data is “secure and private” when logged into eJudiciary. (Courts and Tribunals Judiciary) But there is no public audit, transparency on data retention/deletion policies, or independent verification described — meaning confidentiality still rests, in part, on trust.
• Insufficient recognition of subtle confidentiality risks — metadata, context, derived data
The guidance seems to treat confidentiality as just “don’t paste private text.” But confidentiality risk is broader. Even seemingly innocuous inputs (client names, anonymised facts, procedural history) may combine in ways that de-anonymise or reveal sensitive data. The policy doesn’t seem to account for this kind of “inference risk.”
Similarly, using AI tools to summarise or process documents could expose data about case contents, legal strategy, or sensitive personal data — even without pasting full documents. The guidance doesn’t provide detailed safeguards or risk analysis for these subtler risks.
• Lack of transparency, auditability, and ongoing oversight
There’s no mechanism for auditing or logging what AI tools are used, what data is input, or what outputs are generated. That makes it difficult to investigate breaches, near-misses, or patterns of misuse.
Without oversight, there is a risk that misuse — intentional or accidental — could go undetected, undermining confidentiality in a latent way.
The guidance is largely principles-based, not prescriptive, which gives flexibility but also leaves a lot of room for interpretation and human error. One legal commentator cited in reporting on the guidance noted there was “no accountability mechanism.” (The Independent)
🔎 Deeper structural problems and what the guidance fails to address
Beyond the weaknesses in drafting and scope, there are systemic issues that the guidance doesn’t fully grapple with — but which, given the context, may have serious consequences.
“Black-box” nature of AI systems — Many AI tools (even enterprise ones) operate opaquely: they take in data, transform it internally, and return outputs, with no visibility into how the data is processed, what is retained, how long it’s stored, or how it’s used for training. The guidance doesn’t require or encourage explainability or transparency from AI providers. That means confidentiality relies on trusting the vendor or service provider — a fragile foundation especially when commercial incentives exist. This echoes concerns raised more broadly in legal-AI scholarship about the “black box problem” undermining accountability. (arXiv)
Lack of holistic data governance framework — Confidentiality isn’t just about what gets typed into a chatbot. It also involves document storage, access controls, auditing, retention/deletion policy, personnel training, logging, incident response, and data-handling protocols. The guidance touches only superficially on a subset of these, failing to outline a comprehensive data governance policy. In an environment as sensitive as a court, that seems inadequate.
Potential chilling effect on transparency and access to justice — This may seem counter-intuitive, but strict confidentiality rules might discourage the use of AI tools even in benign/beneficial ways (e.g., automated redaction, drafting non-substantive documents, summarising public documents) if the rules are unclear. That could slow adoption of useful innovations — or push people toward shadow/unregulated tools.
💡 What a stronger, more confidentiality-robust policy would look like
If the goal is to properly safeguard confidentiality while reaping the benefits of AI, a robust framework would likely need:
Technical safeguards — For example, restricting AI tools to vetted enterprise-grade systems; enforcing end-to-end encryption; prohibiting uploads of sensitive files to third-party servers; segregating data storage; automatic deletion of inputs after processing; and clear access controls.
Transparency & auditability — Logging of AI-tool usage (who used it, when, what was input, what came out), periodic audits, data-breach incident procedures, and independent oversight.
Governance & policy mechanisms — Clear definitions of what counts as “confidential”, regular training for staff/judicial office holders, mandatory review of AI-generated content, and enforcement mechanisms (consequences for misuse or negligence).
Explainability and vendor accountability — Requiring AI providers to document how data is handled, how models are trained or fine-tuned, retention policies, and to provide guarantees that user inputs won’t be used to retrain public models.
Context-sensitive risk assessment — Rather than a blanket ban on “public chatbots,” a formal risk-based assessment: some uses might be low-risk (e.g. summarising public domain material), others high-risk (personal data, case strategy).
🎯 Conclusion: Useful first step — but far from sufficient
The confidentiality component in the updated guidance is necessary and important; it signals that the judiciary recognises the serious risks that generative AI tools pose to privacy and the integrity of proceedings. However, as currently drafted, it amounts to a “good-practice exhortation” rather than a robust, enforceable security framework.
Given the sensitivity of judicial work — legal arguments, personal data, litigant confidentiality, public trust — relying mainly on discretion and good behaviour seems inadequate. Without stronger technical and governance safeguards, the guidance risks being a paper shield against real-world risks.
When a denial of an adjournment request is fair
Adjournment requests can be tricky to deal with. A decision-maker must balance fairness to the requester against prejudice to the other party, all while considering the impact on the tribunal’s caseload.
The recent case of Kim v. Leung, 2025 ONSC 6585 (CanLII) provides a good example of the conditions under which a denial of an adjournment request is appropriate. This was an eviction case before the Landlord and Tenant Board. The Board denied a second adjournment request from the tenant. The court found no basis to interfere with this exercise of discretion.
At her first hearing date, the tenant requested an adjournment to obtain legal advice and to attend a medical appointment. The request was granted and the hearing was rescheduled to a date almost four months later.
At the second hearing date, the tenant requested another adjournment, again stating that she needed to obtain legal representation. She told the Board member that a lawyer friend was supposed to represent her but could not attend that day. She also said that her lawyer friend did not respond to her “whatsoever”.
The Board member asked the tenant about her efforts to obtain legal representation over the previous four months and determined that those efforts had been insufficient. The tenant was also able to obtain advice from duty counsel on the day of the hearing.
The court noted that many tenants represent themselves and found that there is “no unyielding right to legal representation at the Board”.
Lessons for tribunals
Obviously, each request for an adjournment should be assessed on its own merits. However, there are a few good practices to follow when considering an adjournment request made so that a party can obtain legal representation:
• Ask the party what efforts they have made to obtain representation, particularly where a previous adjournment was granted for the same purpose.
• Consider whether other supports, such as duty counsel, are available to the party on the day of the hearing.
• Keep in mind that many parties represent themselves and that there is no unyielding right to legal representation before a tribunal.
• Weigh the fairness to the requester against the prejudice to the other party and the impact on the tribunal’s caseload.