
Why firms must control the summary of their agreements in the AI era

Written by Chris Fortune | Mar 3, 2026 10:33:33 AM

This article is for SRA-regulated law firms and is written for general information only. It is not legal advice.

There is an uncomfortable shift happening in client behaviour: the full document is no longer the main interface your client uses to understand your engagement letter, funding terms, settlement agreement, or commercial contract. The summary is increasingly becoming the decision-making layer.

That matters because disputes and complaints rarely turn on what a client “skimmed”. They turn on whether the firm can show the client understood key obligations, risks, cost exposure, and scope limitations at the point of commitment. The moment a client outsources understanding to a third-party AI summary (often produced by a tool you do not control, on terms you did not choose), you inherit a new category of regulatory, evidential, and reputational risk.


SRA transparency rules and client understanding

The regulatory direction of travel is clear: clients should be able to make informed choices based on information that is accurate, relevant and understandable. That is the stated purpose of the SRA Transparency Rules: to help members of the public and small businesses make informed choices by ensuring they have “accurate and relevant information” when considering purchasing legal services. 

At the same time, the SRA Codes go beyond “publish the information” and into “ensure the client can use it”. The Code of Conduct for Solicitors includes an express duty to give clients information in a way they can understand, and to ensure they are in a position to make informed decisions about the services they need, how the matter will be handled, and the options available. It also requires firms to provide the best possible information about pricing and likely overall cost at engagement and as the matter progresses. 

These obligations do not disappear because a client chose to read (or not read) the engagement letter. And they become more difficult to evidence if the client’s “actual” understanding was shaped by an AI summary the firm never saw.

Regulatory scrutiny on transparency is also not theoretical. The SRA reports it has issued large volumes of warnings and fixed-penalty fines to firms in relation to transparency rules as part of proactive checks. 

The practical reframing: transparency is no longer just about what is on your website or in your engagement letter. It is also about what a client thinks the letter says after running it through an AI summary layer.

Can clients rely on AI to review engagement letters?

Whether clients should rely on AI is almost beside the point. Many already do, because generative AI is now embedded into everyday information-seeking behaviours in the UK.

A UK government survey on AI skills found that 73% of UK adults had used AI at least once in the past month, and 35% had used generative AI chatbots in the past month. The same research found confidence is low, particularly around keeping information safe and private, and judging the accuracy of AI outputs.

At the platform level, summaries are being normalised as the default interface. Ofcom reports that around 30% of searches now show AI overviews and that more than half of adults say they see these summaries often. In the same report, Ofcom notes that ChatGPT had 1.8 billion UK visits in the first eight months of 2025 (up from 368 million in the same period of 2024). 

And “summarising” is not a niche use-case. The Digital Regulation Cooperation Forum’s consumer research lists common GenAI activities that include summarising research, drafting or shortening text (for example emails/documents), clarifying/summarising search results, and summarising actions from meetings. 

Among younger cohorts, the pattern is even more explicit. YouGov reported in a UK student survey that, among students who use AI for study, 69% use it for summarising sources.

So what happens when a client does the same thing with a 20–30 page engagement letter?

  • “Summarise this in plain English.”

  • “What are the risks?”

  • “Is anything unusual?”

  • “Can the solicitor terminate? What will I still owe?”

If the AI misses (or downplays) a limitation of scope, a liability cap, a termination cost mechanism, a success fee model, or a key assumption in a cost estimate, the client’s “understanding” is no longer anchored to your drafting. It is anchored to an uncontrolled summary that may be inaccurate, incomplete, or misleading.

This is not hypothetical hand-wringing about AI errors. The Courts and Tribunals Judiciary’s AI guidance explicitly warns that public AI chatbots can produce inaccurate, incomplete, misleading or biased information, and that “wrong” answers are not infrequent. It also highlights hallucinations, including made-up citations. 

And the judiciary has already had to address real-world consequences of unchecked AI outputs entering legal processes. In Ayinde and Al-Haroun, the Divisional Court addressed fabricated and incorrect legal materials being put before the court, underscoring professional responsibility to verify AI-assisted content.

For law firms, the key insight is this: even if your drafting is impeccable, your client may be making their decision based on a summary layer you neither authored nor approved.

AI confidentiality risks for solicitors

There is a second-order risk that firms often miss when discussing “clients using AI”: data leakage.

The SRA’s confidentiality guidance reiterates the duty to keep clients’ affairs confidential unless disclosure is required/permitted by law or the client consents, and it emphasises that the duty is unqualified in nature (a duty to keep information confidential, not merely to take reasonable steps). It also stresses the distinction between confidentiality and legal professional privilege. 

Now overlay a client behaviour that is becoming normal: copying a draft engagement letter, settlement agreement, or commercial contract into a publicly available AI chatbot to obtain a “plain English summary”. That document may contain:

  • confidential facts about the client’s matter;

  • personal data (names, emails, pricing information, health or employment context, etc.);

  • third-party information (opponents, counterparties, witnesses);

  • your firm’s proprietary drafting (precedents, risk allocation language, negotiation structure).

The Courts and Tribunals Judiciary’s AI guidance is blunt: do not enter private or confidential information into a public AI chatbot, and treat anything input as “published to all the world”. 

That is judicial guidance aimed at judges, but the underlying risk is the same for clients and solicitors: public-facing AI tools are not designed to give you the assurance that the content is contained, not retained, and not repurposed.

It is also not safe to assume “it won’t be used for training”. Different consumer AI tools take different approaches, and policies change over time:

  • OpenAI states in its terms that it may use user “Content” to develop and improve its services, and provides an opt-out mechanism if a user does not want content used to train models. 

  • Google warns in its Gemini Apps privacy information not to enter confidential information a user wouldn’t want a reviewer to see or Google to use to improve services, and states that a subset of chats can be reviewed by human reviewers to improve products. 

  • Anthropic announced updates to its consumer terms and privacy policy giving users choices about whether chats can be used for model training, alongside extended retention periods for those who allow training. 

Even where enterprise offerings include stronger protections, most clients are not using enterprise environments. They are using consumer-grade “free” or standard accounts, often without reading data controls (ironically mirroring how they treat legal terms).

Data protection obligations amplify the issue. The UK regulator for data protection, the Information Commissioner's Office, maintains detailed guidance on applying UK GDPR principles to AI. It also notes its AI guidance is under review following the Data (Use and Access) Act coming into force in June 2025—an indicator of a rapidly evolving compliance landscape. 

Net result: if clients paste your documents into public AI tools, you can end up with (a) a confidentiality problem, (b) a data protection problem, and (c) an IP leakage problem—without the firm ever knowing it happened.

Engagement letter disputes and enforceability

Even before the AI era, the profession had a persistent problem: many clients do not read (or do not truly process) terms of engagement. That is not a moral failing; it is human behaviour at scale.

Deloitte reported in its UK Digital Consumer Trends research that consumers generally do not read terms and conditions. In 2020, 81% of respondents admitted to “sometimes”, “almost always” or “always” accepting terms and conditions without reading them, and only 7% said they never accept without reading. 

AI does not automatically change that behaviour into careful reading. It often turns it into “AI-assisted decision-making”: the client still does not read the underlying document, but now feels they have. And that is precisely where disputes incubate.

Cost disputes are a particularly exposed category. The Legal Ombudsman has said that around one in ten complaints referred to it centre on the amount consumers have been asked to pay, and that costs issues also feature in many more complaints where communication about costs is a factor. It emphasises keeping clear and accurate records of cost information, including confirmation that the client understands what they will be charged. 

Its separate guidance on costs complaints states that, when handling a complaint, it will check whether the provider ensured that the consumer fully understood what they would or might have to pay right from the start.

Recent case law illustrates how courts scrutinise the adequacy of information given to clients. In Belsner, the High Court’s judgment on costs discusses questions of client understanding and cost information, and—importantly for firms—records the court’s view that the solicitors did not comply with the SRA Code because they neither ensured the client received the best possible information about likely overall cost nor ensured the client was in a position to make an informed decision. 

The Supreme Court has also reinforced (in a different context) that informed agreement can matter to outcomes. In Oakwood v Menzies, the Supreme Court held that “payment” for the purposes of section 70 of the Solicitors Act 1974 requires the client to have been informed of, and to have agreed to, the amount to be paid in respect of the bill, contrasting this with a “general” prior agreement to deductions. 

None of this means that a client can routinely escape contractual commitments by saying “an AI summary didn’t mention it”. But it does mean that AI summaries can become the narrative battleground in:

  • complaints (internal or to the Legal Ombudsman);

  • arguments about whether the client was treated fairly on costs and scope;

  • regulatory scrutiny about the firm’s transparency, communications, and record-keeping;

  • reputational harm, even if the firm is ultimately “right” on the legal merits.

If your engagement letter contains (for example) a narrow scope of work, exclusions, strict termination charges, or a liability cap, an AI summary that calls the terms “standard” can fuel exactly the kind of hindsight surprise that complaints bodies repeatedly warn against. 

How to evidence informed client decisions

The strategic response is not “ban AI”. It is to accept the behavioural reality and change the deliverable.

If clients will seek a summary anyway, the safer posture is to provide an approved summary that you can stand behind—paired with an evidential trail showing what the client was told, what was emphasised, and what they acknowledged.

This aligns with the SRA Code’s requirement to provide information in a way clients can understand and to ensure the client is in a position to make informed decisions. 

It also aligns with the Legal Ombudsman’s repeated emphasis on avoiding surprise and keeping records of the cost information provided (including confirmation that the client understood). 

A practical “summary control” framework for law firms

Build a one-page “Key Terms in Plain English” alongside the engagement letter. Make it part of your standard client inception pack, not a marketing document. Keep it structured and consistent:

  • Scope: what you are doing, what you are not doing, and what would require a new instruction.

  • Costs: charging model, estimate range (and what can move it), billing points, and disbursements/VAT.

  • Termination: who can terminate, when, and what the client may still owe.

  • Liability: headline limitations and any carve-outs (in client-facing language).

  • Client responsibilities: documents/information needed, deadlines, decision points.

  • Complaints: how to raise concerns and when escalation routes apply.

Use “prominence by design” for clauses that drive complaints. The point is not to scare clients; it is to prevent later surprise. Costs and scope limitations are repeat complaint drivers, so treat these as “must-understand” elements rather than boilerplate. 

Add an AI-safe client warning (and a safer alternative). You cannot control what clients do, but you can reasonably warn them that pasting documents into public AI tools may expose confidential information and may be retained or reviewed depending on the provider. Judicial guidance treats disclosure into public AI tools as effectively public. Provider policies also commonly warn against entering confidential information. 

Consider including a short statement such as: “If you would like a plain-English explanation, ask us—we can provide an approved summary. Please do not upload this document to public AI tools.” This is partly about confidentiality, but it is also about reducing the incentive for clients to outsource understanding to an uncontrolled summary layer. 

Capture explicit acknowledgements at the “decision layer”. The Legal Ombudsman explicitly calls out the importance of keeping clear records, including confirmation that the client understands what they will be charged. Build on that principle with a lightweight checklist acknowledgement (digital is fine) that the client has understood the specific points that commonly generate disputes. 

Make your evidence defensible in a dispute. If a complaint later alleges “I relied on an AI summary”, your file should still show:

  • what summary you provided (versioned and dated);

  • what risks you highlighted (and how you made them prominent);

  • what the client acknowledged (e.g., scope/cost/liability/termination);

  • what questions they asked and what you answered (attendance notes).

This is not about turning client onboarding into compliance theatre. It is about making it possible to evidence that the client really was put in a position to make an informed decision, as the SRA Code requires. 

Where appropriate, use better wrappers than “signature = understanding”. There is growing recognition (including in judicial guidance) that summaries can be useful but must be checked and handled responsibly, and that many users will not be skilled at verification when relying on AI tools. 

Some firms are now experimenting with richer ways to capture understanding (structured summaries, layered explanations, and recorded confirmations) especially where risk of later dispute is high (e.g., CFAs/DBAs, high-volume consumer claims, complex retainers, or arrangements involving deductions from damages). The goal is to reduce ambiguity, reduce surprise, and improve the defensibility of the firm’s client-care narrative.

Subtle point: tools that are designed to attach an approved explanation to the contract and keep an audit trail (for example, i agree) fit naturally into this direction of travel—because they focus on the “decision layer” rather than assuming the PDF did all the work.

Closing question

If a client pasted your latest engagement letter into an AI tool today:

  • Would you be confident in the output?

  • Would it reflect your scope, cost and liability allocation accurately?

  • Would you be comfortable defending that output in a complaint or dispute?

In the AI era, drafting the contract is no longer enough. Firms that control the summary will be better placed than firms that leave interpretation to uncontrolled AI.
