The FCA published its Consumer Duty review on consumer understanding in March 2026, and it deserves more attention than it has received. This isn't a routine update. It's a detailed look at how regulated firms are actually performing against the consumer understanding outcome — and the findings are uncomfortable reading for most of the sector.
The central problem the FCA identifies is one that runs through financial services like a fault line: firms are measuring the wrong things. Complaint volumes, sales data, absence of queries — these are being treated as evidence that customers understood what they agreed to. The FCA is explicitly saying they are not. You can have low complaints and widespread confusion at the same time. The two are not the same thing and never have been.
What this review does, more clearly than anything the FCA has published previously on Consumer Duty, is set out what evidence of understanding actually looks like. It gives examples of good practice, names the failure modes it keeps finding, and draws a direct line between communication design, testing, governance, and the outcomes firms are supposed to be delivering. Read properly, the review makes clear that the gap between what most firms are doing and what the Duty actually requires is wider than most compliance teams would be comfortable admitting.
This blog walks through the key findings and explains what genuinely meeting the consumer understanding outcome requires in practice — and how i agree addresses each one directly.
What this blog contains:
- The core problem: firms are still measuring the wrong things
- Testing whether customers actually understand: what the FCA expects
- Communication design: beyond shortening text
- Vulnerability and accessibility: proactive, not reactive
- Governance and oversight: clear ownership of the understanding outcome
- What good looks like in practice
- Conclusion
- References
The Core Problem: Firms Are Still Measuring the Wrong Things
The FCA's review surveyed 38 firms across insurance, retail banking, payments, consumer finance, and CFD providers. Across almost every area it looked at, the same weakness kept appearing: firms were collecting data but not using it to assess whether customers actually understood anything.
Drop-off rates, complaint volumes, sales conversion — these are activity metrics. They tell you what customers did. They don't tell you what customers understood. The FCA found that several firms continued to rely on sales data or the absence of complaints as evidence of understanding, and it makes clear this does not provide reliable assurance.
This matters because confusion rarely announces itself as a complaint. It shows up as poor decisions, unexpected cancellations, fee disputes that surface months after the agreement was signed, and customers who agreed to something they didn't properly understand and are now unhappy about it. By the time any of that generates a formal complaint, the harm has already happened. Measuring complaint volumes tells you the damage. It doesn't tell you the cause, and it certainly doesn't tell you the current state of customer understanding across your book.
The FCA's own Financial Lives Survey data sits behind this finding: 6.3 million adults in the UK have limited understanding of the financial products they hold. 10.3 million have low confidence with everyday numeracy. These are not people who are complaining. They are people who are quietly unclear about what they agreed to. If your compliance evidence is built around complaints data, you are not seeing most of the problem.
Testing Whether Customers Actually Understand: What the FCA Expects
The section of the review on management information and testing is where the FCA is most specific about what it expects — and where the gap between good and poor practice is most stark.
Good practice, according to the review, means testing communications both before and after changes. It means using comprehension checks, A/B testing, surveys, and customer callbacks to verify whether changes actually improved understanding. It means documenting what changed, why it changed, and what the measured impact was. One firm the FCA highlighted tested all new product communications with customers including those with characteristics of vulnerability, set an internal target of at least 80% correct recall of key points, and only signed off changes that met that threshold.
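The sign-off rule that firm applied can be sketched as a simple threshold check. This is a hypothetical illustration, not anything from the review or the i agree platform: the 80% figure comes from the FCA's example, but the function names and data shape are assumptions.

```python
def recall_rate(responses):
    """Share of key points each tested customer recalled correctly.

    `responses` maps customer IDs to lists of booleans, one per
    key point in the communication (True = recalled correctly).
    """
    return {
        customer: sum(answers) / len(answers)
        for customer, answers in responses.items()
    }

def passes_sign_off(responses, threshold=0.8):
    """Sign off a communication only if the average correct-recall
    rate across tested customers meets the internal target."""
    rates = recall_rate(responses)
    average = sum(rates.values()) / len(rates)
    return average >= threshold

# Example: three tested customers, four key points each
tested = {
    "c1": [True, True, True, True],
    "c2": [True, True, True, False],
    "c3": [True, True, False, True],
}
print(passes_sign_off(tested))  # average recall is about 0.83, so this meets the 80% target
```

The point of the sketch is the discipline it encodes: a communication change only ships when a measured comprehension result clears a defined bar, which is exactly the evidence trail the FCA found missing in the poor-practice firms.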
Poor practice — which the review found in a significant number of firms — looked like this: saying you had tested communications without being able to show any evidence. Making changes and not checking whether they worked. Collecting MI that couldn't be connected to any decision. Relying on the absence of complaints as the proxy for understanding.
The FCA also found a failure mode that is worth highlighting specifically: firms making cosmetic changes to communications — shorter text, new colour schemes, different icons — without testing whether any of it improved comprehension. The format changed. Whether customers understood the content better was never checked. These changes create the appearance of improvement while potentially leaving the underlying problem untouched.
A proper consent audit trail addresses this directly. When every interaction with a piece of communication is logged — what was watched, what was read, what comprehension prompts were completed, what questions were raised — you have actual evidence of engagement and understanding, not a guess based on whether complaints went up or down. That is the kind of MI the FCA is asking for, and it is the kind of MI that i agree produces as a matter of course for every client interaction.
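To make the idea concrete, an append-only consent audit trail can be sketched like this. It is a minimal illustration under assumed names: the event types and structure are invented for the example and are not the i agree data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentAuditTrail:
    """Append-only, timestamped log of one client's consent journey."""
    client_id: str
    events: list = field(default_factory=list)

    def log(self, event_type: str, detail: str) -> None:
        # Every interaction is recorded with a UTC timestamp,
        # so the firm can later evidence engagement rather than infer it.
        self.events.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "type": event_type,
            "detail": detail,
        })

    def summary(self) -> dict:
        """Counts by event type: the kind of comprehension-focused
        MI a governance committee could actually review."""
        counts = {}
        for e in self.events:
            counts[e["type"]] = counts.get(e["type"], 0) + 1
        return counts

trail = ConsentAuditTrail("client-001")
trail.log("video_watched", "key terms summary, watched to completion")
trail.log("comprehension_check", "4 of 4 key points answered correctly")
trail.log("question_raised", "asked about early cancellation fee")
trail.log("confirmation_recorded", "voice confirmation captured")
print(trail.summary())
```

The design choice worth noting is that the log is append-only and timestamped at the point of interaction: the evidence is created as understanding happens, not reconstructed afterwards from complaint volumes.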
Communication Design: Beyond Shortening Text
One of the more useful things the FCA review does is explain what it means by good communication design — and it goes significantly beyond writing in plain English, which is where most firms stop.
The review describes effective communication design as layered, engaging, relevant, simple, and well-timed. Layering means presenting the most important information upfront and allowing customers to access more detail progressively, rather than front-loading everything at once. Well-timed means providing information at the point in the customer journey where it is actually relevant and actionable — not weeks before a decision needs to be made, and not buried in a document that was sent at the start of the relationship and not looked at since.
The FCA cites interactive tools, calculators, videos, walkthroughs, and clickable FAQ sections as examples of good practice. It explicitly notes that firms are using short videos, interactive diagrams, and real-time prompts to help customers understand processes and avoid common mistakes. It describes these approaches as practical technology that improves clarity and reduces the chance of misunderstanding.
What it criticises is the opposite: overlong documents with no visual hierarchy, summaries, or navigational cues. Customers being expected to locate critical information in dense text without any support. Firms that had committed to plain language but could show no evidence it had actually been embedded in the communications going out to customers.
This is directly what the i agree consent journey is designed to solve. Key terms are presented in plain English before the customer reaches the full document. A short video summary covers the most important points — fees, obligations, cancellation rights, key risks — in a format that significantly improves retention compared to reading alone. The production effect and multimodal learning aren't just theoretical principles here; they are the mechanism by which i agree turns a communication exercise into a comprehension exercise. The FCA is describing exactly this model as what good looks like.
Vulnerability and Accessibility: Proactive, Not Reactive
The vulnerability section of the review draws a clear distinction between firms that have embedded vulnerability identification into their processes and firms that respond to vulnerability when it becomes obvious. The FCA wants the former and found too much of the latter.
Good practice means identifying potential vulnerability at key decision points — onboarding, renewal, moments of financial stress — and adapting communication accordingly. One firm the FCA cited introduced vulnerability assessments during onboarding and at early signs of payment difficulty, producing timestamped notes visible to frontline staff and adjusting communication based on the result. The FCA notes that after carrying out several hundred assessments, the firm saw higher engagement and faster customer action — outcomes, not just process.
Poor practice means relying on general awareness rather than specific mechanisms. Several firms in the review told the FCA they had "considered vulnerability" but couldn't explain how that translated into practical changes or measurable outcomes. Others did not test communications with people who had accessibility needs, lower financial capability, or language requirements — which meant they had no way of knowing whether those groups were receiving communications they could understand.
The FCA's data is stark on this: research cited in the review found that 1 in 7 adults have literacy skills at or below those expected of a 9 to 11-year-old. 34% of adults have poor or low levels of financial numeracy. These are not edge cases. They describe a substantial portion of most regulated firms' customer bases. Designing communications that assume a level of literacy and financial confidence that a third of your customers don't have is not a compliance failure waiting to happen — it is a compliance failure that is already happening.
i agree addresses this through the structure of the consent journey itself. Clients can engage with the key terms through text, audio, or video depending on what works for them. The platform supports slower-paced journeys and lower-text modes for clients who need them. The voice and video confirmation step means that the consent record captures something active and personal — not a passive click from someone who may have scrolled past the content entirely. And because every step is logged, the firm can demonstrate that a vulnerable client received the same standard of consent process as any other client, with the same quality of evidence behind it.
Governance and Oversight: Clear Ownership of the Understanding Outcome
The governance section of the review is arguably the most important for senior leaders and compliance officers. The FCA found that many firms don't have clear accountability for the consumer understanding outcome. Decisions are made without meaningful data. MI is collected but doesn't feed into any governance process. Issues are escalated but there's no evidence of what changed as a result.
Good practice means having named senior responsibility for the consumer understanding outcome, structured governance that brings together product, operations, customer service, risk, and compliance, and regular review of MI that is specifically designed to measure comprehension — not just activity or sentiment. The FCA is explicit that KPIs should be comprehension-driven, not compliance-driven. There is a difference between a metric that tells you a communication was sent and a metric that tells you it was understood.
The review also identifies a weak feedback loop as a systemic problem. Firms were monitoring communications but not using what they found to improve them. The FCA's expectation is that monitoring leads to action, action is documented, and the impact of changes is tested and recorded. That cycle — monitor, change, test, record — is what an evidenced approach to consumer understanding looks like. Without it, firms are accumulating data but not improving outcomes.
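That monitor, change, test, record cycle can be sketched as a simple evidence record. This is a hypothetical structure for illustration only; the field names, the example figures, and the improvement rule are all assumptions, but the shape is the point: each change carries its own measured before-and-after impact.

```python
from dataclasses import dataclass

@dataclass
class CommunicationChange:
    """One documented pass through the monitor, change, test, record loop."""
    communication: str
    issue_found: str            # monitor: what the MI surfaced
    change_made: str            # change: what was altered and why
    comprehension_before: float # test: measured with real customers
    comprehension_after: float

    def improved(self) -> bool:
        # record: only a measured improvement counts as evidence
        return self.comprehension_after > self.comprehension_before

change = CommunicationChange(
    communication="renewal notice",
    issue_found="customers missing the fee increase in dense text",
    change_made="fee change moved to a layered summary upfront",
    comprehension_before=0.55,
    comprehension_after=0.82,
)
print(change.improved())  # prints True: documented, tested, recorded
```

A firm that keeps records like this can answer the FCA's question directly: what did you find, what did you change, and did it work.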
For financial services firms using i agree, the audit trail this generates feeds directly into that governance cycle. Engagement data, comprehension check results, question logs, and confirmation records are all timestamped and exportable. That gives compliance teams meaningful MI — the kind the FCA is asking for — rather than proxy metrics that obscure what is actually happening at the point of client interaction.
What Good Looks Like in Practice
Pulling the review together, the FCA's picture of what good looks like under the consumer understanding outcome has five consistent features.
First, information is layered and timed. The most important points are presented upfront, clearly, before the customer is asked to make any decision. Detail is available for those who want it but is not required to access the key information. Critical terms are not buried in long documents.
Second, multiple formats are used. Text alone is not sufficient for a significant proportion of customers. Video, audio, interactive tools, and visual summaries are used to present key information in ways that work for people with different literacy levels, communication preferences, and accessibility needs.
Third, comprehension is tested, not assumed. Changes to communications are tested with real customers, including those with characteristics of vulnerability. The results are measured against defined comprehension targets, not just compliance checklists. What changed and why is documented.
Fourth, vulnerability is identified proactively. Processes exist to identify potential vulnerability at key decision points. Communication is adapted based on individual need. This is embedded in governance, not handled ad hoc by frontline staff.
Fifth, there is a clear audit trail. Every significant communication, every comprehension check, every customer question and firm response is logged and timestamped. This creates evidence of understanding — not just delivery — that can be produced if the firm is ever audited, investigated, or challenged by a customer.
This is the model i agree is built around. The difference between informed consent and a signature is exactly the difference between what the FCA is asking for and what most firms are currently providing.
i agree doesn't just capture that a customer clicked or signed. It captures what they were shown, what they watched, what they confirmed, and what questions they asked — before they agreed to anything. That is evidence of understanding. That is what the FCA's consumer understanding outcome actually requires.
Conclusion
The FCA's consumer understanding review is not a warning shot. It is a detailed description of what compliance with the consumer understanding outcome looks like, and it is clear that many firms are not there yet. The gap between measuring activity and measuring comprehension is real, it is significant, and the FCA has now made explicit that it expects firms to close it.
The good news is that the gap is closable. The technology exists to layer communications, deliver information in multiple formats, test comprehension with real customers, adapt for vulnerable individuals, and produce a timestamped audit trail of the entire process. None of this requires a fundamental rethink of how financial services firms operate. It requires a rethink of how they communicate — and how they evidence that communication.
Consumer Duty isn't a documentation problem. It never was. It is an understanding problem. And the firms that treat it as such — that build their client communications around genuine comprehension rather than defensible disclosure — are the ones that will be able to demonstrate compliance when the FCA looks more closely. Which, based on this review, it will.
References
Internal links
- FCA Consumer Duty and SRA Compliance — how i agree helps regulated firms prove informed consent and meet consumer understanding requirements
- Informed Consent for Financial Services — how i agree is used across financial services products to evidence FCA Consumer Duty compliance
- How i agree Works — the step-by-step consent journey that creates a full audit trail of client understanding
- Contract Transparency and Audit Trails — what the i agree audit trail captures and why it satisfies regulatory evidence requirements
- Consumer Understanding and Complaints — the relationship between genuine client comprehension and complaint volumes
- What is Informed Consent? — why the difference between a signature and informed consent matters under Consumer Duty
- Voice and Video Consent in the UK — how voice and video confirmation creates stronger evidence of understanding than a digital signature
- Behavioural Science and Contract Comprehension — the cognitive science behind why layered, multimodal communication improves comprehension and retention
- Reducing Complaints and Disputes — how investing in client understanding at the front end reduces formal complaints downstream
- 6 Behavioural Science Insights to Improve Business Communication — the science behind format, delivery, and why people retain information better through multimodal communication
- Are Signatures Legally Binding? Why a Signed Document Isn't Proof — why a signature cannot be treated as evidence of understanding
External links
- FCA Consumer Understanding: Good Practice and Areas for Improvement (March 2026) — the full FCA review of how regulated firms are delivering the consumer understanding outcome under Consumer Duty
- FCA Finalised Guidance FG22/5 — the FCA's full Consumer Duty guidance including the consumer understanding outcome requirements in PRIN 2A.5 and Chapter 8
- FCA Financial Lives Survey 2024 — source of the data cited in the review: 6.3 million adults with limited product understanding, 10.3 million with low financial numeracy