Transparency is often treated as a legal process: publish a policy, link to it somewhere, and assume the job is done.
But UK and EU data protection expectations are more nuanced and demanding.
EU transparency guidance emphasises that the quality, accessibility, and comprehensibility of information are as important as the information itself. In the UK, the ICO goes further. It states that providing the information listed in Articles 13 and 14 “will not always be enough” to satisfy the broader transparency principle. This suggests that transparency is not limited to providing information in one format, at one time, and in one place.
Meta’s AI glasses capture audio, video, and images in private spaces. By their very nature, they operate at the high-risk end of the spectrum of possible harms: loss of confidentiality and reputational harm. On that basis, it becomes harder to justify a disclosure model that requires a user to follow a QR code to a privacy policy, click through onward links into multiple legal pages, and then locate a single sentence about human review. The Terms of Use for Meta’s AIs state that “in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human).” It is hard to believe that a consumer would first find that term within the policy and then interpret it to mean that a team of agents in Kenya can watch the recorded videos and listen to all audio recordings.
The case against Meta demonstrates that not all terms are of equal importance and should not all be treated in the same way. Perhaps some should be subject to specific disclosure and different processes, given the possible ramifications.
This is envisaged to some degree by the ICO’s consumer IoT guidance, which recommends layered approaches, dashboards, and just-in-time notices: methods designed so that people are likely to notice and use the information. This is especially relevant where users can enable additional features over time; the ICO even gives examples where “just-in-time” transparency should appear when users switch on a new capability such as facial recognition.
This guidance relates to IoT, but the concept of a risk-based view of transparency could be applied to any engagement with consumers. The higher the possible risk, the greater the obligation to ensure that the consumer is aware of and has accepted it. This risk-based assessment would take into account the context and severity of the possible harms, as well as the likely customer profile.
So the question becomes less about whether a sentence exists in a policy, and more about whether the disclosure is proportionate to risk:
If a device can capture and process life in private spaces, should key terms be presented prominently, in context, at the point of enablement, rather than discoverable only through multi-hop legal links?
Should this approach be applied more broadly across policies and agreements where the level of risk is considered a factor in the decision as to how to present information?
Meta’s spokesperson offered a brief response: "When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy."
In light of what we now know about how the media and data are being processed, does Meta’s approach seem reasonable? That’s not just a regulatory question. It’s a moral one about informed choice.