
Meta Platforms Inc., the parent company of social media, messaging, and virtual reality tech and apps—including Facebook, Instagram, WhatsApp, Messenger, and Threads—is facing a class action lawsuit. The $1.64 trillion company’s massive portfolio also includes Reality Labs, which produces Meta Quest headsets and Ray-Ban Meta AI glasses.
But maybe it’s gotten a little too big for its own good.
In March 2026, a class action lawsuit (Bartone v. Meta Platforms Inc., N.D. Cal., No. 3:26-cv-01897) was filed in California against Meta and eyewear manufacturer Luxottica of America Inc. A Swedish newspaper had reported that workers in Kenya, tasked with reviewing and labeling footage to train Meta AI models, were alarmed by the content they received. The subcontractors reported viewing sexual activity, bathroom visits and nudity, financial information, credit card numbers, identifiable faces, and other deeply personal content as part of their jobs labeling objects in videos captured by the smart glasses.
The lawsuit centers on Meta's advertising: the company marketed its smart glasses as protecting consumers' privacy, but plaintiffs accuse it of being misleading. The complaint says consumers "did not, and could not reasonably, understand that their bedrooms, bathrooms, families, bodies, and more would be exposed to strangers around the world." Further, no reasonable consumer would realize that, even though the company advertised the glasses as "designed for privacy," they could still allow workers on other continents to view sensitive personal information.
The class action lawsuit addresses plaintiffs’ concerns about who sees the videos, what the videos capture, and how the footage is used. The claims in the lawsuit include fraud, misrepresentation, breach of contract, breach of warranty, and unjust enrichment. The plaintiffs are seeking damages and an injunction requiring Meta to change its business practices and to disclose the misrepresentations to uninformed consumers.
In other words, the class action aims to hold Meta responsible for false advertising, and for failing to disclose the actual surveillance and AI data collection pipeline. Plaintiffs allege they relied on marketing claims about features that protect a user’s privacy while wearing Meta smart glasses and would not have purchased the glasses if they had known the footage was being viewed by subcontractors.
The company acknowledges that humans review video content to improve the viewer experience, though it says no identifying information is reviewed. It also maintains that if the user doesn't choose to share their content with Meta, the content stays on the user's device.
In other words, Meta claims to keep users’ recordings private. However, features like Live AI take video and send it to contractors who train AI models. The privacy policy doesn’t specifically mention human contractors as reviewers, but it does say the footage could be used for training purposes.
The lawsuit alleges: “The undisclosed human review pipeline renders the Meta AI Glasses’ privacy features materially misleading, transforms the product from a personal device into a surveillance conduit, and exposes consumers to unreasonable risks of dignitary harm, emotional distress, stalking, extortion, identity theft, and reputational injury.” In other words, the lawsuit claims that Meta's promised anonymization safeguards are either unreliable or nonexistent.
The Bartone lawsuit and the future of AI smart glasses
This lawsuit is in its earliest stages, so we don't know what the outcome will be or how it might shape the future of AI glasses or other AI technologies.
However, it’s just one part of growing legal and regulatory attention around smart glasses, particularly Ray-Ban Meta glasses. The privacy concerns are real, and people are focused on issues like ambient facial recognition, always-on recording capabilities, and the blurring of public and private surveillance.
- The EU already has an AI Act that classifies real-time biometric identification in public spaces as high-risk. U.S. states like Illinois (and others) already provide frameworks that plaintiffs could use to challenge unauthorized data collection from wearable AI devices.
- Any significant class action against a major manufacturer like Meta could create a chilling effect on how aggressively other companies deploy facial recognition or ambient AI features. This could potentially push the industry toward opt-in models, visible recording indicators, or on-device-only processing.
- Privacy-focused class actions have historically been catalysts that push courts and legislatures to clarify rules faster than they would otherwise.
There’s a lot we don’t know. But we do know that AI is a rapidly developing technology that’s permeating our lives daily, even when we don’t realize it. You might opt not to use standard AI apps in your phone or on your computer, but you’re still affected by the technology.
We’re going to keep an eye on how this lawsuit develops—and others that will undoubtedly emerge—so you’re always abreast of the legal implications and quandaries around everyday AI.
Meanwhile, Meta Platforms is currently facing 28 other lawsuits from plaintiffs who claim the company was aware its platforms harm users.

