Meta is facing a lawsuit in the U.S. after reports revealed that overseas workers were reviewing private footage captured by its AI‑powered smart glasses. A Swedish investigation found that contractors in Kenya viewed recordings containing nudity and other intimate moments, raising concerns about how Meta handles sensitive user data.
The lawsuit, filed by two U.S. consumers, argues that Meta misled buyers by marketing the glasses as “designed for privacy” and “controlled by you”. Users believed their recordings were secure, but the investigation revealed that footage from the more than 7 million glasses sold in 2025 was fed into a human-review system with no way to opt out.
Meta maintains that content stays on the device unless it is shared with Meta AI. However, sources reported that the company’s face‑blurring tools often failed, leaving contractors able to see private moments clearly.
As a result, the U.K.’s Information Commissioner’s Office has opened an inquiry into how Meta handles footage from the glasses and whether it informs users in advance that humans may review their recordings.
The case highlights a broader privacy problem in the world of AI, where people often do not understand how their data is captured or used. It also adds pressure on Meta to strengthen its privacy practices.
