### Meta’s History of Privacy Violations Continues
Meta announced this week that it is releasing a new artificial intelligence model that can identify people in photos and videos, even if those images have been altered. This is despite the company’s previous pledge to end its use of facial recognition technology.
In November 2021, Meta announced that it would shut down its tool that scanned the face of every person in photos posted on the platform. At the time, Meta also announced that it would delete more than a billion face templates.
Two years prior, in July 2019, Facebook settled a sweeping privacy investigation with the Federal Trade Commission for $5 billion. This included allegations that Facebook’s face recognition settings were confusing and deceptive. At the time, the company agreed to obtain consent before running face recognition on users in the future.
In March 2021, the company agreed to a $650 million class action settlement brought by Illinois consumers under the state’s biometric privacy law, the Biometric Information Privacy Act (BIPA).
And most recently, in July 2024, Meta agreed to pay $1.4 billion to settle claims that its defunct face recognition system violated Texas law.
### Privacy Advocates Will Continue to Focus Our Resources on Meta
Meta’s conclusion that it can avoid scrutiny by releasing a privacy invasive product during a time of political crisis is craven and morally bankrupt. It is also dead wrong.
Now more than ever, people have seen the real-world risk of invasive technology. The public has recoiled at masked immigration agents roving cities with phones equipped with a face recognition app called