Meta’s Expansion of Anti-Fraud Facial Recognition Tool in UK, a Security Measure or Privacy Risk?

Meta has once again stepped into the fraught realm of facial recognition, a technology that has rarely been free of controversy for the company. After years of bruising regulatory battles and billion-dollar settlements, the tech giant is taking an AI-powered route to bring facial recognition back into its suite of tools. This time the stated goal is reducing online scams and account takeovers, but is it really about user protection, or a strategic move to usher facial recognition back into public view under a different, more attractive guise? With Meta bringing the anti-fraud tool to the UK, questions of privacy, security, and corporate responsibility are once again in the spotlight.
Meta launched two new AI-powered features in October, designed to combat celebrity impersonation scams and to help users recover hacked Facebook and Instagram accounts. The initial trial excluded the UK, but the company has now expanded the test there after engaging with regulators for some time and receiving approval to proceed. Meta is also extending the "celeb bait" protection, which is meant to stop scammers from trading on the names and images of public figures, to a larger audience in countries where it was already available. I guess it's all fun and games until Meta's facial recognition mistakes you for a celebrity and starts flagging your selfies.
Regulatory Hurdles and EU’s Future:
Meta’s choice to extend these technologies to the United Kingdom comes at a time when UK legislation is evolving into a more welcoming environment for AI-driven innovation. The company has not yet decided to unveil the facial recognition features in the EU, another key jurisdiction with a rigid focus on data protection. Given the strict approach the EU has taken to biometric data under the General Data Protection Regulation (GDPR), any further expansion of the test there would face an additional layer of scrutiny.
Meta said, “In the coming weeks, public figures in the UK will start seeing in-app notifications letting them know they can now opt in to receive the celeb-bait protection with facial recognition technology.” Participation in this feature, as well as the new “video selfie verification” option available to all users, will be entirely optional.
Meta’s AI Strategy and History with Facial Recognition:
Meta maintains that the use of these facial recognition tools is strictly to combat fraud and secure user accounts, yet it has a long and mostly disreputable history of using user data in its AI models. Meta says its new facial recognition tool is for security because, obviously, that’s the first thing we think of when we hear ‘Meta’ and ‘privacy’ in the same sentence. The company’s track record has created real trust issues: it promises to delete facial data immediately after use, just like it promised to protect user privacy before, right? First they took our data, now they want our faces. What’s next, a Meta DNA test?
In October 2024, when these tools were launched, the company assured users that any facial data used for fraud detection would be deleted immediately after a one-time comparison, with no possibility for its use in other AI training. Monika Bickert, Meta’s VP of Content Policy, wrote in a post,
“We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don’t use it for any other purpose.”
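Meta has not published implementation details, so the following is purely a conceptual sketch of what a "one-time comparison, then delete" flow could look like: a face image is reduced to an embedding vector, compared once against a reference (for example, a protected public figure's embedding), and the probe data is discarded regardless of the outcome. The function name, threshold, and stand-in vectors are all illustrative assumptions, not Meta's actual system.

```python
import numpy as np

def one_time_face_check(probe_embedding, reference_embedding, threshold=0.8):
    """Compare a probe face embedding against a reference exactly once,
    then discard the probe data whether or not it matched.
    (Hypothetical sketch; not Meta's implementation.)"""
    # Cosine similarity between the two embedding vectors
    sim = float(
        np.dot(probe_embedding, reference_embedding)
        / (np.linalg.norm(probe_embedding) * np.linalg.norm(reference_embedding))
    )
    is_match = sim >= threshold
    # "Immediate deletion": drop the local reference to the probe data
    # so nothing biometric is retained past this single comparison
    del probe_embedding
    return is_match

# Toy example with stand-in 4-dimensional embeddings
ref = np.array([0.1, 0.9, 0.3, 0.2])
probe_same = np.array([0.12, 0.88, 0.31, 0.19])  # nearly identical direction
probe_diff = np.array([0.9, 0.1, -0.5, 0.4])     # unrelated direction

print(one_time_face_check(probe_same, ref))  # True
print(one_time_face_check(probe_diff, ref))  # False
```

The privacy-relevant design point is that the match decision (a single boolean) is all that survives the comparison; whether Meta's production pipeline actually enforces this at the storage layer is exactly what critics say cannot be verified from the outside.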
The deployment comes as Meta aggressively embeds AI across its operations. The company is building its own large language models, is heavily invested in improving products through AI, and has reportedly been working on a standalone AI app. In parallel, Meta has stepped up its advocacy around AI regulation and cultivated an image of responsibility.
Addressing Criticism of the Past:
Given its troubled track record, Meta has every incentive to introduce facial recognition as a security measure, that is, as a step toward repairing the company’s image. For years, the company has been criticized for making it easy for scammers to run fraudulent ads on its advertising platform, many of them misappropriating images of celebrities to promote dubious crypto investments and other scams. Framing these new tools as solutions to such problems may soften public perception of facial recognition technology.
Facial recognition is a very sensitive area for the company. Last year, Meta agreed to pay an enormous $1.4 billion to settle a lawsuit in Texas over allegations of unlawful biometric data collection. Before that, Facebook had shut down its decade-old photo-tagging facial recognition system in 2021 under strong legal and regulatory pressure. While Meta discontinued that tool, it held on to the underlying DeepFace model, which has now resurfaced in its latest offerings.
Meta’s Facial Recognition, a Thin Line between Security and Surveillance:
Meta’s facial recognition highlights the thin line between technological innovation and invasion of privacy. While reducing fraud and improving account security sounds good, the larger question of biometric data collection remains. With its not-so-glamorous past of biometric data handling and billion-dollar settlements to match, Meta has the image of a tech giant that has always tested the limits of its users’ trust. Deleting facial profiles right after collecting them sounds good, but who are we kidding? Coming from Meta, that promise inspires little faith; if history teaches us anything, Meta’s ambitions almost always go well beyond its upfront promises.
Facial recognition might serve a purpose in fraud detection, but it can indisputably also serve mass surveillance, with potential for abuse wherever regulatory oversight is weak. The balance between security and privacy is fragile, and history shows that once a data collection method proves effective, its use rarely stays confined to the original purpose. Companies in possession of personal data have repeatedly found ways to misuse it or to expand its use beyond what users consented to.
With governments only now engaging in this area and regulatory bodies facing questions of their own, users must stay alert and demand accountability and transparency before accepting yet another layer of AI-based control. If accepted as the new norm, anti-fraud facial recognition might be the next step in Meta’s rehabilitation, or just another entry in its very long history of AI-related controversy. As AI technology advances further into our lives, the case for stronger safeguards and mandatory rules has never been more urgent.