News Highlights
- Meta Platforms has introduced new safeguards across Facebook, WhatsApp, and Messenger, including alerts for suspicious friend requests, warnings about fraudulent device-linking attempts, and expanded AI-powered scam detection for suspicious messages.
- Meta disclosed that it removed over 159 million scam ads and disabled 10.9 million accounts tied to scam operations last year, while also collaborating with agencies to disrupt scam networks and launch awareness campaigns in Southeast Asia.
Meta Platforms has unveiled a new round of anti-scam safeguards across its major platforms—WhatsApp, Messenger, and Facebook—to strengthen fraud detection and expand collaboration with law enforcement agencies in Southeast Asia and other regions.
At the heart of the rollout is a new Facebook feature currently undergoing testing that warns users about potentially suspicious friend or follow requests before they respond.
If a request originates from an account with no mutual connections, a different country location, or a recently created profile, Facebook will display a cautionary alert.
The same warning will also appear when users attempt to send requests to accounts that match these risk signals.
According to Meta, the feature is designed to interrupt a common social-engineering pathway in which scammers build fake profiles, gradually accumulate mutual friends to appear credible, and later send fraudulent messages through Messenger.
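The risk signals described above can be illustrated with a minimal sketch. All names, fields, and thresholds here are hypothetical illustrations of the heuristics the article lists, not Meta's actual implementation:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical account summary; field names are illustrative, not Meta's schema.
@dataclass
class Account:
    country: str
    created: date
    mutual_friends: int

def request_risk_signals(sender: Account, recipient: Account, today: date) -> list[str]:
    """Collect the heuristic signals the article describes for a friend/follow request."""
    signals = []
    if sender.mutual_friends == 0:
        signals.append("no mutual connections")
    if sender.country != recipient.country:
        signals.append("different country location")
    if (today - sender.created).days < 30:  # "recently created" cutoff is a guess
        signals.append("recently created profile")
    return signals

# Any non-empty signal list would correspond to showing the cautionary alert,
# in both directions: when receiving such a request and when sending one.
```

Note that the same check is applied symmetrically, matching the article's point that the warning also appears when users send requests to accounts matching these signals.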
Meanwhile, WhatsApp is introducing additional protection targeting a growing threat known as device-linking fraud.
In this scheme, scammers persuade victims—often posing as customer support or technical service agents—to scan malicious QR codes that link the attacker’s device to the victim’s WhatsApp account.
To counter this tactic, WhatsApp will now issue a warning whenever the system detects a suspicious device-linking request and will show users where the request originated.
For Messenger users, Meta said it is expanding its scam-detection system to additional countries this month.
The technology operates in two phases. First, on-device analysis automatically identifies messages from unfamiliar contacts that match patterns commonly associated with scams, such as fraudulent job offers, fake investment opportunities, or work-from-home schemes.
If a message is flagged, the user receives a warning and can choose to submit the conversation to Meta’s AI systems for a second, cloud-based review.
That step is optional and, as Meta notes, requires temporarily lifting the conversation's end-to-end encryption so the flagged messages can be reviewed. Users who prefer not to submit the conversation can still rely on the initial on-device warning.
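The two-phase flow can be sketched as follows. The patterns, function names, and decision strings are illustrative assumptions; Meta's on-device classifier and cloud review pipeline are not public:

```python
import re

# Illustrative keyword patterns for the scam categories the article names
# (fake jobs, fake investments, work-from-home schemes); not Meta's model.
SCAM_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"work[- ]from[- ]home", r"guaranteed returns?", r"job offer")
]

def on_device_flag(message: str, sender_known: bool) -> bool:
    """Phase 1: local pattern check, applied only to unfamiliar contacts."""
    if sender_known:
        return False
    return any(p.search(message) for p in SCAM_PATTERNS)

def handle_message(message: str, sender_known: bool, user_opts_in: bool) -> str:
    """Phase 2 (cloud-based review) runs only if the user explicitly opts in,
    since it means sending the plaintext outside end-to-end encryption."""
    if not on_device_flag(message, sender_known):
        return "deliver"
    if user_opts_in:
        return "warn + submit for cloud review"
    return "warn only (on-device)"
```

The key design point the article highlights is that phase 1 never leaves the device; only the user's explicit choice triggers the encryption-lifting second phase.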
The scam-detection feature can be accessed and switched on or off through Settings > Privacy & Safety Settings > Scam Detection.
Beyond new user-side protections, Meta is also tightening its advertising controls. The company said it aims for verified advertisers to account for 90 per cent of its advertising revenue by the end of 2026, up from about 70 per cent currently.
The remaining 10 per cent would largely come from low-risk advertisers, such as small local businesses, a category the company cited as exempt from the stricter verification required of high-risk advertisers.
The announcement was accompanied by updated enforcement statistics. Meta said it removed more than 159 million scam ads last year and disabled 10.9 million Facebook and Instagram accounts linked to organised scam operations.
The company also disclosed the results of a recent joint operation with the Royal Thai Police, which led to 21 arrests and the disabling of more than 150,000 accounts associated with scam-centre networks.
According to Axios, the effort formed part of Meta’s second “Joint Disruption Week.” During the first operation in December, the company removed 59,000 accounts and pages.
The latest operation expanded the coalition to include authorities from the UK, Canada, South Korea, Japan, Singapore, the Philippines, Australia, New Zealand, and Indonesia.
In a separate initiative, Meta confirmed a partnership with the United States Department of State to launch the “Trapped in Scam Crime” awareness campaign in Vietnam, Thailand, Laos, Cambodia and other countries.
The campaign focuses on the supply side of the scam economy: trafficked workers who are coerced into operating scam centres. Many are reportedly lured by fake job offers and later held against their will in compounds largely located in Myanmar, Cambodia and Laos.
The new measures come amid mounting scrutiny over scam advertising on Meta’s platforms.
A Reuters investigation in late 2025 reported that internal company documents suggested Meta generated an estimated $7 billion annually from ads linked to scams and prohibited goods, while users were shown around 15 billion higher-risk ads per day on average.
Meta has disputed aspects of the Reuters report, but the latest announcement represents the company’s newest attempt to demonstrate stronger enforcement and fraud prevention across its services.
