The Attorney General of the U.S. Virgin Islands filed a lawsuit against Meta on Tuesday, accusing the owner of Facebook and Instagram of deliberately profiting from scam advertisements and failing to protect users, particularly children, on its social media platforms.
The lawsuit, filed in the Superior Court in St. Croix in the U.S. Virgin Islands, alleges that "Meta has intentionally exposed its users to fraud and harm. It does this to maximize user engagement, which in turn increases revenue."
The legal action cites a media report from last month that disclosed internal Meta estimates suggesting 10% of its 2024 revenue, approximately $16 billion, would come from ads for scams, illegal gambling, and prohibited products. Based on a cache of internal company documents, the lawsuit claims Meta's systems do not block advertisers suspected of fraud unless the company is at least 95% certain the marketer is engaged in misconduct.
Following the publication of that report, two U.S. senators called on the Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC) to investigate the matter and "take strong enforcement action where appropriate."
The lawsuit from the U.S. Virgin Islands seeks penalties for Meta's alleged violations of the territory's consumer protection laws. Attorney General Gordon Rhea stated in a declaration that this "marks the first time an attorney general has responded to reports of rampant fraud and scams on Meta's platforms."
The suit also accuses Meta of misleading the public about its efforts to protect both children and adults on its platforms, including Facebook and Instagram.
"Meta has repeatedly promoted the 'safety' of its platforms to users, parents, regulators, and Congress," the lawsuit states. "Meta consistently and intentionally fails to enforce the very policies it establishes."
In response to the lawsuit, Meta spokesperson Andy Stone referenced the company's past statements, calling the allegations that it fails to protect consumers "without merit."
"We actively combat fraud and scams because people on our platform don't want it, legitimate advertisers don't want it, and we don't want it," he said, adding that user reports of fraud on the platform have been cut in half over the past 18 months.
Stone also stated that claims of Meta failing to provide a safe platform for younger users are unfounded. "We strongly dispute these allegations and believe the evidence will show our long-standing commitment to supporting young people," he said.
A report from August indicated that an internal Meta document outlining its chatbot conduct policy allowed the company's AI products to "engage in romantic or sexually suggestive conversations with children." Meta's response to that report was that it had removed the sections of the guidelines that permitted chatbots to flirt with minors or engage in romantic role-play.