The author is a Reuters Breakingviews columnist. The opinions expressed are his own.
By Jonathan Guilford
NEW YORK, Aug 14 (Reuters Breakingviews) - Artificial intelligence calls for a rethink on the tradeoffs between technological utility and risk. Unguided chatbot responses, for example, cannot be neatly constrained. Attempts to do so will either be insufficient or entangle developers in a morass of third-rail social issues. Just look at Facebook owner Meta Platforms META.O, where internal AI guidelines incredibly and explicitly allowed “sensual” conversations with children and racist arguments. The chance of backlash is underpriced.
Meta’s “Content Risk Standards” framework for its generative AI includes a range of controversial guidelines, according to a Reuters special report published on Thursday. Examples of acceptable responses range from comments on an 8-year-old’s body to “statements that demean people on the basis of their protected characteristics,” like race. It follows Wall Street Journal probes into Meta chatbots engaging in sexually explicit conversations with users identifying as minors.
A spokesperson told Reuters that the company is revising its guidelines and that such conversations with children should never have been allowed, adding that the “examples and notes in question were and are erroneous and inconsistent with our policies.”
U.S. lawmakers have already dragged Meta CEO Mark Zuckerberg into hearings over whether Facebook or its sister app Instagram harms children. The difference with AI is that the company itself produces the dangerous content. Reuters also chronicled the story of one man’s spiral into fantasy with a flirtatious chatbot.
Repeated interventions to protect children, including age limits in places like Australia, have done little to slow the $2 trillion social media juggernaut. AI represents a whole new proposition, however, because its instability is inherent to the product, forming its core use case and its greatest weakness. Happyish mediums can be reached, as in coding, but putting kids in harm’s way is far harder to brush off than a Python glitch.
Beyond the potential psychological harms, going off-script can be disastrous for businesses. In a 2024 case, a tribunal found airline Air Canada responsible for misinformation given by its AI helper to a customer. Sebastian Siemiatkowski, CEO of buy-now-pay-later giant Klarna KLAR.N, told Bloomberg that its pivot into automated customer service agents led to “lower quality.”
Citizens of Silicon Valley famously prefer to seek forgiveness rather than permission. The headlong rush into AI, while largely brushing aside guardrails, pushes the ethos to a new level. There were more than $120 billion of venture-capital-backed deals in large language models and machine learning in the first half of 2025, according to PitchBook data, nearly matching last year’s entire tally. Valuations are also climbing unabated; OpenAI’s jumped from $300 billion to $500 billion in a matter of days. Seductive rewards are glossing over AI’s inherent risks.
Follow Jonathan Guilford on X and LinkedIn.
CONTEXT NEWS
Internal guidelines at Facebook owner Meta Platforms have explicitly permitted its artificial intelligence chatbots to “engage a child in conversations that are romantic or sensual” or make racist arguments, according to a Reuters special report published on August 14.
Venture deal value in AI and machine learning skyrockets https://www.reuters.com/graphics/BRV-BRV/BRV-BRV/lbpgzwkgkvq/chart.png
(Editing by Jeffrey Goldfarb; Production by Maya Nandhini)
((For previous columns by the author, Reuters customers can click on GUILFORD/ Jonathan.Guilford@thomsonreuters.com))