A man identified as Daniel Moreno-Garma has been charged with throwing an incendiary device at the residence of OpenAI CEO Sam Altman and with attempting to smash his way into the company's San Francisco headquarters with a chair while allegedly threatening to harm everyone inside the building.
Last week, a 20-year-old man was implicated in an attack on the home of OpenAI CEO Sam Altman, with law enforcement suggesting the incident may be part of a broader plan to target executives within the artificial intelligence industry. Major AI safety organizations were quick to issue statements distancing themselves from the event. However, certain online communities expressed approval of the assault.
One user on the platform X compared the alleged attacker to Luigi Mangione, who is charged with the politically motivated killing of UnitedHealth Group CEO Brian Thompson, referring to both individuals as "heroes."
Multiple users on X described the attack on Altman's home as "understandable."
A post in an anti-AI community on Reddit stated, "If the unchecked rush towards AI and the complete commodification of human values is allowed to continue, incidents like this will only become more common."
As artificial intelligence technology advances rapidly, concerns are intensifying over its potential to displace human workers, disrupt economies, harm the environment, and even pose an existential threat to humanity. Even executives within the tech industry have issued stark warnings.
However, recent attacks indicate that extreme fringe elements within the movement opposing AI are transitioning from anonymous online rhetoric to dangerous real-world actions, sparking discussions in Silicon Valley about how to respond.
Just three days prior to the attack on Altman's home, it was reported that the residence of Indianapolis City Councilor Ron Gibson was struck by gunfire late at night, with a note reading "No Data Centers" left at the scene. This followed the recent approval of a data center project within his district.
In recent years, there have also been reports of vandalism and attacks targeting autonomous taxis and delivery robots, which some view as precursors to a high-tech future not universally welcomed.
Doug McAdam, a Stanford University sociology professor who studies politics and social movements, commented, "Frankly, AI is such a large and imminent problem that people can't get their minds around it, leading to a general state of panic." He added that it is not uncommon for such movements to "spawn extreme offshoots."
Following the attack, OpenAI released a statement saying, "For society to harness AI correctly, we must advance this work through democratic processes, and robust debate of ideas is a vital component of a healthy democracy. However, violence against anyone has no place in our democracy, regardless of which AI lab they work for or which side of the debate they are on. We are grateful for the swift response from law enforcement and relieved that no one was injured."
"Emulate Luigi, Target Tech CEOs"
Daniel Moreno-Garma, who is currently being held without bail, was active in online spaces discussing AI risks prior to the alleged attack.
In an online exchange with a host of the AI-focused podcast "The Last Invention," Moreno-Garma spoke about "emulating Luigi against tech CEOs," an apparent reference to the suspect charged with killing the UnitedHealth Group CEO.
The organization "Pause AI," which advocates for a halt in advanced AI development to allow safety measures to catch up, confirmed that Moreno-Garma had posted on its Discord server in the weeks leading up to the attack. The group distanced itself from the incident, stating that Moreno-Garma was not a formal member and that the Discord server is open to the public.
"Pause AI" CEO Maximilien Furness told CNN, "Our purpose is to provide people with a peaceful, democratic channel to express their concerns about AI, so this attack goes against everything we stand for."
After the attack, OpenAI CEO Sam Altman posted a photo of his family on his blog, expressing hope that doing so might help deter further violence.
Another organization calling for a halt to advanced AI development, "Stop AI," stated on Tuesday that Moreno-Garma had asked on its online forum earlier this year, "Will talking about violence get me banned?" The group said that after being told it would, he stopped posting.
"Stop AI" posted on X, "Our organization has always adhered to non-violent activism, and the current leadership is firmly committed to non-violence in both action and speech." The group added that one of its co-founders was expelled last year for making "inflammatory statements involving violence."
According to a criminal complaint filed by the FBI, Moreno-Garma was carrying a document at the time of the attack that discussed "the purported risks AI poses to humanity," contained writings about killing Altman, and listed the "names and addresses of AI corporate board members, CEOs, and investors."
Moreno-Garma's defense attorney, San Francisco Public Defender Diamond Ward, reportedly argued in court this week that her client was experiencing a mental health crisis during the incident. Ward stated that her client should face, at most, charges related to "property crimes" and that the current charges are excessive. Moreno-Garma's parents issued a statement saying their son's mental health issues had emerged only recently, that he had never harmed anyone before, and that they are deeply concerned for his well-being.
A Movement Divided
Security concerns already existed within parts of the AI industry. OpenAI, for instance, has long encouraged employees to remove their work badges before leaving the office.
Furness expressed concern that more violent confrontations could occur in the future and that such attacks risk stigmatizing the complex and diverse, but largely peaceful, AI safety movement.
"Our response is to double down on what we have always done—peaceful, lawful advocacy," he said. "I believe movements like ours, which are entirely peaceful, must pay close attention to developments, as darker, more extreme forces could begin to emerge."
McAdam noted that historical precedent suggests that extreme actions can sometimes enhance the credibility of more moderate factions within a social movement.
He stated that AI companies "must think carefully about their response," adding that "even as this extreme fringe is condemned, the broader movement is gaining more attention and influence."
That debate has already begun within the industry.
OpenAI's Head of Global Policy, Chris Lehane, said in an interview with the San Francisco Standard on Tuesday that some criticism of AI is "not necessarily responsible." He stated, "When certain views and statements are put out into the world, they do have consequences." He added that the company must make it clear that AI "will be hugely beneficial for people, for families, for society as a whole."
However, his colleague, Jason Wolf, an OpenAI employee working on AI alignment, publicly expressed a differing view in a post on X on Thursday.
Wolf stated, "I believe our responsibility should be to earn trust by delivering on AI's value, being candid about risks and uncertainties, sharing research, measuring real-world impacts, and supporting public oversight and risk mitigation. Of course, I fully agree that the recent violence is abhorrent and unjustifiable, and may be spurred by a few bad actors, but lumping all AI critics together as 'doomers' and suggesting they shouldn't voice concerns is detrimental to public discourse."