Anthropic has been a rare voice within the artificial intelligence (AI) industry cautioning about the downsides of the technology it develops and supporting regulation — a stance that has recently drawn the ire of the Trump administration and its allies in Silicon Valley.
While the AI company has sought to underscore areas of alignment with the administration, White House officials supporting a more hands-off approach to AI have chafed at the company’s calls for caution.
“If you have a major member of the industry step out and say, ‘Not so much. It’s OK that we get regulated. We need to figure this out at some point,’ then it makes everyone in the industry look selfish,” said Kirsten Martin, dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University.
“The narrative that this is the best thing for the industry relies upon everyone in the industry being in line,” she added.
This tension became apparent earlier this month when Anthropic co-founder Jack Clark shared a recent speech on “technological optimism and appropriate fear.” He offered the analogy of a child in a dark room, frightened by mysterious shapes that, once the light is turned on, are revealed to be innocuous objects.
“Now, in the year of 2025, we are the child from that story and the room is our planet,” he said. “But when we turn the light on we find ourselves gazing upon true creatures, in the form of the powerful and somewhat unpredictable AI systems of today and those that are to come.”
“And there are many people who desperately want to believe that these creatures are nothing but a pile of clothes on a chair, or a bookshelf, or a lampshade,” Clark continued. “And they want to get us to turn the light off and go back to sleep.”
Clark’s remarks were quickly met with a sharp rebuke from White House AI and crypto czar David Sacks, who accused Anthropic of “running a sophisticated regulatory capture strategy based on fearmongering” and fueling a “state regulatory frenzy that is damaging the startup ecosystem.”
He was joined by allies like venture capitalist Marc Andreessen, who replied to the post on the social platform X with “Truth.” Sunny Madra, chief operating officer and president of the AI chip startup Groq, also suggested that “one company is causing chaos for the entire industry.”
Sriram Krishnan, a White House senior policy adviser for AI, criticized the response to Sacks’s post from the AI safety community, arguing the country should instead be focused on competing with China.
Sacks later doubled down on his frustrations with Anthropic, alleging that it has been the company’s “government affairs and media strategy to position itself consistently as a foe of the Trump administration.”
He pointed to previous comments from Anthropic CEO Dario Amodei, in which he reportedly criticized President Trump, as well as op-eds that Sacks described as “attacking” the president’s tax and spending bill, Middle East deals and chip export policies.
“It’s a free country and Anthropic is welcome to its views,” Sacks added. “Oppose us all you want. We’re the side that supports free speech and open debate.”
Amodei responded last week to what he called a “recent uptick in inaccurate claims about Anthropic’s policy stances,” arguing the AI firm and the administration are largely aligned on AI policy.
“I fully believe that Anthropic, the administration, and leaders across the political spectrum want the same thing: to ensure that powerful AI technology benefits the American people and that America advances and secures its lead in AI development,” he wrote in a blog post.
He cited a $200 million Department of Defense contract Anthropic received earlier this year, in addition to the company’s support for Trump’s AI action plan and other AI-related initiatives.
Amodei also acknowledged that the company “respectfully disagreed” with a provision in Trump’s tax cut and spending megabill that sought a 10-year moratorium on state AI legislation.
In a New York Times op-ed in June, he described the push as “understandable” but argued the moratorium was “too blunt” amid AI’s rapid development, emphasizing that there was “no clear plan” at the federal level. The provision was ultimately removed from the bill by a 99-1 vote in the Senate.
He cited similar concerns about the lack of movement on federal AI regulation to explain the company’s decision to endorse California Senate Bill 53, a state measure requiring AI firms to release safety information. The bill was signed into law by California Gov. Gavin Newsom (D) late last month.
“Anthropic is committed to constructive engagement on matters of public policy,” Amodei added. “When we agree, we say so. When we don’t, we propose an alternative for consideration. We do this because we are a public benefit corporation with a mission to ensure that AI benefits everyone, and because we want to maintain America’s lead in AI.”
The recent tiff with administration officials underscores Anthropic’s distinct approach to AI in the current environment. Amodei, Clark and several other former OpenAI employees founded the AI lab in 2021, with a focus on safety. This has remained central to the company and its policy views.
“Its reputation and its brand is about that mindfulness toward risk,” said Sarah Kreps, director of the Tech Policy Institute at Cornell University.
This has set Anthropic apart amid an increasing shift toward an accelerationist approach to AI, both inside and outside the industry, Kreps noted.
“The Anthropic approach has been fairly consistent,” she said. “In some ways, what has changed is the rest of the world, and [that] includes the U.S., which is this acceleration toward AI, and a change in the White House, where that message has also been toward acceleration rather than regulation.”
In a shift from its predecessor, the Trump administration has placed a heavy emphasis on eliminating regulations that it believes could stifle innovation and cause the U.S. to fall behind China in the AI race.
This has put the administration at odds with states, most notably California, that have sought to pass new AI rules that could end up setting the path for the rest of the country.
“I don’t think there’s a right or wrong in this. It’s just a degree of risk aversion and risk acceptance,” Kreps added. “If you’re in Europe, it’s a lot more risk-averse. If you’re in the U.S. two years ago, it’s more risk-averse. And now, it’s just a vision that embraces some greater degree of risk.”