{"id":1982,"date":"2026-03-02T15:46:23","date_gmt":"2026-03-02T05:46:23","guid":{"rendered":"https:\/\/rhinoeasy.com\/?p=1982"},"modified":"2026-03-02T15:46:23","modified_gmt":"2026-03-02T05:46:23","slug":"the-trap-anthropic-built-for-itself-techcrunch","status":"publish","type":"post","link":"https:\/\/rhinoeasy.com\/?p=1982","title":{"rendered":"The trap Anthropic built for itself | TechCrunch"},"content":{"rendered":"<p>On Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen. President Trump had posted on Truth Social directing every federal agency to \u201cimmediately cease all use of Anthropic technology,\u201d the San Francisco AI company co-founded in 2021 by CEO Dario Amodei. Defense Secretary Pete Hegseth soon after invoked a national security law to blacklist the company from doing business with the Pentagon, along with its partners, contractors, and suppliers. The reason? Amodei refused to allow Anthropic\u2019s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input.<\/p>\n<p>It was a jaw-dropping sequence of events. Anthropic stands to lose a contract worth up to $200 million and could be barred from working with other defense contractors. (Anthropic has since said it will challenge the Pentagon in court.)<\/p>\n<p>Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world\u2019s ability to govern them. The MIT physicist founded the Future of Life Institute in 2014 and in 2023 helped organize an open letter \u2014 ultimately signed by more than 33,000 people, including Elon Musk \u2014 calling for a pause in advanced AI development.<\/p>\n<p>His view of the Anthropic crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament. 
Tegmark\u2019s argument doesn\u2019t begin with the Pentagon but with a decision made years earlier \u2014 a choice, shared across the industry, to resist regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Anthropic this week even dropped the central tenet of its own safety pledge \u2014 its promise not to release increasingly powerful AI systems until the company was confident they wouldn\u2019t cause harm.<\/p>\n<p>Now, in the absence of rules, there\u2019s not a lot to protect these players, says Tegmark. Here\u2019s more from that interview, edited for length and clarity. You can hear the full conversation this coming week on TechCrunch\u2019s StrictlyVC Download podcast.<\/p>\n<p>When you saw this news just now about Anthropic, what was your first reaction?<\/p>\n<p>The road to hell is paved with good intentions. It\u2019s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow the prosperity in America and make America strong. And here we are now where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also not wanting to have killer robots that can autonomously \u2014 without any human input at all \u2014 decide who gets killed.<\/p>\n<p>Anthropic has staked its entire identity on being a safety-first AI company, and yet it was collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think that\u2019s at all contradictory?<\/p>\n<p>It is contradictory. If I can give a little cynical take on this \u2014 yes, Anthropic has been very good at marketing themselves as all about safety. But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all talked a lot about how they care about safety. None of them has come out supporting binding safety regulation the way we have in other industries. And all four of these companies have now broken their own promises.<\/p>\n<p>First we had Google \u2014 this big slogan, \u201cDon\u2019t be evil.\u201d Then they dropped that. Then they dropped another longer commitment that basically said they promised not to do harm with AI. They dropped that so they could sell AI for surveillance and weapons. OpenAI just dropped the word safety from their mission statement. xAI shut down their whole safety team. And now Anthropic, earlier in the week, dropped their most important safety commitment \u2014 the promise not to release powerful AI systems until they were sure they weren\u2019t going to cause harm.<\/p>\n<p>How did companies that made such prominent safety commitments end up in this position?<\/p>\n<p>All of these companies, especially OpenAI and Google DeepMind but to some extent also Anthropic, have persistently lobbied against regulation of AI, saying, \u201cJust trust us, we\u2019re going to regulate ourselves.\u201d And they\u2019ve successfully lobbied. 
So we right now have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won\u2019t let you sell any sandwiches until you fix it. But if you say, \u201cDon\u2019t worry, I\u2019m not going to sell sandwiches, I\u2019m going to sell AI girlfriends for 11-year-olds, and they\u2019ve been linked to suicides in the past, and then I\u2019m going to release something called superintelligence which might overthrow the U.S. government, but I have a good feeling about mine\u201d \u2014 the inspector has to say, \u201cFine, go ahead, just don\u2019t sell sandwiches.\u201d<\/p>\n<p>There\u2019s food safety regulation and no AI regulation.<\/p>\n<p>And this, I feel, all of these companies really share the blame for. Because if they had taken all these promises that they made back in the day for how they were going to be so safe and goody-goody, and gotten together, and then gone to the government and said, \u201cPlease take our voluntary commitments and turn them into U.S. law that binds even our most sloppy competitors\u201d \u2014 this would have happened. Instead, we\u2019re in a complete regulatory vacuum. And we know what happens when there\u2019s a complete corporate amnesty: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer. So it\u2019s sort of ironic that their own resistance to having laws saying what\u2019s okay and not okay to do with AI is now coming back and biting them.<\/p>\n<p>There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it. If the companies themselves had earlier come out and said, \u201cWe want this law,\u201d they wouldn\u2019t be in this pickle. 
They really shot themselves in the foot.<\/p>\n<p>The companies\u2019 counter-argument is always the race with China \u2014 if American companies don\u2019t do such and such, Beijing will. Does that argument hold?<\/p>\n<p>Let\u2019s analyze that. The most common talking point from the lobbyists for the AI companies \u2014 they\u2019re now better funded and more numerous than the lobbyists from the fossil fuel industry, the pharma industry and the military-industrial complex combined \u2014 is that whenever anyone proposes any kind of regulation, they say, \u201cBut China.\u201d So let\u2019s look at that. China is in the process of banning AI girlfriends outright. Not just age limits \u2014 they\u2019re looking at banning all anthropomorphic AI. Why? Not because they want to please America but because they feel this is screwing up Chinese youth and making China weak. Obviously, it\u2019s making American youth weak, too.<\/p>\n<p>And when people say we have to race to build superintelligence so we can win against China \u2014 when we don\u2019t actually know how to control superintelligence, so that the default outcome is that humanity loses control of Earth to alien machines \u2014 guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? No way. It\u2019s clearly really bad for the American government too if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat.<\/p>\n<p>That\u2019s compelling framing \u2014 superintelligence as a national security threat, not an asset. 
Do you see that view gaining traction in Washington?<\/p>\n<p>I think if people in the national security community listen to Dario Amodei describe his vision \u2014 he\u2019s given a famous speech where he says we\u2019ll soon have a country of geniuses in a data center \u2014 they might start thinking: \u201cWait, did Dario just use the word country? Maybe I should put that country of geniuses in a data center on the same threat list I\u2019m keeping tabs on, because that sounds threatening to the U.S. government.\u201d And I think fairly soon, enough people in the U.S. national security community are going to realize that uncontrollable superintelligence is a threat, not a tool. This is totally analogous to the Cold War. There was a race for dominance \u2014 economic and military \u2014 against the Soviet Union. We Americans won that one without ever engaging in the second race, which was to see who could put the most nuclear craters in the other superpower. People realized that was just suicide. No one wins. The same logic applies here.<\/p>\n<p>What does all of this mean for the pace of AI development more broadly? And how close do you think we are to the systems you\u2019re describing?<\/p>\n<p>Six years ago, almost every expert in AI I knew predicted we were decades away from having AI that could master language and knowledge at human level \u2014 maybe 2040, maybe 2050. They were all wrong, because we already have that now. We\u2019ve seen AI progress quite rapidly from high school level to college level to PhD level to university professor level in some areas. Last year, AI won the gold medal at the International Mathematical Olympiad, which is about as difficult as human tasks get. I wrote a paper together with Yoshua Bengio, Dan Hendrycks, and other top AI researchers just a few months ago giving a rigorous definition of AGI. According to this, GPT-4 was 27% of the way there. GPT-5 was 57% of the way there. 
So we\u2019re not there yet, but going from 27% to 57% that quickly suggests it might not be that long.<\/p>\n<p>When I lectured to my students yesterday at MIT, I told them that even if it takes four years, that means when they graduate, they might not be able to get any jobs anymore. It\u2019s certainly not too soon to start preparing for it.<\/p>\n<p>Anthropic is now blacklisted. I\u2019m curious to see what happens next \u2014 will the other AI giants stand with it and say, \u201cWe won\u2019t do this either?\u201d Or does someone like xAI raise their hand and say, \u201cAnthropic didn\u2019t want that contract, we\u2019ll take it\u201d? [Editor\u2019s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]<\/p>\n<p>Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I admire him for the courage of saying that. Google, as of when we started this interview, had said nothing. If they just stay quiet, I think that\u2019s incredibly embarrassing for them as a company, and a lot of their staff will feel the same. We haven\u2019t heard anything from xAI yet either. So it\u2019ll be interesting to see. Basically, there\u2019s this moment where everybody has to show their true colors.<\/p>\n<p>Is there a version of this where the outcome is actually good?<\/p>\n<p>Yes, and this is why I\u2019m actually optimistic in a strange way. There\u2019s such an obvious alternative here. If we just start treating AI companies like any other companies \u2014 drop the corporate amnesty \u2014 they would clearly have to do something like a clinical trial before they released something this powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the good stuff from AI, without the existential angst. That\u2019s not the path we\u2019re on right now. 
But it could be.<\/p>\n<p><strong>Source: RhinoEasy News<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>On Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen. President Trump<\/p>\n","protected":false},"author":1,"featured_media":1981,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[6],"tags":[],"class_list":["post-1982","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech"],"_links":{"self":[{"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/posts\/1982","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1982"}],"version-history":[{"count":0,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/posts\/1982\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/media\/1981"}],"wp:attachment":[{"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1982"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1982"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1982"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}