Amodei is off his rocker
Dario Amodei is the CEO of Anthropic and is considered one of the more safety-oriented AI executives. Anthropic’s strategy has allegedly been to develop “safe,” reliable AI and thereby encourage other firms to follow suit.
Most recently, Amodei has gone one step further toward becoming another Sam Altman. His blog post On DeepSeek and Export Controls argues for tighter export controls so that the US can be the first and only great power to develop artificial general intelligence (AGI). His argument goes like this:
- Americans have the best chance of deploying safe AGI.
- If we get there first, then we can take a “commanding and long-lasting lead on the global stage”.
- Therefore, America must speed up AI development massively and also slow down Chinese AI companies as much as possible.
My main problem with this line of argumentation: what would the Chinese think of it? If I were Xi Jinping, my reasoning would be:
- China has the best chance of deploying safe AGI.
- If we get there first, then we can take a “commanding and long-lasting lead on the global stage”.
- Therefore, China must speed up AI development massively and also slow down American AI companies as much as possible.
Sound familiar? Everyone thinks they’re the good guys. The difference between a national hero and enemy number one is simply who tells the story.
The result of both China and the US racing toward AGI is intense race dynamics. This is the worst possible outcome, because it renders any thought of safety essentially void. Why would you slow down model development when the other country is on your heels, or in the lead?
It’s as if China and the US are in a boxing match. The more punches they throw at each other, the more the crowd cheers and goes wild. So both parties think that to win you need to knock out the other person. But actually, the way to win is to disengage from the fight completely — because if you don’t, everyone in the stadium dies.
Sounds dark? Well, it’s a real possibility. We’re essentially in a game of chicken. The more advanced these models become, the more profit there is to be made from them, and the more incentive there is to steal model weights, commit corporate espionage, and avoid being left behind. But if a model is so advanced that it creates real danger, then we’re all screwed. I wonder whether AI-driven extinction is the solution to the Fermi paradox.
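The game-of-chicken dynamic above can be made concrete with a toy payoff matrix. The numbers below are illustrative assumptions of mine, not figures from this post: they just encode the logic that racing pays off if the other side holds back, while mutual racing is the worst joint outcome.

```python
# A toy payoff matrix for the AGI "game of chicken" described above.
# Payoff values are illustrative assumptions, chosen only to encode
# the incentive structure: racing beats cooperating if the other side
# cooperates, but mutual racing is catastrophic for both.

PAYOFFS = {
    # (US action, China action): (US payoff, China payoff)
    ("cooperate", "cooperate"): (3, 3),   # shared, safer progress
    ("race", "cooperate"):      (5, 1),   # the racer takes a decisive lead
    ("cooperate", "race"):      (1, 5),
    ("race", "race"):           (0, 0),   # mutual catastrophe risk
}

def best_response(options, opponent_action, me_is_us):
    """Return the action maximizing my payoff, given the opponent's move."""
    def my_payoff(action):
        key = (action, opponent_action) if me_is_us else (opponent_action, action)
        return PAYOFFS[key][0 if me_is_us else 1]
    return max(options, key=my_payoff)

options = ["race", "cooperate"]

# If the other side cooperates, each side's best response is to race...
print(best_response(options, "cooperate", me_is_us=True))  # race

# ...but if the other side races, your best response is to back off.
print(best_response(options, "race", me_is_us=True))  # cooperate
```

The danger of chicken is exactly this asymmetry: each side races because it expects the other to swerve, and if both misjudge, they land in the (race, race) cell, the worst outcome for everyone.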
The solution is not to race as fast as possible to the endgame. The solution is to coordinate to develop and spread the boons of this powerful technology. And to have some major fraction of the human resources and computing power dedicated to “the alignment problem.” Currently, for every dollar spent on alignment, there may be a hundred spent on capabilities. This is driving us very fast off a cliff.
The alignment “problem” is fundamental to capitalism. Corporations are highly intelligent, highly resourced cybernetic organisms that have corrupted the government (through money in politics) and our psyches (through social media algorithms), and they may soon take over the whole world (through profit-seeking AI models). This must be stopped!