Opinion | Sam Altman's Imperial Reach

The tech leader's boundless ambition is putting AI, and the world, on a dangerous path.

Robert Wright, whose books include "Nonzero" and "The Evolution of God," is publisher of the Nonzero Newsletter and host of the "Nonzero" podcast. This essay first appeared in the Nonzero Newsletter.

In 2008, Paul Graham, a co-founder of the Silicon Valley firm Y Combinator, described Sam Altman, who was then 23 years old, like this: "You could parachute him into an island full of cannibals and come back in five years and he'd be the king." In 2011, Graham made Altman a partner at Y Combinator. Three years later, Graham stepped down as president of the company and crowned Altman as his successor.

It soon became clear that the Y Combinator island wasn't big enough to contain Altman's ambitions. In 2015, while still president of the company, he co-founded a nonprofit called OpenAI and became co-chair alongside Elon Musk. Within a few years, Altman and Musk were having disagreements -- over, for example, who was the alpha male. Musk left the OpenAI island and Altman settled in to run it.

I don't think I'd be good at parachuting into cannibal-inhabited islands and securing political control of them, but I imagine that, if I were good at it, I'd follow this algorithm:

Be extremely nice and accommodating for a while and gradually win the trust of the natives, who will thus cede increasing amounts of influence to you, until you have so much influence that you can drop the act and reveal your true ambitions. At that point, you can eat them.

In this light, it's worth revisiting Altman's congressional testimony in May 2023, shortly after ChatGPT had captured the world's attention. He was a picture of humility and cooperation. He professed acute awareness of artificial intelligence's dangers and encouraged the regulation of OpenAI and other such companies. For a CEO to issue this kind of invitation was so unusual that it became the headline of the New York Times story about the hearings: "OpenAI's Sam Altman Urges A.I. Regulation in Senate Hearing."

Sen. Richard Blumenthal (D-Conn.), who chaired the hearings, said, "It's so refreshing. He was willing, able, and eager."

Not to mention pure of heart! When one senator asked Altman whether he made a lot of money from AI, he replied, "No. I'm paid enough for health insurance. I have no equity in OpenAI. ... I'm doing this because I love it."

Fast forward to now.

Last week, California Gov. Gavin Newsom (D) vetoed an AI regulation bill that OpenAI opposed even though it had been watered down to a point where Anthropic, a rival of OpenAI, had dropped its initial opposition. And the week before that, with OpenAI poised to close an investment round that would bring in $6.6 billion at a valuation of $157 billion, we learned that the company plans to become a fully for-profit corporation (having converted to a quasi-for-profit one in 2019). And Altman could now get equity in OpenAI -- around $10 billion worth, according to one report.

This last bit of news, in particular, triggered something like an online festival for Altman haters. Not content to just quote his congressional testimony about the irrelevance of money to his motivational structure, they circulated the video of it. Which really is worth watching, because you haven't seen pious until you've seen Sam Altman do pious.

I take issue with these Altman haters; I think they're hating on the wrong part of Altman. What's scary about him isn't that he's good at getting rich (he's a billionaire even without any OpenAI equity), but that, as Graham told a journalist in 2016, "Sam is extremely good at getting powerful." I think he's using that power -- in Silicon Valley and in D.C. and in various centers of influence around the world -- to put the AI industry, and the world, on a dangerous course. Sometimes, you even get the impression that he's chosen this course because it would give him more power, and that the rest of us are just along for the ride.

How to describe this course? Though Altman (wisely) wouldn't use this term for it, I'd say it boils down to accelerationism -- the idea that, when it comes to technological change, and progress in AI in particular, faster is better.

There was a time when Altman sounded like the opposite of an accelerationist. In 2022, he told journalist Steven Johnson that OpenAI had been able to roll out GPT-3 slowly and carefully because the company wasn't beholden to investors who sought "unlimited profit." (This was back when OpenAI, having gone from nonprofit to quasi-for-profit, put a ceiling on how big a return investors could get -- a ceiling that will be removed if indeed the company now goes fully for-profit.) Altman continued: "I think it lets us be more thoughtful and more deliberate about safety issues. Part of our strategy is: Gradual change in the world is better than sudden change."

I agree! AI can definitely wind up being a net plus in the long run. But in the shorter run, it's going to change so many parts of society so dramatically that serious dislocation and destabilization are likely. If we could slow the pace of progress by even a little, that would give us valuable time for social and cultural adaptation -- and, of course, more time to assess and address specific AI safety issues.

Unfortunately, Altman appears to have changed his tune since 2022. Now, he says, we need to embark on the headlong construction of new data centers and power plants and chipmaking factories, which will accelerate the development and deployment of AI.

And he seems to want to place himself at the center of these efforts. In February, the Wall Street Journal reported that Altman was "in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world's chip-building capacity [and] expand its ability to power AI. ... The project could require raising as much as $5 trillion to $7 trillion, one of the people said." That's more than the gross domestic product of Germany.

And two weeks ago, Bloomberg reported that Altman had been at the White House sharing his thoughts about America's future AI infrastructure: "OpenAI has pitched the Biden administration on the need for massive data centers that could each use as much power as entire cities." Citing reports that Altman recommended five to seven data centers, each requiring around 5 gigawatts of power, Bloomberg wrote: "To put that in context, 5 [gigawatts] is roughly the equivalent of five nuclear reactors, or enough to power almost 3 million homes."

Why does Altman now think we need to move so fast? The answer depends on who he's talking to.

When at the White House, apparently, Altman trots out a kind of cold war rationale. The Bloomberg article said OpenAI was "framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China."

But in other contexts, Altman sounds less like a cold warrior and more like Gandhi. In a visionary piece he posted two weeks ago, Altman wrote: "If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people."

I'm not a hardcore AI doomer, and I don't advocate an AI "pause" -- a moratorium on training the biggest AI models. Sure, a brief pause might be nice, but it's not feasible. Still, it seems to me that:

1. We should, at a minimum, refrain from artificially stimulating the rate of AI progress via government subsidies -- subsidies that, politics being politics, any infrastructure project of the kind Altman is trying to sell to the White House would almost certainly involve.

2. We shouldn't forget about climate change. The big AI companies like to finesse this issue by saying they're trying to use green energy sources and, when they can't, will compensate by helping to finance future green energy projects. But the compensation is only partial, and most of the green energy they use in the nearer term is energy someone else would have used if they hadn't. There's no getting around the fact that any big increase in aggregate energy consumption significantly increases greenhouse gas emissions. Google's annual emissions have jumped by nearly 50 percent since 2019, and AI is the main reason.

3. We should recognize the sense in which AI's power consumption is a feature, not a bug -- at least if you share my view that slowing the evolution of AI by an increment or two would be nice. By imposing a special tax on power consumed by AI data centers, we could effect such a slowdown -- and use the revenue to fight climate change, work on AI safety, etc.

There was a time, by the way, when the most common political message you heard in Silicon Valley was about the need to fight climate change. But that was back when huge power consumption was something old-fashioned Rust Belt industries did, whereas software was a "clean" technology. Things have changed.

In 2015, Altman wrote: "Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity." So why does Altman now seem to be working 24/7 to hasten the coming of superhuman machine intelligence?

Ten months ago -- back when Altman's accelerationist tendencies were getting conspicuous, but before he had shifted into overdrive by launching his AI infrastructure crusade -- I offered a theory about that. The basic idea was that Altman thought that if he was the person who ran the first company to reach superintelligence, things could work out okay. After all, he had also written in 2015: "In an ideal world, regulation would slow down the bad guys and speed up the good guys -- it seems like what happens with the first SMI to be developed will be very important." No doubt Altman, like the rest of us, considers himself a good guy.

But now I'm toying with another theory: Altman penned those existential concerns about superintelligence 10 months before he and Musk and a few others co-founded OpenAI. And those concerns seem to echo Musk's own fears of the time. Is it possible that getting Musk to cough up the money that would get OpenAI off the ground was already on Altman's agenda? And that, in pursuit of that goal, he was stressing the commonality between his world view and Musk's?

Who knows? But that's how I'd start out if I were planning to parachute into an Elon-inhabited island.
