A review by archytas
Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson
challenging
reflective
medium-paced
4.25
"Hassabis believed so fervently in the transformative effects of AGI that he told DeepMind’s staff they wouldn’t have to worry about making money in about five years, because AGI would make the economy obsolete, former employees say. That eventually became mainstream thinking among the senior managers.."
This book is terrifying. It is also the recent read I have become most evangelical about - I might buy copies and give them out. That's not because it is particularly well written - Olson has an inclination towards schtick which irritates me slightly - but because the world of Silicon Valley Megalomaniacs (my words, not hers) she writes about has so very, very much power.
It was upon reading this book that I realised Musk is not as singular as you would assume. Even in a cohort of delusional billionaires, he stands out as unstable, but you can see that he is marinating in a world in which nuance, accountability and, well, reality have long disappeared in the rear-view mirror. It would be funny except that the impact of their actions is very, very real.
Unlike the others in the crop of new books about AI, Supremacy doesn't seek to explain or focus on the technology. It focuses on the people in charge of it - specifically Sam Altman of OpenAI (ish) and Demis Hassabis of (Google) DeepMind, both of whom are shown as passionately driven by their beliefs that AI technology could save or destroy the world. Their relative arcs are fascinating. Altman starts out in free-market start-up culture, morphs into an AI doom-predictor determined to develop the technology in a 'safe' way, and lands as a nascent billionaire willing to abandon all controls in service of megaprofits for Microsoft. Hassabis starts in the idealistic British games industry, morphs into a research-driven messiah of AI, convinced it will create a utopia on Earth by, y'know, the middle of this century tops, and then lands as a nascent billionaire willing to abandon all controls in service of megaprofits for Google. So much to think about here.
Now, to be clear, I am not terrified because I think HAL is coming for us all. There is irony in the fact that what these men have built are machines so far most successful at blandifying human life and, yes, automating some tedious or previously impossible tasks. We are not close to the singularity. But building those machines has involved staggering amounts of climate damage and billions of hours of criminally underpaid labour, and that's before talking about the children mining the necessary minerals. And deploying that technology has led to a new era of embedded racist, sexist and classist bias in government and private systems, which threatens to drastically widen existing inequalities just as our social and environmental cohesion fractures.
But to Altman and Hassabis, and their ilk - Musk, Bezos, Page and Thiel make regular appearances, as do Mustafa Suleyman and many others - these issues are like specks of dust on the window. It isn't even that they dismiss them - they simply don't see them, so insignificant do they seem in the grand quest of achieving utopia. This is a world where debating whether you are Oppenheimer or Einstein is normal (yes, it did make me feel very differently about that film; these people have an Oppenheimer *thing*) but checking what impact your actual technology is having today is not. And it is not so surprising that, in the end, the world's largest tech firms, who care for nothing but making money, are able to swoop in and tie all the idealism up into a mega-disruption-profit-generating machine.
The most cogent section of the book is where Olson explains the ethical framework most of these bros belong to. "Effective altruism" is an ideology with a tagline of "shut up and multiply". It offers a "simple, rational" pathway to decision-making based on math: work out how many lives you can save - more lives, better person. And the way to get the highest numbers is to save humanity from extinction, because that is, like, trillions of lives. Nothing else competes. You would think this might at least make climate change a priority, and for some - including, at least for a while, Musk - it does. But others see climate change as too specific (yes, really). Saving us from rogue AI? Closer. Perfecting utopia? Bingo! Which is probably why, Olson tells us, the Future of Life Institute's single biggest grant - $25 million from a crypto magnate - is more than the combined annual budget of all other AI ethics charities (the European Digital Rights initiative, which focuses on bias in algorithms, gets $2.2m a year). The Future of Life Institute's mission is to stop sentient AI from controlling weapons.
There are, I should say, some heroes who emerge from this narrative. All of them (that I can remember) are women, with Timnit Gebru leading the pack and the OpenAI board members probably bringing up the rear. That they are ultimately only effective once outside the various institutions and clubs indicates how uniformly terrible this world is.
But it is a glorious study of capitalism. In the end, it is the motivation to make money that wins out - to sell people stuff they don't need; to sell other companies stuff they think will make other people buy stuff they don't need; and yes, to sell companies stuff they think will cut their costs. Which it probably will - and we'll all be living with robotic voices, robotic art and terrible information, even if it is presented in a more naturalistic form than we usually identify with robots. And that isn't starting on the potential to isolate us from each other. (I might be coming across like I think the technology is evil - I really don't. I have advocated early adoption of things that will bring joy and knowledge. But I do think that, under the current model, bland and inaccurate is what will generate the most profit, and that is therefore what we will get.) The megalomania makes these billionaires more pliable to the dynamics of profit-making, which is its own kind of AI - a dynamic that guides the people caught in its web. The sky-high salaries of Silicon Valley aren't just competitive - they work to create an elite so removed from daily life that they lose any sense of how much suffering exists now, never mind how much of it is caused by their software. When you are the most powerful person in your daily interactions, you lose the feedback loop that generates accountability. You also lose connection, and you lose community. In many ways, these are tragic figures, but not in the evil-genius way they fear.