
Oscar Wilde’s quip, “Life imitates art far more than art imitates life”, needs updating: replace “art” with “AI”. The Amazon page for Alexander C Karp and Nicholas W Zamiska’s new book, The Technological Republic: Hard Power, Soft Belief and the Future of the West, also lists: a “workbook” containing “key takeaways” from the volume; a second volume on how the Karp/Zamiska tome “can help you navigate life”; and a third offering another “workbook” comprising a “Master Plan for Navigating Digital Age and the Future of Society”. It is conceivable that these parasitical works were written by humans, but I wouldn’t bet on it.
Mr Karp, the lead author of the big book, is an interesting guy. He has a BA in philosophy from an American liberal arts college, a law degree from Stanford and a PhD in neoclassical social theory from Goethe University in Frankfurt. So he’s not your average geek. And yet he’s an object of obsessive interest to people both inside and outside the tech industry. Why? Because in 2003 he – together with Peter Thiel and three others – founded a secretive tech company called Palantir. And some of the initial funding came from the investment arm of – wait for it – the CIA!
The name comes from palantíri, the “seeing stones” in the Tolkien fantasies. It makes sense because the USP of Palantir is its machine-learning technology – which is apparently very good at seeing patterns in, and extracting predictions from, oceans of data. The company was founded because at the time all the Silicon Valley tech companies either disapproved of government or were staffed by engineers who were adamantly opposed to working for the US military. This created an opening that Karp and his colleagues astutely exploited to build a company which simultaneously appears to be booming (current market capitalisation: $200bn) while also being regarded by critics of the industry as the spawn of the devil.
Those critics will disdainfully read the book as a kind of extended tender for public sector contracts. Civil servants contemplating employing Palantir may be interested in the description of the approach its employees adopt when working in a client’s organisation. Interestingly, it’s an approach borrowed from a Toyota executive, Taiichi Ohno, as a way of getting to the root cause of a problem occurring in some part of an organisation’s operations. It’s called the “Five Whys”: ask why a problem occurred, and then ask why four more times.
“Why did an essential update to an enterprise software platform not ship by a Friday deadline?” the co-authors write. “Because the team had only two days to review the draft code. Why did they only have two days to review? Because it had lost six software engineers in the budget review cycle late last year. Why did its budget decrease? Because the head of the group had shifted priorities elsewhere at the request of another group lead. Why was the request made to shift priorities? Because a new compensation model had been rolled out incentivising growth in certain areas. Why were certain areas selected at the expense of others? Because of an ongoing feud at the company between two senior executives.” You get the idea. It’s not rocket science. Or AI, come to that. Maybe Keir Starmer should try it out. And it’ll be cheaper than employing McKinsey.
But I digress. The argument of the book is suffused with indignation at what Karp sees as the arrogance and small-mindedness of Silicon Valley, which has collected the greatest concentration of engineering skill the world has ever seen – and then deployed it to create consumer toys and diversions that make tech founders insanely rich rather than using that talent to create technologies that would buttress the national welfare and security of the United States. What’s particularly galling to him is the fact that the wealth of Silicon Valley was built on a technological foundation that was laid – and paid for – by the state, and yet its beneficiaries appear to have nothing but contempt for government. They have prioritised consumer gratification and their own wealth-creation over everything else.
“The grandiose rallying cry of generations of founders in Silicon Valley was simply to build,” write Karp and Zamiska. “Few asked what needed to be built, and why. For decades, we have taken this focus – and indeed obsession in many cases – by the technology industry on consumer culture for granted, hardly questioning the direction, and we think misdirection, of capital and talent to the trivial and ephemeral. Much of what passes for innovation today, of what attracts enormous amounts of talent and funding, will be forgotten before the decade is out.”
Underpinning much of the book’s lamentations are two enduring themes. The first is a kind of nostalgic longing for the wartime and postwar collaboration between the American state and its scientists and engineers, which made the US a technological colossus. For Karp, as for many other thinkers like him (including the UK’s own Dominic Cummings), the Manhattan Project that created the atomic bomb looks like a lost nirvana.
The second theme is a chronicle of what the authors call “The Hollowing Out of the American Mind”: the abandonment of belief, the agnosticism of technology, the “assumption that the correctness of one’s views from a moral or ethical perspective precludes the need to engage with the more distasteful and fundamental question of relative power with respect to a geopolitical opponent, and specifically which party has a superior ability to inflict harm on the other. The wishfulness of the current moment and many of its political leaders may in the end be their undoing.” This is the “soft belief” of the book’s subtitle, and it’s why this section of the book sometimes evokes echoes of the conservative philosopher Allan Bloom on song.
There’s a lot of hegemonic anxiety in Karp’s musings. For him, American primacy is the key to the survival of the civilisational values that he reveres. He’s also a disciple of the Nobel laureate economist Thomas Schelling, and shares his view that “to be coercive, violence has to be anticipated… The power to hurt is bargaining power. To exploit it is diplomacy – vicious diplomacy, but diplomacy.”
But the power to hurt is a prerogative of “hard” (ie military) power, and Karp seems particularly incensed by what he sees as the “precious” reservations of Google employees about the possibility that their technologies might be put into military hands. (It may also have been one of the motivations for the founding of Palantir.) His irritation seems unduly harsh to me. All of these employees (and their parents and grandparents) have lived through an era in which the idea that the United States might again be involved in an all-out war seemed as preposterous as the idea that their inventions might be used in battle. In that sense, the west has been on an 80-year-long holiday from history, from which Putin has rudely awoken us.
The lesson that Karp and his co-author draw from all this is that “a more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two, will be required if the United States and its allies are to maintain an advantage that will constrain our adversaries over the longer term. The preconditions for a durable peace often come only from a credible threat of war.” Or, to put it more dramatically, maybe the arrival of AI makes this our “Oppenheimer moment”.
In the summer of 1939, Albert Einstein and Leo Szilard sent a letter to President Roosevelt, urging him to explore the construction of an atomic bomb – and quickly. The rapid advances in the technology, the two scientists wrote, “seem to call for watchfulness and, if necessary, quick action on the part of the administration”, as well as a sustained partnership with “permanent contact maintained between the administration” and physicists.
In that historical context, maybe the arrival of this book is timely. For those of us who have for decades been critical of tech companies, and who thought that the future for liberal democracy required that they be brought under democratic control, it’s an unsettling moment. If the AI technology that giant corporations largely own and control becomes an essential part of the national security apparatus, what happens to our concerns about fairness, diversity, equity and justice as these technologies are also deployed in “civilian” life? For some campaigners and critics, the reconceptualisation of AI as essential technology for national security will seem like an unmitigated disaster – Big Brother on steroids, with resistance being futile, if not criminal.
On the other hand, some of the west’s adversaries (Russia, China) are already using this technology against us, and we urgently need to tool up to address the threat. When these thoughts were put to Mr Karp by a New York Times reporter, he replied: “I think a lot of the issues come back to: ‘Are we in a dangerous world where you have to invest in these things?’ And I come down to yes. All these technologies are dangerous. The only solution to stop AI abuse is to use AI.” Hobson’s choice, in other words.
The Technological Republic: Hard Power, Soft Belief and the Future of the West is published by The Bodley Head (£25). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply
• This article was amended on 2 March 2025 to correct the spelling of Nicholas W Zamiska’s surname.
