In the summer of 1967, thousands of young people descended on San Francisco's Haight-Ashbury district for what would become known to history as the "Summer of Love." They were rejecting consumerism, questioning authority, and imagining alternative ways of organizing society. At the same time, just forty miles south, in what would soon be called Silicon Valley, engineers and entrepreneurs were laying the groundwork for a technological revolution.
These two movements—seemingly opposed in their values and visions—would eventually converge in a way that nobody could have predicted. Their unlikely fusion would create what Richard Barbrook and Andy Cameron later called "The Californian Ideology," a worldview that would evolve further, come to reshape global capitalism and geopolitics, and redefine how Western society thinks about technology and innovation itself.
Consider Stewart Brand. In 1968, Brand published the first Whole Earth Catalog, a countercultural bible that Steve Jobs would later describe as "Google in paperback form." The catalog embodied the DIY, anti-establishment ethos of the time, featuring tools for sustainable living and consciousness expansion. Brand's slogan—"We are as gods and might as well get good at it"—captured its audience's blend of techno-optimism and belief in self-determination.
Fast forward to 1985, and Brand was helping to launch the WELL (Whole Earth 'Lectronic Link), one of the first influential online communities. The countercultural icon had found a new home in emerging technology. Brand wasn't alone in making this transition. Many former hippies who had rejected corporate America were founding technology companies, and Gen-Xers who had grown up on cautionary tales of selling out (think of Winona Ryder’s dilemma in "Reality Bites") were discovering that capitalism wasn't so bad if you could do it in jeans while talking about making the world a better place.
This isn't just coincidence. There's something psychologically compelling about this fusion of countercultural ideals and entrepreneurial ambition. It allows for a uniquely satisfying narrative: you can pursue wealth and power while still seeing yourself as a rebel fighting the system. You're not a corporate suit; you're a disruptor. You're not crassly seeking profit; you're building the future.
This narrative has proven irresistible[1] to tech elites like Marc Andreessen, who co-created the first widely-used web browser, and Peter Thiel and Elon Musk, who helped build PayPal. Each has positioned himself as an outsider battling entrenched interests, even as they've accumulated billions in personal wealth and unprecedented influence over our politics and our daily lives.


But what exactly are the components of this ideology? What are its fundamental beliefs, and how did they come to dominate not just Silicon Valley but increasingly the global economy?
From ideology to product to policy
At least six core beliefs define the ideology of the tech elite:
Men should be judged by their creations, not their credentials
Optimization is a valorous end in itself
Technology is humanity’s salvation
Establishment institutions deserve to be disrupted
Information (and your data) want to be free
Market value is the ultimate source of truth
These aren't just abstract ideas; they shape how decisions are made, products are built, and success is defined. And perhaps most importantly, they have become the scaffolding on which tech policies in the US have been built—even though I suspect you'd be hard-pressed to find an American who has cast a ballot in the hope that these beliefs would dictate how our country is run.
I’ll briefly introduce each of these concepts here, and more in-depth posts on each will follow in the weeks ahead.
Men should be judged by their creations, not their credentials
In 2004, a 19-year-old Mark Zuckerberg famously dropped out of Harvard to pursue his fledgling company, Facebook. This was the perfect embodiment of Silicon Valley's meritocratic ideal: what matters isn't your degree but what you can build. Peter Thiel took this ethos to its logical conclusion when he established the Thiel Fellowship in 2010, offering $100,000 to young people willing to drop out of college and “build things” rather than “sit in a classroom.”
There's something democratic and appealing about this idea. But here's the paradox: this supposedly credential-free builder meritocracy has consistently favored people who had privileged access to technology when it was prohibitively expensive—predominantly young, white men[2] from upper-middle-class backgrounds who attended elite institutions before dropping out.
The discounting of credentials and expertise in favor of builders has also meant favoring engineers whose approach to societal problem-solving may not be compatible with the needs of a democratic polity with diverse constituencies. As Christine Rosen has argued, technosolutionism is not the answer:
Technosolutionism is a way of understanding the world that assigns priority to engineered solutions to human problems. Its first principle is the notion that an app, a machine, a software program, or an algorithm offers the best solution to any complicated problem. Notably, the technosolutionist’s appeal to technical authority, even for the creation of public policy or public health measures, is often presented as apolitical, even if its consequences are often not. Technosolutionism speaks in the language of the future but acts in the short-term present. In the rush to embrace immediate technological fixes, its advocates often ignore likely long-term effects and unintended consequences.
Devaluing wisdom in favor of "building" has had far-reaching consequences. The ascendancy of tech founders in the public sphere over the last twenty years has contributed to what Tom Nichols of The Atlantic has called the death of expertise, and the hollowing out of that expertise in key federal government positions during Trump's first term led Michael Lewis to write The Fifth Risk.
Optimization is a valorous end in itself
In 2018, a story broke that Amazon had patented a wristband that could track warehouse workers' hand movements in real-time. The device would vibrate to nudge workers' hands in the right direction if they reached for the wrong item, and it would record precisely how long it took to complete each task. This wasn't science fiction—it was the logical endpoint of a culture where optimization is seen not just as a tool, but as a moral imperative.
The optimization obsession traces back to Frederick Taylor and his stopwatch-wielding efficiency experts of the early 20th century, but Silicon Valley has taken it to an extreme. Amazon's fulfillment centers represent perhaps the purest expression of this mindset: every movement is measured, every second accounted for, every process continuously refined in pursuit of maximum efficiency.
Think about the language tech companies use. They don't just improve things; they "optimize" them. They don't solve problems; they create "optimal solutions." The engineering mindset—where everything can and should be measured, analyzed, and improved—has escaped the confines of technical systems and been applied to human behavior, social interactions, and even love (just look at dating apps' matchmaking algorithms).
But what happens when this optimization imperative collides with human needs? Amazon's warehouse workers have been increasingly replaced by robots because they developed repetitive stress injuries from maintaining the "optimal" pace – not to mention needing to take bathroom breaks or take shelter during a tornado. Driver-routing algorithms maximize efficiency but clog once-quiet neighborhoods. And it’s not just Amazon: YouTube's recommendation engine, like those of most social media products, is optimized for engagement but ends up promoting increasingly extreme content. And yet this same approach is the one Musk’s DOGE is explicitly taking with the federal government workforce.
Of course, if you’re an outsider to this ideology, it’s clear that not everything that could be optimized ought to be. Human communities, democratic deliberation, and cultural expression don't necessarily improve when optimized for efficiency. Some values—like justice, beauty, or dignity—don't lend themselves to algorithmic optimization at all.
Technology is humanity’s salvation
When Marc Andreessen proclaimed that "software is eating the world" in a 2011 Wall Street Journal op-ed, he wasn't just making an observation about market trends. He was articulating a worldview where technology, not politics or social movements, drives human progress, which he reaffirmed in his bizarre, self-published 2023 Techno-Optimist Manifesto.
This belief in technological determinism can be traced back to figures like Alan Kay, who worked at Xerox PARC in the 1970s and famously said, "The best way to predict the future is to invent it." It's a seductive idea: rather than getting bogged down in messy political processes, we can directly engineer a better future through technological innovation.
But a sincere wish to build a better future has morphed into something more millenarian. Consider Elon Musk's Mars obsession. When he talks about making humanity "multiplanetary," he isn’t just describing a scientific endeavor—he's talking about salvation. "History is going to bifurcate along two directions," Musk has declared. "One path is we stay on Earth forever, and then there will be some eventual extinction event... The alternative is to become a space-faring civilization and a multi-planetary species." Disruption of our very relationship with our home planet isn't just desirable; it's necessary for survival.
This savior mentality extends to the human body itself. Silicon Valley's obsession with longevity technologies, from Peter Thiel's rumored interest in young blood transfusions (memorably satirized in HBO's "Silicon Valley" with the character Gavin Belson's "blood boy") to Google's anti-aging subsidiary Calico, reveals a fundamental discomfort with human limitations. Bryan Johnson's Blueprint protocol, where he spends millions annually measuring and optimizing every bodily function to reduce his biological age, represents this drive taken to its logical conclusion. Sam Altman has reportedly invested in a company attempting to preserve brains for future uploading. When death itself is framed as a technical problem waiting for disruption, no institution is sacred.
What makes this technological salvation narrative so powerful is that it offers a comforting certainty in uncertain times: no matter how dire our problems—climate change, pandemics, political polarization, the fact that we as humans are all mortal—a technological solution awaits. This faith provides a convenient escape hatch from having to engage with messy social and political realities. Why fight for climate legislation when Musk will build electric cars and carbon capture machines? Why reform healthcare when we'll soon have AI diagnostics and CRISPR cures? The promise of technological salvation allows tech elites to position themselves as humanity's saviors while opposing anything that does not align with their financial interests. Meanwhile, the social contexts that produce our most pressing problems—inequality, environmental degradation, democratic erosion—remain unaddressed, their continuation ensured by the very belief that technology alone will save us from them.
Establishment institutions deserve to be disrupted
"Move fast and break things." This mantra, embraced as Facebook's official motto until 2014, perfectly encapsulates Silicon Valley's attitude toward existing institutions. The word "disruption"—originally an academic term from Clayton Christensen's theory of innovation—became the battle cry of tech entrepreneurs eager to overthrow established industries and institutions.
The disruption narrative reached its logical conclusion with figures like Balaji Srinivasan, who delivered a notorious speech titled "Silicon Valley's Ultimate Exit," advocating for people to "exit" traditional society altogether and build new digital nations. This isn't just theoretical—it translates into active resistance against regulation and democratic oversight.
Consider how Uber deliberately flouted local taxi regulations as it expanded globally, essentially betting that its popularity with consumers would force regulators to adapt. When pressed about the company's regulatory battles, Travis Kalanick famously responded, "We're in a political campaign . . . The candidate is Uber and the opponent is an asshole named Taxi." Or how crypto advocates have promoted their technology as a way to bypass central banks and financial regulators altogether, with Bitcoin's genesis block containing a London Times headline about bank bailouts as a built-in critique of the financial system. And of course, DOGE's dismantling of entire swaths of the federal government that it has made no effort to understand is just the latest and most extreme example of this drive to disrupt no matter what.
What's forgotten in this disruptive zeal is that many "disrupted" institutions emerged to solve genuine social problems, and their apparent inefficiencies often reflect deliberate checks and balances. The institutions we've built over decades may be imperfect, but they often embody hard-won social compromises that disruptors dismiss at society's peril.
Information (and your data) want to be free
"Information wants to be free" became a rallying cry for early internet culture after Stewart Brand popularized the phrase in the 1980s, but the belief has deep roots in early hacker culture, perhaps going back as far as MIT's Tech Model Railroad Club in the late 1950s—one of the birthplaces of hacking. This belief is behind admirable projects like Wikipedia and open-source software, but also behind more sinister ones like Mr. Deepfakes.
In practice, this freedom has been selectively applied. The same companies that benefit from users freely sharing their personal information zealously guard their algorithms and data. Google wants your search history to be "free" for them to use, but their ranking algorithm is a closely held secret. And "free" information often means unpaid labor and devalued creative work, as everyone from actors to musicians to the New York Times has found in the age of generative AI.
The evolution of Meta illustrates this contradiction perfectly. What began as a free way to connect with friends transformed into a sophisticated surveillance machine where users "freely" provide valuable data that is anything but free once Facebook sells ads against it. Now that data, along with pirated copyrighted materials, is being used to train Meta's generative AI system, which is offered to the public . . . for free.
In reality, information is the raw material of this particular builder culture. Engineers raised in this ideology who work at today's tech titans are no longer tinkering with cobbled-together circuit boards ordered from a catalog, but with ever more sophisticated ways of crunching as much data as possible to improve predictive algorithms.
Market value is the ultimate source of truth
Despite their countercultural trappings, Silicon Valley elites have embraced perhaps the most conventional belief of all: market fundamentalism. "The market has spoken" serves as both explanation and justification for outcomes, no matter how disturbing.
The billions in capital attracted by AI companies with questionable models of future revenue are in turn used to justify the value their products will allegedly provide to humanity. The smoke will produce fire, rather than the other way around. When Elon Musk points to Tesla's market capitalization as evidence of its value to humanity, he's invoking this belief. The circular logic is striking: a company's stock price reflects its true value, and its true value is reflected in its stock price.
Yet markets routinely fail to price in externalities like environmental damage, social division, or privacy violations. Lately, it has become clear how much of Tesla's market cap is derived not from its future value as an automaker—especially when compared to, say, General Motors or Ford—but rather from the Edisonian myths Musk has spun for himself. Meta's market value says nothing about its impact on democratic institutions or teenage mental health, save to reflect that to date it's largely been exempt from any accountability, much like Purdue Pharma prior to the first opioid settlements.
By equating market value with social value, tech elites neatly sidestep challenging questions about their products' broader impacts. And like adherents to a prosperity gospel, they believe their massive wealth is not only their reward for adherence to the beliefs described above, but the ultimate proof that they are in the right.
From ideology to myth
What makes this ideology so powerful—and so dangerous—is that it contains partial truths. Credentials can indeed be exclusionary. Technologies have improved and saved human lives. Some institutions do need reshaping. Efficiency can improve both business and government. Information sharing can foster innovation. Markets do often allocate resources effectively.
But each of these beliefs becomes problematic when taken to an extreme and divorced from broader values and contexts. The ideology elevates means (technology, markets, disruption) over ends (what kind of future we want to build).
As we face converging crises, the limitations of this ideology have become increasingly apparent. We need technological development guided by laws and a broader view of American values rather than billionaire whims. We need recognition that markets, while powerful tools, cannot substitute for collective decision-making about social priorities. And we need to reclaim the concept of innovation from those who have used it to advance their own interests at society's expense.
In my next post, I'll unpack how Silicon Valley has not only co-opted but fundamentally transformed our understanding of innovation itself—creating a powerful mythology that equates technological change with progress while obscuring the question of who truly benefits from this particular vision of the future.
[1] I fell for it myself, back in 2014, when I left working in foreign aid to take a job at Amazon, where I thought I could do well by doing good (and pay off my student loans).
[2] I refer only to men above purposely — in my observation, women rarely get credit for either credentials or creations in this culture.