Originally published here.
As a venture capitalist, three sayings have shaped the way I look at my work. They helped to give me the stomach for it, teaching me to take investing very seriously on the one hand, while also holding it lightly enough to take the risks required to back outlier companies. Now, they are utterly failing me and our entire industry.
The first one was: “Do not be a surgeon who cannot see blood.”
A friend told me this after I had anxiety over losing my first million-euro investment in a company that failed. He remembered being in my shoes years earlier, spiralling, questioning his judgment, his future in the industry, his self-worth. Reflecting on his own experience, his advice was blunt: losing companies is part of the quest to back rare outliers. If you cannot watch money die, you should not be in this job. The task is to pull yourself together, learn from mistakes, and get back to hunting the next outlier.
The second one was similar. A well-known investor I once worked with said to me: “Whatever correction the market brings, fear cannot have a seat at the table.” He meant that no matter what happens in the market, no matter how brutal the down-rounds or write-offs, we will not let a downside mindset colonise our investment meetings. Tomorrow is always day one. Any new company could be the next generational outlier. Fear is not allowed in the room because it will cloud your view of the upside.
Then came the third saying, the one every young investor hears sooner or later: “We are not saving lives here.” Unlike doctors or fire fighters, the story goes, we are merely stewards of capital in a system that is already quite forgiving of mistakes. It does not matter if you make twenty bad investments, as long as you make one outstanding one, because the returns from that one outlier will more than compensate for the others. “We are not saving lives” is an invitation to take more risk, to focus on upside above all else. In the worst case, you only ever lose money. Never a life.
All three sayings teach a similar lesson: cultivating emotional resilience is important for driving outsized success in this industry. For a long time, I found comfort in that. It helped me push through tough decisions, tolerate volatility, and avoid paralysis. It allowed me to avoid a knot in my stomach when we decided to restructure a startup, even as real humans packed their desks into cardboard boxes.
Increasingly, I look at the venture ecosystem and cannot help but think that we are now getting this all wrong.
“We are not saving lives? Yes, no shit. We are wrecking them.”
The old mantras were meant to give us more risk appetite. But if you zoom out to the world we are actually building nowadays, the stakes look very different to what they were ten or twenty years ago. At the scale we now operate at, with technologies like AI and platforms that mediate the attention, work and relationships of billions of people, the claim that “we are not saving lives” starts to feel both inaccurate and irresponsible. The products we back shape mental health, political discourse, job security, even the factual stability of whole societies. Perhaps we should be wanting to save lives here. At the very least, we should admit that we are fully capable of harming them.
The things we were chasing
For a long time, we measured the success of our industry in a few ways.
First and foremost, we sought to maximise the total return of an investment. Since Milton Friedman showed us the way in the 1970s, maximising shareholder value has been the bible of any good capitalist.
Second, we cared about the net benefit a product offers society. The classic example: pacemakers save the lives of countless heart failure patients. The idea was that founders build products to solve problems people have. VCs sought to differentiate “painkiller” products from mere “vitamins”, and in starting a company, founders constantly asked themselves whether they were solving real “hair-on-fire” problems.
Third, we valued the number and quality of jobs created in the tech sector. “Look at how many great jobs we created” is one of the slides you could always find in any tech ecosystem report. Tech companies displayed their growing employee numbers on internal leaderboards. I took pride in knowing that a large part of our investment into a company went into hiring strong talent.
Lastly, we emphasised the positive flywheel hugely successful companies create: employees and founders who cash out subsequently become angel investors, repeat founders, and mentors. Capital and experience flow back into the ecosystem and hopefully produce even more ambitious companies. This is the promise of Silicon Valley and venture capital: a perpetual motion machine for innovation and prosperity.
For a while, these were the stories we told ourselves. Like mythical folklore, they became an integral part of the VC narrative, part of manifestos, news headlines and social media posts. They were mostly true, or at least not entirely false. Some joined the industry because they believed in most of them. Others joined the industry because they believed in only one of them.
The new anatomy of outliers
Now, in the early days of the AI disruption wave, I find myself increasingly wondering whether we still have the right frameworks, or even the right instinct, to scrutinise the benefits and harms of the companies we back.
The maximisation of shareholder value is working exceptionally well. With the combined market cap of the Magnificent Seven at around $20tn and a global GDP of $118tn in 2025, large tech companies are the de facto gravitational centre of our world. Each new technology wave is concentrating wealth in a shrinking number of companies and, within them, in a shrinking number of individuals.
Many of us await the one-person billion-dollar AI company like a Messiah, proof that the techno-capital machine has reached its final form. We are racing to build extremely lean tech companies, with the new normal being two to four million dollars of revenue per employee. But as a consequence of ever fewer people working at them, AI companies may lack the broader ecosystem flywheel we saw during prior waves of technology platform shifts.
Moreover, as these companies stay private for much longer, they remain inaccessible to many people outside the VC industry. Fewer than 15 per cent of US companies with more than 100 million dollars in revenue are publicly listed, which means the majority of sizeable firms are only accessible to professional or very wealthy investors. What is more, the number of publicly traded US companies peaked at about 8,000 in the mid-1990s and has fallen to just over 4,000 today, despite booming markets and recent IPO activity.
Returns on both capital and labour are being concentrated in the hands of fewer and fewer people, while inequality is rising across the globe. Since the 1970s, returns on capital have steadily pulled away from returns on labour, hollowing out the social contract. One way out could be much broader societal participation in technology-driven wealth creation, so that both wages and capital incomes grow.
Yet for now, due to ever stronger wealth concentration, we are moving in the opposite direction. Apart from shallow references to “Universal Basic Income”, there is almost no compelling narrative, or responsibility taken from within the VC industry, for how the AI transition might play out in the real lives of people who do not own equity in the winning firms.
For all the countless manifestos, glossy launch videos of AI products, and utopian AI opinion pieces, there is little societal conversation about the future that awaits us, and an honest reflection about the role that founders and capital allocators play in it. More and more VC-backed products are devoid of any serious attempt to contextualise impact in critical areas such as labour automation or AI-generated content. Even the most resourceful companies avoid critically reevaluating their role in the ecosystem. Meta launches slop apps and dodgy AI chatbots, without even trying to offer a serious narrative for how any of it is a net benefit to society. The absence of a responsible narrative from leaders is not neutral: it is a choice to treat humans as eyeballs, rather than citizens or neighbours.
As an industry, we seem to have shifted from measuring success by how many jobs we create to how many we automate. We are not really building AI agents to replace software, we are building them to replace labour. Every second pitch these days is basically “how an AI agent can automate the most jobs in the shortest time”. Humans-in-the-loop still play a role because the technology is not fully capable yet, not because there is any deep sense of responsibility about how AI should be rolled out.
The few efforts that try to wrestle with this, like Anthropic’s work on economic research, or initiatives such as AI 2027, are too often instantly dismissed as decelerationist, regulatory capture, or fearmongering. We rarely pause to consider that many of these people could simply be feeding the same capitalist machine, quietly, lucratively, and without the social friction that comes with raising inconvenient questions.
Lord Acton once wrote: “Power tends to corrupt, and absolute power corrupts absolutely.” We like quoting this about politicians and dictators. We rarely apply it to ourselves: the 35,000 people in the global venture community and the many more who build, scale, and monetise the technologies that will reshape work, politics and culture within a single generation.
Capitalism as engine and excuse
Up until now, capitalism has been the best recipe for success the modern world has found. For all the justified criticism of inequality, it remains true that global poverty has fallen dramatically over the past decades because of it. Countries like China, Indonesia and the Philippines lifted hundreds of millions of people out of poverty through growth. Capitalism was the engine that made that possible.
On the one hand, those who genuinely care about making the world better should care about economic growth and acknowledge its benefits. I do not know an alternative model that has scaled as effectively at improving livelihoods globally.
On the other hand, I feel very strongly that the capitalist values that have served the tech industry well, at least well enough to turn it into a vast growth engine for the global economy, now require an update. The combination of extreme technological leverage, winner-takes-most dynamics and institutional fragility creates a situation that requires our industry to adopt new guiding principles or else risk the ruin of the very ideals it once stood for.
Instead of clinging to old truisms and belief systems, the VC and technology world needs to shed its old skin like a reptile, leaving it behind in a rumpled heap of dry scales, an artefact of a great period, and emerge fresh, glowing and ready to master new quests.
The stories we tell ourselves to sleep at night
When I discuss these thoughts with fellow investors, the same few arguments always come back, the kind that well-meaning people tell themselves so they can keep doing their jobs and not get distracted with meaningless “woke stuff”.
First: “The market is the market.” These jobs would disappear anyway. We are simply accelerating an inevitable correction. If we keep them alive artificially, we distort the market and tie up human and financial capital that could be more productively deployed elsewhere.
There is, of course, truth in this. Markets do correct. But “the market” is not an act of God. It is the emergent outcome of millions of decisions, many of them made by people in rooms like ours. When we say “the market did this”, we often mean “we chose this, collectively, and would prefer not to dwell on the emotional weight of that choice”.
Rather than framing our actions as zero-sum - where we either build the product that replaces human labour or we don’t - I think we should reframe them as positive sum. If we build the product that replaces labour, what are other things we need to be doing at the same time?
Second: “Better us than somebody else.” At least we will do it responsibly. At least we will manage the disruption thoughtfully, treat people well, and redirect the gains to good causes.
Sometimes this is sincerely meant. Often it is at least partially true. But “better us than them” is still a story in which the primary goal is to preserve our licence to keep extracting value. We are sitting at the bedside of the terminally ill whose inheritance we want to claim, just with a slightly nicer manner.
What hard evidence in our own actions proves that we are better stewards of disruptive change than others? What is our denominator of good? How do we define human flourishing?
Behind closed doors, most VCs still blindly chase the maximisation of shareholder value, rather than embedding it in a set of conscious choices that involve real trade-offs.
Lastly: “People will adapt.” They will retrain, learn new skills, move to where the jobs are, or take the jobs we have not been able to fill, like care work. Human beings are incredibly adaptable. The Industrial Revolution was brutal and we got through that, too.
Again, there is some truth in this. People do adapt. But saying “people will adapt” from a position of relative security is different from being the person asked to adapt while raising children, caring for a sick parent, or navigating a fragile democracy that is itself struggling to adapt to AI-generated information overload. And adapting over the course of many decades is different from what is shaping up to be the most rapid change humanity has ever seen.
If need be, humans can be incredibly resilient - let us never underestimate ourselves. We have survived wars and famine. But the truth is that we have no idea how fast people and institutions can realistically adapt to the speed of change we are unleashing. We are running an experiment on billions of lives with very little informed consent, and so far we are doing little to translate the impact of what is coming for those who may be most adversely affected by it.
Inviting fear back into the room
The whole industry has been hiding behind the smoke screen of “solving real problems” while relentlessly optimising for capital maximisation. I say this as someone who has benefited from and co-championed that system. AI now accelerates everything, deepening the moral fault lines that have so far been barely visible, and cracking us all wide open in the process.
This brings me back to those sayings that raised me as an investor.
“Do not be a surgeon who cannot see blood.”
“Do not let fear have a seat at the table.”
“We are not saving lives here.”
I understand what they were trying to protect. You cannot do this job if you collapse when a company fails, a market turns, or a new platform shift takes shape. Emotional resilience matters if you want to keep championing the upside of technology. But detachment is not the same as resilience. There is a point at which training yourself not to consider and not to feel becomes a form of moral injury.
I am starting to think that fear should have a seat at the table. Not the panic that paralyses investment decisions, but the unease that asks: “Who pays for this efficiency? What happens to the people on the other side of this product? If my kids lived in the median household my portfolio is disrupting, what would I want us to do differently?”
In German, there is a powerful word that captures the consideration I think is needed to meet this moment: Ehrfurcht. It consists of the words Ehre (honour, dignity) and Furcht (fear), describing a reverent fear that arises from honouring something greater than oneself. In the wake of ever more powerful technology, Ehrfurcht needs to have a seat at the table.
The question for our industry is not whether we keep backing ambitious, world-changing companies. We will, and we should. The question is whether we can do it with our eyes and hearts more open.



