I have been thinking a lot about AI dichotomies recently. That is: in what ways does AI progress amplify opposing forces along the same spectrum? Rather than reading these opposites as contradictions, we can use them to frame the edges of how AI will reshape the human experience. In what follows, I explore four core dichotomies that shape how we think, build, and feel our way through the AI transformation.
Dichotomy 1: AI Dystopia or Utopia
The most obvious dichotomy is also the most general one - that of AI as dystopia or utopia. Both camps attract fervent believers, with a zeal almost evocative of a new religion born from the God-like power AI has over the fate of humanity.
AI dystopia
On the dystopian end of the spectrum sits a group often dismissed as “AI doomers”. They see little upside for humans in the trajectory of AI development. Rather, they are driven by the belief that AI poses an existential threat to humanity or, in a less extreme view, that it will significantly amplify existing inequalities. They fear that powerful AI will concentrate resources in the hands of a few, with everyone else fighting for scraps or sedated by AI-generated content so as not to notice the harm being done to them.
This end of the spectrum is perhaps best exemplified by Eliezer Yudkowsky, often considered the godfather of the rationalist community. This community is concerned with the existential risks stemming from advanced AI systems, and with how to ensure these systems behave in alignment with human values. Yudkowsky’s latest book is titled “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All”, leaving little ambiguity about his forecast for humanity as a whole.
AI utopia
On the opposite end of the spectrum sit the AI maximalists: those who believe that artificial intelligence will not merely improve the world, but fundamentally save it. This worldview is perhaps best exemplified by venture capitalist Marc Andreessen, who regularly argues that “AI is making everything we care about better” and who authored “The Techno-Optimist Manifesto”, published by his firm a16z. The logic is simple: AI drives productivity, which leads to lower prices, higher wages, and economic growth. AI powers the ultimate upward spiral of prosperity because humans always want more things, and AI will enable us to make them. This camp also believes that technology delivers optimal outcomes when left unchecked by regulation, and that free markets are the best mechanism for innovation.
Much like the rationalist community, the utopia camp has attracted a set of avid believers, loosely affiliated within the effective accelerationism movement (e/acc) - a community born on Twitter/X that wants to accelerate AI progress, champions open-source models, and rejects concerns around safety or existential risk as overblown fearmongering.
These two worldviews are each driven by very smart people, yet operate with perfectly opposing logic: one sees unchecked AI as the fastest path to ruin, the other as the only path to salvation.
Dichotomy 2: The positive or negative human interaction effect
Another compelling question is whether, in the face of increasingly powerful AI - defined here as AI that is better than most humans at most things - human touch will still command a premium, or whether the opposite will hold: fully automated machine experiences will command a premium, and human touch will incur a penalty. I have come to think of this as the positive or negative human interaction effect. Consider two examples: wine recommendations and robotaxis.
Positive human interaction effect
At one end of the spectrum lies the positive human interaction effect: the presence of a human amplifies the perceived quality of the experience. Here, human touch functions as an enhancer. For example, consider an A/B test in which guests at a restaurant receive wine recommendations either from a human sommelier or an AI sommelier. The recommendation content is held constant: the same wine is recommended and described with the same attributes. The wine recommended by the human sommelier would likely nonetheless be perceived to taste better and pair more harmoniously than the wine recommended by the AI. This is not due to objective differences in the product (the recommendations were identical), but to the moderating role of the human source on the guest’s subjective interpretation of the wine’s taste. The effect reflects attribution bias and social cognition: people infer care, intentionality, or expertise when a human is involved, particularly in domains where emotional nuance or meaning play a role. Most sommeliers operate with passion, elegance, and finesse, offering a positive human baseline - they are additive to the guest experience.
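To make the thought experiment concrete, here is a minimal simulation sketch in Python. It is purely illustrative: the 1-10 rating scale and the +0.8 “human halo” effect size are assumptions standing in for attribution bias, not data from any real study.

```python
import random
import statistics

random.seed(42)

def simulate_rating(source: str) -> float:
    """Simulate a guest's taste rating (1-10) for an identical wine.

    The wine and the recommendation text are held constant; only the
    perceived source differs. The +0.8 'human halo' is a hypothetical
    effect size standing in for attribution bias.
    """
    base = random.gauss(7.0, 1.0)             # intrinsic enjoyment of the wine
    halo = 0.8 if source == "human" else 0.0  # assumed human-source bonus
    return min(max(base + halo, 1.0), 10.0)

human_group = [simulate_rating("human") for _ in range(500)]
ai_group = [simulate_rating("ai") for _ in range(500)]

print(f"Human sommelier mean rating: {statistics.mean(human_group):.2f}")
print(f"AI sommelier mean rating:    {statistics.mean(ai_group):.2f}")
```

Any gap in the printed means comes entirely from the simulated source effect, since both arms draw from the same underlying “wine” distribution - which is exactly the claim of the positive human interaction effect.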
Negative human interaction effect
At the other end of the spectrum lies the negative human interaction effect (or, reframed, the positive machine interaction effect), where human involvement actively detracts from the experience. In these cases, human touch is perceived as negative, something that adds friction. Consider autonomous vehicles in San Francisco: riders now routinely pay more to ride with Waymo than with a human driver via Uber or Lyft, even though the route and outcome are identical (you get to your destination on time). The appeal of robotaxis lies in the absence of small talk, social ambiguity, and variable service quality. Machines are seen as cleaner, safer, and more desirable precisely because they remove the human. Arguably, (too many) ride-share drivers have a negative human baseline - on average, they detract from the rider experience.
These two poles describe a spectrum of human interaction value: whether human involvement enhances or diminishes the perceived value of a given experience. As AI systems match or surpass human performance across domains, the central question is no longer whether humans can do something better. It is whether the presence of a human makes an experience feel better or worse.
Dichotomy 3: Powerful AI makes human intelligence either meaningless or even more meaningful
In the age of AI, human intelligence stands at an interesting fork in the road. It will either become so meaningless that we stop noticing it altogether, or it will become more central than ever before.
Human intelligence becomes meaningless
Light used to be a luxury. To borrow Slater Stitch’s brilliant example from Automate or Die: in 1800, keeping the equivalent of a single 60W bulb running for a year would have cost £70,000. The average household couldn’t afford more than a few hours of candlelight per evening. In 1712, the Duchess of Montague reportedly spent the equivalent of $6 million in today’s money on candles for a single night. Guests at lavish events counted candles the way we might count Instagram likes today.
Now, light is just... there. When was the last time you counted the candles or light bulbs around you? Light is so cheap and abundant that we barely notice it. Nobody brags about their lighting anymore.
What light is today, intelligence may become in the future. It will feel quite weird to talk about how smart somebody is - or to think about it much at all. Like complimenting the lighting in a room: not really wrong, just irrelevant. Not something you would ever pay attention to.
Historically, intelligence was expensive and scarce: in time (studying) and in the uneven genetic distribution of IQ. Smart people were high-status because they could understand and synthesise complex information, allowing them to make better decisions. The rest of us admired, envied or even deferred to them.
As AI systems become vastly more intelligent than any of us, the differences in human intelligence that matter so much today may start to feel negligible. Once everyone has access to real-time AI assistants that correct, complete, and elevate their thoughts, intelligence may stop feeling scarce. Everyone will sound smart. Everyone will have perfect memory, citation, and summary at their fingertips. Differences in raw brainpower will remain, but they may matter as much as differences in lighting do today: barely noticeable. You might be the dumbest person at the dinner table, but it won’t matter, because nobody will notice.
Human intelligence becomes more meaningful
But what if the opposite happens? What if human intelligence becomes even more important because of both how well and how much intelligent people use AI? In this world, AI doesn’t flatten differences in human IQ, but amplifies them. The smartest, most curious, most driven people adopt AI earliest and most effectively. They use it not just to get things done, but to compound knowledge, sharpen reasoning, and expand across disciplines. They start businesses faster, publish better papers, learn faster, and think more clearly. They become superhuman polymaths by entering a positive feedback loop that compounds over time: AI makes them better, so they use it more, and so it makes them even better. In this scenario, a new flywheel of intelligence could emerge (sketched as a toy model after the list):
Smart AI adopters outpace
Dumb AI adopters, who still outpace
Smart non-AI adopters, who in turn outpace
Dumb non-AI adopters, who fall further behind every year.
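A toy model makes the compounding visible. In the Python sketch below, every multiplier is an invented assumption chosen only to illustrate the flywheel, not an empirical estimate:

```python
# Toy flywheel model: effective output compounds yearly.
# All multipliers are assumptions chosen for illustration only.
cohorts = {
    "smart AI adopters":  1.25,  # high base ability, strong AI leverage
    "dumb AI adopters":   1.15,  # AI leverage partly offsets lower ability
    "smart non-adopters": 1.08,  # ability alone, no AI compounding
    "dumb non-adopters":  1.02,  # falls further behind every year
}

years = 10
for name, rate in cohorts.items():
    output = 1.0  # everyone starts from the same baseline
    for _ in range(years):
        output *= rate  # AI makes them better, so they use it more...
    print(f"{name:>20}: {output:.2f}x baseline after {years} years")
```

After ten years of compounding, the gap between the top and bottom cohorts in this toy model is roughly eightfold: small yearly differences in leverage, amplified relentlessly.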
Both of these futures start from the same point: AI becomes the stage on which all thinking happens. The question is whether human intelligence will still have main character energy, or quietly fade into the background.
Dichotomy 4: Intelligence-full or intelligence-insatiable
A final dichotomy in understanding AI's impact lies in how intelligence-hungry a task or domain truly is. Not all problems are equal in their appetite for cognition.
Intelligence-full
Some tasks are intelligence-full: they reach a performance ceiling beyond which additional intelligence offers diminishing returns. These are domains where the need for intelligence plateaus: once a task is solved at a sufficient level of reliability, accuracy, or speed, more intelligence adds little marginal value. Examples include optical character recognition (digitising handwriting), invoice reconciliation, payroll automation, and compliance analysis. Once the solution is good enough, smarter systems do not offer additional returns. Intelligence-full tasks often correlate with domains where both the number of distinct problems and the complexity of those problems are finite and stable over time. These tasks are bounded - they do not continuously generate new edge cases, nor do they benefit from increasing levels of cognitive sophistication once a certain threshold is met. Payroll, for example, does not need to run more than once per month per employee, and once handwriting recognition is solved, it is solved for good, as handwriting does not grow more complex over time.
Intelligence-insatiable
Other domains are intelligence-insatiable. These are problems where the appetite for intelligence is unbounded and scales with supply. The creative industries offer a clear example: video game design, storytelling, visual art, or advertising. As models become more capable, the bar for what counts as great content rises accordingly. In the 1920s, a silent film could captivate millions; today, audiences expect cinematic universes with multi-layered plots and incredible effects. The same is true for marketing campaigns, digital content, and world-building in games. The more intelligence we have at our disposal, the more novelty, complexity, and personalisation we demand.
Intelligence-insatiable domains often correlate with uncapped human desire, where progress can never truly fulfil demand but instead creates new reference points. As Chris Paik has argued, many of the most successful products map to the Bible’s seven deadly sins, revealing the bottomless appetites underpinning the human psyche:
Instagram maps to vanity: people always want to look more beautiful.
Uber to sloth: people always want to move with less effort.
Bitcoin to greed: people always want more wealth.
Tinder to lust: people always want more sexual stimulation.
Uber Eats to gluttony: people always want tastier food.
Twitter to wrath: people always want to argue and feel right.
These are domains where higher intelligence may make the product not just eternally better, but eternally more addictive, because the underlying need is never fully satisfied.
This dichotomy shows that intelligence demand curves differ across domains. In some areas, demand increases roughly linearly with intelligence - as in the creative fields or legal reasoning, where better models unlock incremental improvements. In others, the curve is exponential: a threshold is crossed and entirely new capabilities emerge, as in drug discovery or scientific simulation. And in yet others, the curve flattens: tasks are solved once and for all, and additional intelligence no longer moves the needle.
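These three regimes can be sketched as simple demand curves. In the Python sketch below, the functional forms and constants are illustrative assumptions, not measurements of any real domain:

```python
import math

def plateau_demand(intelligence: float) -> float:
    """Intelligence-full: value saturates once the task is 'solved'."""
    return 1.0 - math.exp(-intelligence)  # flattens toward a ceiling of 1.0

def linear_demand(intelligence: float) -> float:
    """Steady returns: each increment of intelligence adds similar value."""
    return 0.5 * intelligence

def threshold_demand(intelligence: float) -> float:
    """Emergent returns: little value until a threshold, then rapid growth."""
    return math.exp(intelligence - 5.0)  # negligible below ~5, explosive above

for i in range(0, 11, 2):
    print(f"intelligence={i:2d}  "
          f"plateau={plateau_demand(i):5.2f}  "
          f"linear={linear_demand(i):5.2f}  "
          f"threshold={threshold_demand(i):8.2f}")
```

The plateau curve saturates near its ceiling, the linear curve grows steadily, and the threshold curve stays negligible until a capability level is crossed - mirroring intelligence-full, steadily scaling, and emergent domains respectively.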
Which future will it be?
So which one will it be? Does AI make human intelligence boring or more important than ever? Will we live in AI utopia or dystopia? Will human touch be positive or negative? Will we face intelligence fullness or become insatiable for more? I’m inclined to give an answer along the lines of “the truth lies somewhere in the middle”. But clarity typically doesn’t start in the middle. It starts at the edges, in naming the extremes so we can navigate between them.


