AI uncertainty and the challenge of problem selection

This article originally appeared in Forbes.

Never before has choosing what to work on been more confusing for founders than it is today. Proclamations about the future of AI range from the bold to the measured. Dario Amodei, CEO of Anthropic, believes that most people underestimate both the upsides of AI and the risks. His peer, Sam Altman, CEO of OpenAI, is more bullish, claiming that we're on the cusp of building artificial general intelligence (AGI)—AI with the ability to mimic the cognitive abilities of the human brain.

The truth is that we don’t know what the future of AI will be. And while founders try to decipher what to work on, venture capitalists and other private capital dealmakers face a similar dilemma: evaluating which investments will ultimately become long-term, durable successes.

The questions around AI’s trajectory pose a genuine challenge because a lot of problems that a startup tries to solve today might become entirely irrelevant, depending on what you assume about the evolution of AI or the timeline toward achieving AGI. This uncertainty greatly complicates problem selection for founders and deal viability for investors. No one has a crystal ball that can definitively predict what will happen next in this rapidly moving space.

Choosing AI-agnostic problems

With so much that remains unclear, a rigorous approach to problem selection should be agnostic to the timeline of AI advancement. The strategy shouldn’t care whether AGI happens in three years, five years, 10 years or never within our lifetimes. If AI progresses quickly, that benefits the company solving the problem. But if AGI never arrives, there's still a valuable business to be built around a still-worthwhile problem.

So, what kinds of problems are agnostic to the rate of AI progress? The closer a problem is to a core, fundamental need of a consumer or business, the "safer" it is. In contrast, the more levels of abstraction away a problem sits (for example, solving the problems of someone who reports to someone who reports to a department that reports to the CEO), the riskier it is to work on.

As the world approaches the event horizon of AGI, it would be misguided to choose problems that are predicated on the assumption that any particular person's job will still exist.

The fundamental needs of consumers and businesses

Finding AI-agnostic problems to solve in B2C is somewhat easier. Many consumer problems are, by definition, fundamental because it’s hard to get too many levels of abstraction away from their core needs. For example, consumers need to eat, get from point A to point B and get medical help when needed. Although the solutions to these problems might radically change with AI, the problems themselves are unlikely to.

In B2B, and particularly SaaS, things are more complicated. The core, fundamental needs at a business level generally fall into two categories:

1. Directly delivering (i.e., automating) the labor output of a core function.

2. Delivering what goes into the cost of goods sold.

Most B2B SaaS solves a human problem faced by a user, which assumes the job of the user exists. The long-term challenge is that as AI is able to do more and more of the work, it makes less and less sense to buy software that serves humans. Instead, companies will begin buying an AI worker that does the job directly.

Shifting the SaaS mindset toward AI workers

My prediction is that as we get closer to the event horizon of AGI, every company currently selling software to a job or department will feel "convergent pressure" to pivot into delivering the AI agent that does the job (or the entire department) itself, or risk being rendered obsolete by selling software tools to a shrinking user base.

Because there will be fewer and fewer seats to sell to, seat-based SaaS revenue will make less and less sense. Instead, we’ll see companies begin to charge for the consumption of, or the value created by, AI workers.

When this happens, there will be competition among players who previously didn’t compete. The question becomes: Which companies have built a strong enough advantage, moat or head start to be the winner in their respective AI worker category?

A smart, long-term aspiration for any SaaS company today is to eventually automate the job or department they serve. That's the most AGI-agnostic version of the problem they might eventually be forced to solve. This isn’t a vision that needs to be acted on immediately—and, frankly, it probably isn’t technically feasible for most jobs yet—but it's both prescient and honest to hold the vision as the eventual goal.

Founders and executive teams (and the VCs who fund them) should prepare for this future by starting to consider how their short-term strategy will eventually accrue the advantages their company needs to become the winning AI player in its space.

author
Ray Zhou
Co-Founder