For centuries, the logic of innovation was straightforward: act early, move fast, and learn faster. From the steam engine to the internet, first movers built the foundational infrastructure that defined their eras.
However, in today’s rapidly accelerating technological landscape, particularly with the advent of AI, this conventional wisdom is being challenged. Building on current infrastructure now carries the significant risk of obsolescence, burdening innovators with outdated code and misaligned learning.
Strategic patience, for the first time, may prove more advantageous than speed, suggesting an inversion of the traditional innovator’s logic.
Historically, action has been rewarded. The conventional view holds that markets favor the imperfect builder who iterates publicly over the contemplative observer. Early pioneers, despite using crude tools, established unassailable advantages through continuous iteration and scale. As Harvard Business School professor Clayton Christensen argued, learning by doing compounds, and the early operational struggles were precisely where dominant market positions were forged.
Yet, this conviction — that action guarantees dominance — is rooted in a linear world. Past industrial revolutions progressed on stable, incremental foundations. Today, however, founders build on constantly shifting ground, with evolving neural architectures and automated infrastructures that improve even during deployment.
This unprecedented velocity creates a critical strategic conflict: the technological lifespan of foundational platforms is now demonstrably shorter than most product development cycles. In an 18-month development cycle, the underlying AI models or genomic tools could be superseded multiple times. Consequently, “learning by doing” can quickly become technical debt, as the capital, time, and talent sunk into optimizing for temporary limitations accumulate into liabilities faster than the business grows.
In this new age of acceleration, a new governing principle emerges: the slope of improvement determines everything. As Eric Schmidt emphasizes, the slope in AI and advanced biotech is not merely steep, but hyper-exponential and self-reinforcing, with each advance unlocking new data and accelerating subsequent capabilities. The advantage now lies not in early initiation, but in perfectly timing the entry onto the right slope, where foundational capability, affordable compute, and data maturity converge.
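To make the timing logic concrete, consider a minimal, purely illustrative sketch; the growth constants and the 18-month horizon below are hypothetical assumptions, not measurements of any real platform. It contrasts a platform whose rate of improvement itself rises as each gain feeds back into data and tooling with a product team that locked in the platform's day-one capability and ships a year and a half later.

```python
# Illustrative sketch with hypothetical numbers: why the "slope of improvement" dominates timing.
# Platform capability grows at a rate that itself increases as capability feeds back into
# data and tooling (the self-reinforcing slope described above).

def platform_capability(months: int, c0: float = 1.0, k0: float = 0.05, alpha: float = 0.002) -> float:
    """Step a simple self-reinforcing growth model forward month by month."""
    capability, rate = c0, k0
    for _ in range(months):
        capability += rate * capability   # capability compounds at the current rate
        rate += alpha * capability        # each gain nudges the growth rate itself upward
    return capability

at_kickoff = platform_capability(0)   # what the team optimized against on day one
at_launch = platform_capability(18)   # what the platform offers after an 18-month build
print(f"Platform improved {at_launch / at_kickoff:.1f}x during a single development cycle")
```

Under these toy constants, the platform ends up several times more capable than the version the product was optimized against, and whatever work encoded the original limitations carries that gap forward as debt.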
Furthermore, this steepness is increasingly driven by a concentrated few platforms. Innovation is no longer a decentralized phenomenon; it is largely dictated by a handful of foundational platform architects, such as OpenAI and Google DeepMind in AI, or major cloud-based biofoundries. For most other companies, the true strategic determinant is the velocity of these external platforms, rendering internal roadmaps strategically irrelevant if they cannot match this pace.
This dynamic creates a new innovator’s dilemma: the risk is not just competition catching up, but that accumulated knowledge will not transfer forward. Optimizing for current limitations, such as a specific context window or a protein folding algorithm, can lead to skills and codebases fundamentally incompatible with future, superior models.
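As a concrete, purely hypothetical illustration of this kind of non-transferable optimization, imagine a retrieval pipeline engineered around one model's hard 8,000-token context window; every constant and function below is invented for the example, and most of it becomes dead weight the moment a model with a far larger window ships.

```python
# Hypothetical example of optimizing for a temporary limitation: a context-packing routine
# built around a hard 8,000-token window. The chunk sizes, the greedy packing, and the
# silent truncation exist only to work around that limit, and the skills and code that
# accumulate here do not transfer once the limit disappears.

MAX_CONTEXT_TOKENS = 8_000   # assumption baked into every downstream design decision
CHUNK_TOKENS = 512           # tuned so a dozen chunks plus the prompt squeeze under the cap

def pack_context(ranked_chunks: list[str], count_tokens) -> list[str]:
    """Greedily pack ranked chunks until the hard-coded window is full."""
    packed, used = [], 0
    for chunk in ranked_chunks:
        cost = count_tokens(chunk)
        if used + cost > MAX_CONTEXT_TOKENS:
            break            # everything after this point is silently dropped
        packed.append(chunk)
        used += cost
    return packed
```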
This dynamism creates a precise danger zone: starting too early risks building on a foundation that will quickly be superseded. The cost of not waiting is often the irrecoverable expense of optimizing for a platform on the brink of obsolescence. Conversely, starting too late may leave the core technology locked down by platform giants, with no room for a startup to establish a defensible niche.
For investors, this dilemma reframes due diligence: speed and founder hustle are no longer sufficient signals of defensibility. Timing, slope analysis, and adaptability become core variables.
This brings us to the power of “Active Patience,” which minimizes the risk of obsolescence and misaligned learning. The argument for waiting rests on the idea that the slope of improvement in Deep Tech is becoming so steep that it fundamentally alters the value of early action. When tools improve exponentially, building on today’s infrastructure risks being “stranded tomorrow” with obsolete code and skills.
Strategic patience maximizes capital efficiency, ensuring that investments yield returns that are not eroded by obsolescence. By timing entry along the improvement curve, capital compounds faster and generates greater competitive advantage. Patience also allows founders to build durable, defensible moats by aligning with the standards that ultimately prevail, avoiding stranded code and costly system rewrites. The focus shifts from rapid iteration to precision and flexibility: moats built on alignment with the final platform and on being, to the extent possible, model- and even platform-agnostic. Finally, active patience offers strategic clarity and risk mitigation, allowing innovators to discern clear patterns amid market hype, de-risk ventures, and time their entry for the moment when compute, data, and biology converge.
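One way to picture the model- and platform-agnostic posture described above is a thin adapter layer between product logic and whichever platform currently sits on the steepest slope. The sketch below is a minimal, hypothetical illustration; the provider classes are placeholders, not real APIs.

```python
# Minimal sketch of a platform-agnostic seam (hypothetical interfaces and provider names).
# Product code depends only on a small protocol, so swapping the underlying platform is a
# configuration change at the call site rather than a system rewrite.

from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderAClient:
    """Placeholder adapter for one platform's API."""
    def complete(self, prompt: str) -> str:
        return f"[provider A] {prompt}"

class ProviderBClient:
    """Placeholder adapter for a rival or successor platform."""
    def complete(self, prompt: str) -> str:
        return f"[provider B] {prompt}"

def summarize_risks(report: str, model: TextModel) -> str:
    # The business logic never mentions a vendor, a model version, or a context limit.
    return model.complete(f"Summarize the key risks in:\n{report}")

# When the slope shifts, switching platforms is a one-line change:
print(summarize_risks("Q3 pipeline review ...", ProviderBClient()))
```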
While the shift from speed to timing is real, founders and investors must also recognize a third possibility: the innovation slope may plateau. Evidence across AI and biotech suggests current architectures, data supplies, and compute economics could be reaching diminishing returns, as the largest players increasingly partake in a closed capital loop in which funding, data, and infrastructure circulate within the same few entities, reinforcing their control while crowding out early-stage, independent innovation. Eventually, model scaling might show flattening gains, while hardware and energy constraints slow iteration.