A seasoned cab driver knows every shortcut in the city, every traffic pattern, every dead-end street. But give a college student a smartphone with GPS, and suddenly decades of driving expertise feel less critical. What happens when this same dynamic plays out in industries like healthcare, law, or finance? I’ve lately been struck by how software and AI are rewriting the rules of expertise, challenging the notion that mastery must be earned through years of experience. Over the past few decades, we’ve watched software companies dramatically transform industries, a phenomenon Marc Andreessen famously described in his essay “Why Software Is Eating the World.” Much has been written about what makes software so effective and so disruptive: the low upfront capital required to produce and scale a product to massive proportions, which makes it possible to disrupt established incumbents with little more than a good idea and a few lines of code.
But something I think doesn’t get talked about enough is the role of expertise (or the lack of it) in starting and maintaining a competitive advantage. Unlike many businesses in traditional sectors, software and its related disciplines (I’m going to lump AI into this) are so new that the knowledge base required to execute is less deep; expertise hasn’t had enough time to entrench itself as a barrier to entry. What’s more, the recent arrival of LLMs plays a major role in decentralizing and externalizing expertise. This creates an unusual dynamic where a person with relatively little experience (e.g. a fresh college graduate with a big idea) can outperform an incumbent with decades of experience, in professional disciplines where this has never been possible before.
The Importance of Expertise is Proportional to the Duration of Knowledge Accumulation
To put things into historical perspective, computer science emerged as a discipline in the 1960s. The internet started to take shape in the 1980s and truly came to fruition in the 1990s. And the tools serving as the foundation of modern web stacks today, like React and Kubernetes, were only released in 2013 and 2014, respectively. Similarly, the term ‘machine learning’ was coined in 1959, and the current AI boom is largely attributed to the transformer architecture behind OpenAI’s ChatGPT, introduced in the landmark 2017 paper Attention Is All You Need.
This may not be an entirely fair comparison, but let’s briefly contrast this against other fields of study. The formal study of mathematics is thought to have its origins around 3000 BCE with the advent of Sumerian number systems, and it has cumulatively grown in breadth and depth to form an atlas of interconnected specialties. This atlas has grown so vast that historians consider it unlikely that anyone can become a true mathematical generalist in the modern era. Similarly, the study of medicine is thought to have begun around 2600 BCE and has since developed over five millennia into a wide-ranging set of medical specialties. Highly specialized fields like neurosurgery can require up to 14 years of training after college before one is able to practice.
I think it’s fair to say that most of what we know about computer science, the internet, and machine learning has been developed within a single lifetime. This creates a strange but uniquely powerful situation: in these fields, the “experts” are still figuring things out, and newcomers have a much easier time catching up or even surpassing them. In other words, the absence of a significant expertise gap accelerates the rate of innovation.
Expertise as a Barrier
This matters because expertise is a powerful moat. Expertise is often weaponized by older incumbents to shut down promising upstarts. Everyone’s seen the trope: the senior exec scoffing at the wild idea of the bright-eyed newcomer, dismissing it with, “That’s not how things are done.”
But more importantly, domain expertise is expensive. Traditional professional fields like medicine, law, and finance are some of the most expertise-heavy fields, and they’re also among the most expensive to disrupt. The societal contract in these fields is simple: we pay to delegate decisions to highly specialized professionals who possess all the external indicators of expertise, namely advanced degrees and decades of work experience. And these professionals aren’t cheap. Physicians, for instance, consistently dominate lists of the highest-paid occupations in the U.S.
This reliance on expensive, skilled labor drives up costs, which serves as a barrier to entry for startups. As a result, many successful healthcare startups of the modern era operate on the fringes of care delivery, in healthcare IT or healthcare HR: electronic medical records, insurance tech, or licensing. Few dare to tackle core care delivery because the cost of entry is prohibitively high.
I think a useful metric for the viability of a startup (or any company, really) is the amount of revenue earned per skilled decision made. This ratio is a rough approximation of a company’s operating margins: the numerator is income from sales or contracts, while the denominator stands in for the operating expense of the skilled hires whose decisions drive that revenue. Once again, I think others have made the case for the numerator scaling exceptionally well in software companies. Indeed, the marginal cost of producing and distributing another copy of a piece of software is nearly zero. Once you have a winning product, you can saturate its reach in the market nearly instantaneously with relatively little additional cost.
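To make the metric concrete, here is a minimal sketch in Python. The companies, revenue figures, and decision counts are entirely hypothetical assumptions for illustration; the point is only that a software product sold many times over spreads its revenue across far fewer skilled decisions than a services business whose every engagement demands expert judgment.

```python
# Minimal sketch of the "revenue per skilled decision" metric.
# All names and numbers below are hypothetical, for illustration only.

def revenue_per_skilled_decision(annual_revenue: float, skilled_decisions: int) -> float:
    """Rough proxy: how much revenue each expensive, expert-made decision supports."""
    return annual_revenue / skilled_decisions

# Hypothetical services firm: every client engagement requires many expert decisions.
services_firm = revenue_per_skilled_decision(annual_revenue=10_000_000, skilled_decisions=500_000)

# Hypothetical software company: the same product is sold repeatedly, so the
# count of skilled decisions (design, engineering) grows far more slowly.
software_company = revenue_per_skilled_decision(annual_revenue=10_000_000, skilled_decisions=50_000)

print(f"Services firm:    ${services_firm:,.0f} per skilled decision")
print(f"Software company: ${software_company:,.0f} per skilled decision")
```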
However, I think an oft-overlooked factor is that software can also shrink the denominator by reducing the cost of labor, deskilling roles that traditionally required expensive domain expertise. Consider the illustrative example of Uber or Lyft. It’s true that they innovated by connecting fragmented service providers (drivers) and disparate service users (riders) in a single centralized app. However, I think the importance of infrastructure like Google Maps and Waze in eliminating the expertise needed to navigate complex road systems is understated. Suddenly, a college student with a smartphone might rival a seasoned cab driver in performance and efficiency. Expertise decentralized and externalized.
LLMs are Democratizing Professional Expertise
Large language models (LLMs) are democratizing access to professional knowledge in medicine, law, accounting, and beyond. These tools aren’t perfect, but they’re astonishingly capable. A query about tax law in Arizona returns instantly helpful and largely correct insight on niche topics that previously might have required a tax accountant to answer.
In healthcare, reasoning models like OpenAI’s o1-preview have been shown to outperform human clinicians in certain medical reasoning tasks. Some limitations are worth stating up front: this does not mean these models can replace physicians in all aspects of their work. First, ‘medical reasoning’ as measured by correctly identifying a diagnosis or management strategy from a pre-provided patient history may not represent the true nature of a clinician’s work, which often involves obtaining those histories firsthand and making decisions in the face of incomplete or contradictory patient data. What’s more, the scenarios tested were retrieved from cases published online, which are usually cherry-picked or otherwise curated to present a clear or interesting narrative about rare diseases or diagnoses, rather than the common “bread and butter” complaints seen by most practicing physicians. That said, I believe the reasoning capabilities of models like o1-preview in professional fields like medicine do represent a significant democratization of domain expertise.
As these models become more and more capable, I believe traditional professional industries will converge on a new kind of service model, where a single trained reasoning system might handle a case load that previously required armies of professionals. Think of a doctor’s expertise distilled into a system that can diagnose and treat patients globally, or a senior lawyer’s knowledge encoded into a model that can draft contracts for thousands of clients simultaneously.
This is especially attractive because bringing together large numbers of patients and their outcomes in a systematic way will produce a continuous stream of data that can feed back into the system to improve its capabilities. Over the course of a single month, such a model might encounter and learn from more patients than a single medical provider sees in a lifetime. Other positive knock-on effects might include improved access: rural and underserved areas may benefit from the professional outputs of such a system where human counterparts are scarce or unwilling to work.
There are obvious limits to the scope of expertise that these models can decentralize; after all, it’s unlikely that some tasks (e.g. taking patient histories, performing surgery) will change without further innovations. However, I don’t see a scenario where current businesses are not incentivized to use these models wherever they can perform as well as (or better than) a human counterpart: taking notes asynchronously, billing for medical services, creating treatment plans, or preventing errors at handoff from one team to another. In other words, this shift in expertise will lead to a decoupling of skillsets and tasks in healthcare, with clinicians shifting toward work that is less structured or less amenable to the outputs of reasoning models.
In posts to come, I’ll explore the specific sub-areas mentioned above (e.g. medical billing, insurance) where the decentralization of expertise is most salient, and hence where the opportunities for software to penetrate are greatest. Stay tuned!