Why there will never* be an Apple of the life sciences
*Never say never
In the last century, humanity mastered computation and data storage. In this century, humanity will master biology. It’s easy to believe that biological mastery will change the kinds of companies we can build.
This is why I’m always tempted by the comparisons we might draw between the life sciences and the computer industry. It is dangerous terrain. It’s easy to say things that sound smart, but are actually…not.
So let’s start with a simple question: what would it mean to build the “Apple of the life sciences,” and will it ever be possible?
You may have heard that Apple, Inc. is currently the world’s most valuable company, an oft-cited fact that still fails to convey Apple’s full power. I prefer this one: Apple’s iPhone accounts for an estimated one in ten handsets sold, but captures a third of the market’s revenue and nearly 70% of its profits.
No matter what you do in the smartphone business, all the dollars seem to find their way back to Apple.
The computer industry more broadly has a reputation for winner-take-all dynamics. The original boogeyman was IBM. In desktop PCs it was Intel and Microsoft. In online advertising it’s Google and Facebook. In retail, cloud, and logistics, it’s Amazon.
What makes this kind of dominance possible? Does it exist in the life sciences?
I don’t think it does. We certainly have our monopolists — sequencing comes to mind, as do the many sole-source drugs whose high prices drive so much antagonism. But there is nothing in biotech like the industry-wide leverage that the computer oligarchs wield.
Here are two thoughts on why that might be.
No technology stack
Computers are built from a handful of components, all fabricated on silicon and glass: microprocessor, memory, storage, radio, display.
Applications use a handful of functions, all built into modern computers: scheduling, file system, memory management, networking, user interface.
Internet services connect users to data, or to one another, using a suite of applications: communications, text, photo & video, user identity.
These technologies form a stack. The big component companies (Intel, Samsung, Corning) control the component level. The operating system vendors (Apple, Google, Microsoft) control the application level. The services companies (Google, Facebook, Amazon) dominate the top layer.
Compare this orderly tech stack to the situation in the life sciences. Right after you stop laughing.
There’s no comparison to be made.
Drugs? Until we invent nanites, drugs will be a collection of small molecules, antibodies, cell therapies, and any other clever packages we come up with. Drug manufacturing? Without a replicator, that’s a hodgepodge of chemical synthesis and tissue culture, all highly customized for each drug. Diagnostics? There’s no common platform that can probe nucleic acids and proteins and intact cells and tissues. Sorry, tricorder fans.
The chaotic technical landscape is mirrored in the business landscape. There’s no hegemonic drug manufacturer, the way TSMC rules chip fabrication. There’s no single test we all receive, nor a drug we all take, the way everyone buys a smartphone. Perhaps there never will be.
Subsegments, not super-segments
Back in the 1940s, computers were mainframes, a market of “maybe five computers.” By the 1970s, minicomputers were making rapid calculations for businesses around the world, but there was still “no reason anyone would want a computer in their home.” By the late 1990s, there was a “computer on every desk and in every home” that could afford one. And now we all have the internet in our pockets, on our coffee tables, and in our cars. Software is taking over retail, transportation, and hospitality.
Computing is like a bizarro Russian doll, where every new doll you open has a bigger doll inside it.
Life sciences does not work this way. Healthcare is the only industry that has always reached every human, eventually. As medicine advances, each new therapy and diagnostic applies to a subsegment of an existing market, almost by definition. A new cancer therapy, for example, will be approved for one branch of the standard of care for a specific cancer. It takes years to expand the use of the drug, and therefore the addressable market.
But maybe just wait?
To summarize: In the life sciences, there’s no common technology stack, so every new product is totally custom. There’s also no larger user base waiting to be tapped that would expand each new product’s market.
Taken as a prediction of the future, those observations imply that the life sciences will continue to be a sea of loosely related technologies, with no source of common scale or unifying architecture to enable faster product development.
Maybe that’s true. But.
It’s always possible that the life sciences today are where computing was back in the 1950s: a community of sophisticated technologists working with tools that history would shortly prove were stunningly crude. Perhaps the first layer in the life science stack — our transistor — is just around the corner.
What would the first layer in a new life science stack look like? Universal vaccines, maybe. Or using diagnostic sequencing to replace a combination of PCR, clinical chemistry, and pathology. These are both examples of super-segmentation. They would represent a reversal of the trend toward ever-smaller segmentation.
I won’t claim that this kind of transition is inevitable. But if it’s possible, it sure seems like this is the century we’d see it begin.
If you liked this, please clap so I know what’s working. If you want to know when new posts drop, sign up for my weekly email.