In drug development, where you have a very long sequence of filters searching for molecules that treat diseases, the most high-leverage way to reduce R&D costs per *successful* drug is to increase the predictive validity of the early screening steps.
It's true that drug discovery has been trending towards bigger and bigger screens for decades, without reducing R&D costs per successful drug at all. ("Eroom's Law.") But I think this is adequately explained by poor predictive validity.
In order to test whether your bigger, cheaper screen will help select better drug candidates, you have to measure its ability to predict outcomes at much later stages of the development process.
My impression is that this is hard to coordinate in large organizations with legacy technology; you have to get separate departments (say, validation and discovery) to integrate their data.
Data integration across departments in large orgs is a HARD human and technical problem. I used to work at Palantir; this was literally the whole job of our company, and our clients fought us tooth and nail.
"Make early stage drug discovery more predictive of later preclinical efficacy" is hard for Big Pharma to pull off, but because of institutional/organizational/technical-debt problems, *not* because it's intrinsically hard scientifically.
It's an *easy* statistical problem to evaluate how good your screening is at predicting efficacy outcomes, identify which screening steps are poor predictors, and try to improve them. It's just a hard *social* problem in big orgs.
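The statistical problem really is simple once the data is integrated. A minimal sketch, with invented compound data: score each early screen by its positive predictive value against a later-stage outcome, so poor predictors stand out. The screen names, records, and numbers here are all hypothetical.

```python
# Score each early screen by how well passing it predicts success at a
# later stage (here, efficacy in vivo). All data below is invented for
# illustration; in practice it would come from joining discovery and
# validation databases on compound ID.

def positive_predictive_value(screen_results, later_outcomes):
    """Fraction of a screen's hits that succeeded at the later stage."""
    hits = [later for passed, later in zip(screen_results, later_outcomes) if passed]
    return sum(hits) / len(hits) if hits else 0.0

# One entry per compound: did it pass each screen, and did it later show efficacy?
biochem_hits    = [True, True, True, True, False, False, True, False]
phenotypic_hits = [True, False, True, False, True, False, True, False]
in_vivo_success = [True, False, True, False, True, False, False, False]

for name, screen in [("biochemical", biochem_hits), ("phenotypic", phenotypic_hits)]:
    ppv = positive_predictive_value(screen, in_vivo_success)
    print(f"{name} screen PPV: {ppv:.2f}")
# In this toy dataset the phenotypic screen (PPV 0.75) is a better
# predictor than the biochemical one (PPV 0.40) -- so it's the
# biochemical step you'd target for improvement.
```

The same few lines of arithmetic generalize to any pair of adjacent pipeline stages; the hard part is getting both columns into one table.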
This seems like a classic example of "disruption" in the strict sense: there are innovations that big incumbent companies can't pursue, not because the people at those companies are stupid, but because the cost of switching their internal tech and processes is enormous.
A biotech company that's built on data integration from the ground up, such that each screening stage is optimizing for continuous improvement in *predictive validity*, not number of hits, and has predictive validity metrics as OKRs --
that kind of company would *actually* have incentives aligned to switch to improved screening methods as they become practical. The thing to optimize is not "cost per hit" but "cost per hit that succeeds at the next stage of testing."
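To see why the choice of metric matters, here is a toy comparison of two hypothetical screens. All costs, hit counts, and validation rates are invented; the point is only that the two metrics can rank the same screens in opposite orders.

```python
# "Cost per hit" vs. "cost per hit that succeeds at the next stage"
# for two hypothetical screens. Numbers are made up for illustration.

def cost_per_hit(total_cost, hits):
    return total_cost / hits

def cost_per_validated_hit(total_cost, hits, validation_rate):
    # validation_rate: fraction of hits that succeed at the next stage
    return total_cost / (hits * validation_rate)

# A cheap, noisy screen vs. a pricier but more predictive one.
cheap      = dict(total_cost=100_000, hits=500, validation_rate=0.02)
predictive = dict(total_cost=400_000, hits=200, validation_rate=0.25)

for name, s in [("cheap", cheap), ("predictive", predictive)]:
    print(name,
          "cost/hit:", round(cost_per_hit(s["total_cost"], s["hits"])),
          "cost/validated hit:",
          round(cost_per_validated_hit(s["total_cost"], s["hits"], s["validation_rate"])))
# The cheap screen wins on cost per hit ($200 vs. $2,000) but loses on
# cost per validated hit ($10,000 vs. $8,000) -- optimizing the first
# metric picks the worse screen.
```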
What we need is for clinical-stage investors to understand this logic. It's not about any one screening technology, which ultimately may succeed or fail in producing better clinical results. There are endless arguments about the validity of different screening or animal models.
The point is, the *general class* of improvements in screening platforms is where *all* the money is, and we need biotech companies structured end-to-end around predictive validity.
(Well-known examples of improvements in predictive validity: drugs validated against human genetic targets are more likely to succeed in the clinic. Also, compounds discovered through phenotypic screening are a majority of successful first-in-class drugs.)
"Optimize predictive validity" seems like really solid logic to me, and I expect it to seem common-sense to a lot of tech people and scientists, but I expect it sounds really "out there" to seasoned biotech execs, so I especially welcome critical feedback from them.