It's true that drug discovery has been trending towards bigger and bigger screens for decades, without reducing R&D costs per successful drug at all. ("Eroom's Law.") But I think this is adequately explained by poor predictive validity.
In order to test whether your bigger, cheaper screen will help select better drug candidates, you have to measure its ability to predict outcomes at much later stages of the development process.
My impression is that this is hard to coordinate in large organizations with legacy technology; you have to get separate departments (say, validation and discovery) to integrate their data.
Data integration across departments in large orgs is a HARD human and technical problem. I used to work at Palantir; this was literally the whole job of our company, and our clients fought us tooth and nail.
"Make early stage drug discovery more predictive of later preclinical efficacy" is hard for Big Pharma to pull off, but because of institutional/organizational/technical-debt problems, *not* because it's intrinsically hard scientifically.
It's an *easy* statistical problem to evaluate how good your screening is at predicting efficacy outcomes, identify which screening steps are poor predictors, and try to improve them. It's just a hard *social* problem in big orgs.
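The statistical evaluation described above can be sketched in a few lines. This is a minimal illustration, not a real pipeline: the stage names (`biochemical_hit`, `cell_assay_pass`, `animal_efficacy`) and the candidate records are hypothetical, and the metric shown is just positive predictive value (PPV) of each screening stage for a downstream outcome.

```python
# Sketch: given historical candidates, measure each screening stage's
# positive predictive value (PPV) for success at a later stage.
# Stage names and data below are hypothetical.

def stage_ppv(candidates, stage, outcome):
    """Fraction of candidates passing `stage` that also achieved `outcome`."""
    passed = [c for c in candidates if c[stage]]
    if not passed:
        return None
    return sum(1 for c in passed if c[outcome]) / len(passed)

candidates = [
    {"biochemical_hit": True, "cell_assay_pass": True,  "animal_efficacy": True},
    {"biochemical_hit": True, "cell_assay_pass": False, "animal_efficacy": False},
    {"biochemical_hit": True, "cell_assay_pass": True,  "animal_efficacy": False},
    {"biochemical_hit": True, "cell_assay_pass": False, "animal_efficacy": False},
]

# Which screening step is the weak predictor of downstream efficacy?
print(stage_ppv(candidates, "biochemical_hit", "animal_efficacy"))  # 0.25
print(stage_ppv(candidates, "cell_assay_pass", "animal_efficacy"))  # 0.5
```

The statistics are trivial; the hard part, as the thread says, is getting the discovery-stage and validation-stage records joined into one table in the first place.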
This seems like a classic example of "disruption" in the strict sense: there are innovations that big incumbent companies can't pursue, not because the people at those companies are stupid, but because the cost of switching their internal tech and processes is enormous.
A biotech company that's built on data integration from the ground up, such that each screening stage is optimizing for continuous improvement in *predictive validity*, not number of hits, and has predictive validity metrics as OKRs --
that kind of company would *actually* have incentives aligned to switch to improved screening methods as they become practical. The thing to optimize is not "cost per hit" but "cost per hit that succeeds at the next stage of testing."
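To make the metric shift concrete, here's a toy comparison with made-up numbers: a big, cheap screen can look better on cost-per-hit while being strictly worse on cost per hit that survives the next testing stage. Screen names, costs, and success rates are all hypothetical.

```python
# Toy illustration: "cost per hit" vs "cost per hit that succeeds at
# the next stage of testing". All numbers are invented.

def cost_per_hit(total_cost, hits):
    return total_cost / hits

def cost_per_validated_hit(total_cost, hits, next_stage_success_rate):
    return total_cost / (hits * next_stage_success_rate)

# Screen A: big and cheap, low predictive validity.
a_cost, a_hits, a_rate = 1_000_000, 500, 0.02
# Screen B: smaller and pricier per hit, high predictive validity.
b_cost, b_hits, b_rate = 1_000_000, 100, 0.20

print(cost_per_hit(a_cost, a_hits))                    # 2000.0   (A "wins")
print(cost_per_hit(b_cost, b_hits))                    # 10000.0
print(cost_per_validated_hit(a_cost, a_hits, a_rate))  # 100000.0
print(cost_per_validated_hit(b_cost, b_hits, b_rate))  # 50000.0  (B wins)
```

Optimizing the first metric rewards Screen A; optimizing the second rewards Screen B, which is the point of the thread.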
What we need is for clinical-stage investors to understand this logic. It's not about any one screening technology, which ultimately may succeed or fail in producing better clinical results. There are endless arguments about the validity of different screening or animal models.
The point is, the *general class* of improvements in screening platforms is where *all* the money is, and we need biotech companies structured end-to-end around predictive validity.
(Well-known examples of improvements in predictive validity: drugs validated against human genetic targets are more likely to succeed in the clinic. Also, compounds discovered through phenotypic screening are a majority of successful first-in-class drugs.)
"Optimize predictive validity" seems like really solid logic to me, and I expect it will read as common sense to a lot of tech people and scientists, but I suspect it sounds pretty "out there" to seasoned biotech execs, so I especially welcome critical feedback from them.