Chilling insight from a colleague: current tech founders in AI space are relatively altruistic, but as field matures the business CEOs will take over and make it way more profit-oriented. Happened in several other fields. This might be the friendly good old days of AI.
Replying to @anderssandberg
Only a problem if one equates profit with bad/evil. Also pretty sure current tech founders aren't completely disinterested in profit-making or they'd all just create NGOs.
Replying to @sebkrier
I think profits are great. But what are they used for? I'd rather see them used for Mars, solving ageing, renewable energy, or fixing incarceration than for maximizing shareholder value.
Replying to @anderssandberg @sebkrier
Going to Mars, solving aging, or creating renewable energy would be hugely profitable once achieved, but investing in *developing* such technologies is rarely the most profitable use of, say, a million dollars over a five-year period.
I'm starting to think that R&D must be "subsidized" by something besides financial motive. Whether that's "scientists + engineers taking below-market wages or spending their own savings to build a cool thing" or "taxpayer dollars" or "investors who want the cool thing to exist".
or "executives at large firms who want the cool thing to exist"
It doesn't have to be an altruistic motive -- wanting to personally go to Mars or live longer is a selfish motive! As is wanting to work on interesting problems with fun people! But it's intrinsic interest in the thing for its own sake rather than just ROI.
In principle, the ROI on research can be spectacularly good. In practice, the point at which even the most "radical" professional investors shell out is usually long after the basic premise has been de-risked a LOT, usually in academia.
Not quite sure what you’re implying. BenevolentAI’s business model really incentivizes them to sell before they prove their basic premise (“machine learning on a text database can predict drug effectiveness well enough to improve on conventional drug discovery”).
Let me make up an example of how this is an epistemic “near occasion of sin.” How do we know their knowledge graph actually predicts drug targets accurately? The easiest thing you might do is give an accuracy number: their algorithm spits out a list of targets that overlap 99%...
Replying to @s_r_constantin @skrish_13 and
with the real known drug targets. That sounds good, right? But their dataset includes the research literature! They’re “predicting” targets *after* scientists have already discovered them! Unless their sample comes with a time cutoff.
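A minimal sketch of the leakage problem described above, with made-up data and a toy lookup "model" (all names here are hypothetical, not BenevolentAI's actual method): if the evaluation set overlaps the literature the model was trained on, it "predicts" targets scientists already published; a time cutoff prevents that.

```python
# Hypothetical (drug, target) pairs, each tagged with the year the
# association appeared in the research literature.
literature = {
    ("drugA", "EGFR"): 2010,
    ("drugB", "BRAF"): 2012,
    ("drugC", "JAK2"): 2015,
    ("drugD", "ALK"):  2017,
}

known_targets = {"drugA": "EGFR", "drugB": "BRAF", "drugC": "JAK2", "drugD": "ALK"}

def knowledge_graph_predict(train_pairs, query_drug):
    """Toy 'model': just looks the answer up in its training literature."""
    for drug, target in train_pairs:
        if drug == query_drug:
            return target
    return None

# Leaky evaluation: train on ALL literature, then "predict" those same targets.
leaky_train = set(literature)
leaky_hits = sum(
    knowledge_graph_predict(leaky_train, d) == t for d, t in known_targets.items()
)
print(f"leaky accuracy: {leaky_hits}/{len(known_targets)}")  # 4/4 -- looks perfect

# Honest evaluation: time cutoff. Train only on pre-cutoff literature,
# test only on targets discovered afterwards.
cutoff = 2014
train = {pair for pair, year in literature.items() if year < cutoff}
test = {d: t for (d, t), year in literature.items() if year >= cutoff}
hits = sum(knowledge_graph_predict(train, d) == t for d, t in test.items())
print(f"time-split accuracy: {hits}/{len(test)}")  # 0/2 -- the lookup can't cheat
```

The leaky setup scores perfectly even though the "model" is pure memorization; the time-split score exposes that it has no predictive power.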
Replying to @s_r_constantin @skrish_13 and
I’m not saying they did a cheat like this; I would have no way of knowing. I’m saying it’s *super* easy to sweep under the rug in a 10-slide deck to AstraZeneca, if there’s nobody in the room who knows statistics.