I'm not a doomsday AI guy at all, but I did find your book too dismissive of their claims. Bostrom's Superintelligence is worth reading
-
Why are you so dismissive of AI risks? Can you please list serious AI literature that you’ve read/engaged with (eg journal articles, books like Bostrom’s Superintelligence). I want to be proved wrong on thinking you’re being glib
-
The literature I've seen tends to just assume a model where intelligence has been the key bottleneck for most human progress, and where minds are a generic 'intelligence algorithm' attached to arbitrary goals, neither of which seem at all obvious. (Do you disagree?)
-
If you even start with these 3 you can't help but take the risks more seriously than @sapinker does. Stuart Russell: https://futureoflife.org/data/documents/research_priorities.pdf … Bostrom: https://nickbostrom.com/ethics/artificial-intelligence.pdf … Yudkowsky (@ESYudkowsky): https://intelligence.org/files/AIPosNegFactor.pdf …
-
I'm very familiar with the arguments on this. I maintain the assumptions I mention are very insufficiently defended, & are largely just taken as given. I agree the issue is potentially real but would like to see far more investigation into validity of these key supporting claims.
-
This is the whole point: the field of AI research is full of uncertainty. It could go really well for humanity, or equally it could go terribly. Given the downside risks it’s important to a) take AI risks seriously and b) do more research to remove the uncertainty
-
If there’s even a 0.1% chance of an existential risk then it is worth taking seriously. Yes? The expected value covers not just us but all future generations who may never see the light of day (read Derek Parfit’s On What Matters for why future generations are morally relevant)
New conversation
This is from 2014
-
If you don’t understand AI concerns it’s irresponsible to dismiss them.
-
Roko's Basilisk is to AI risk mitigation as astrology is to astronomy.
-
"from the guys who think you should worry about AI turning you into paperclips" dismissing Bostrom like that totally disqualifies one as having put any thought into this issue.
End of conversation
New conversation
That article's four years old, dude.
-
They mentioned it on HBO’s Silicon Valley recently so now everyone is dredging this up.
End of conversation
New conversation
Reminds me of Pascal's Wager.
-
It's not new - that article is from 2014 and the actual thought experiment is from ~2010.
-
I've been fighting for good AI to have the right to be developed free from regulation, because it's clear major corporations are developing evil AI while at the same time trying to pass regulations to prevent competitors from developing a good AI to discover what the bad AI did.
-
This is something to take seriously: if DNC builds AI, it will destroy us, as promised. – at Mt. Calvary Cemetery