I find that the more expert someone is in machine learning, the less worried they are about (or even interested in) ideas of existential risk
Replying to @benedictevans
Perhaps they have a professional (or financial) incentive to protect?
Replying to @rabois
No, they don’t think we are anywhere close to anything working
Replying to @benedictevans
This is also true of experts in almost any field. Nobody in real estate believed Opendoor was possible. Or in payments, Square or Stripe.
Replying to @rabois
That’s a false equivalence. Rather, ask if anyone in tech/internet thought Opendoor or Square were possible.
Replying to @benedictevans @rabois
It’s the people who would actually build the killer AI that say ‘we are nowhere near doing that’
Replying to @benedictevans
False. You just have the wrong ones in your portfolio :)
Replying to @benedictevans
In today's news: https://venturebeat.com/2017/07/25/khosla-ventures-leads-50-million-investment-in-vicarious-ai-tech/amp/
Replying to @rabois @benedictevans
AI researchers have been trying to emulate the brain for 30+ years. What makes this time different? :)
Niraj, I hear ya. It's a false journey to emulate the human brain. It is IA (intelligence amplification), not AI. Makes sense seen the right way
