This is a massive straw man and doesn’t even remotely address the actual concerns that AI Safety researchers have.
-
Well, the author of this essay is apparently oblivious to the paperclip argument (not that SAI will want to "make paperclips" for reasons of its own, but because we told it to do so, not realizing...) and the other link is just an ad for your latest book.
-
This argument is already outdated by the recent achievement of AlphaZero. It taught itself chess and beat the next-best chess program in the world, easily. The worry is that a future A.I. may decide that there are far easier ways to 'win at chess' than through fair competition...
-
Thought this was an excellent article - at least in the last six or so paragraphs.
-
More worried about artificial intelligence error and applications. My Instagram app fails at least a half-dozen times daily; Snapchat is pushing our collective psyche into a Disney world; and frankly most of our software doesn't (SORRY AN UNEXPECTED ERROR HAS OCCURRED. TRY LATER...)
-
There isn’t an argument against AI that doesn’t also apply to having children.
-
...eagerly looking forward to the AI-written rebuttal.
-
I have to agree these are mostly straw-man counterarguments. I think most people are just saying that the potential for damage if you get this wrong is high, and unintended consequences are unpredictable, so be very careful to limit that downside if you can.
-
It could irreversibly change our society into a dystopia, where we have a few trillionaires and a society that cannot differentiate truth from falsehood (a Brave New World).
-
That there's irrational anxiety around AI takes nothing away from the fact that there's plenty to fear rationally. We need sentiments similar to those we shared on nuclear power. Our attitude must be inherently one of caution when it comes to the greatest power we will ever invent.
-
"The most super-powered AI could also have the ambitions of a three-toed sloth or the temperament of a panda bear because it will have whatever emotions we wish to give it. " This alone betrays so much ignorance.
-
First, who has EVER proposed anything like programming emotions into AI? AIs are programmed to have goals, not a human-like evolved psychology.
-
People are still confusing AI with Machine Learning, with good reason (no pun intended). Most Data Scientists, SW developers, and Data Architects can't even agree on the differences - the line is very fuzzy.
-
Fantastic stuff. Very interesting viewpoint. My only counter argument is that AI IS being developed by humans, so I would worry about AI machines taking on ugly human traits. Even a few bad actors could spell disaster. Maybe that’s my genes talking.
-
With greatest respect, AI, if truly advanced, would not be in need of paper clips.