“Well, yeah, OK, so it’s a dumb algorithm that just splices together bits of text it found on the internet, but that’s all we humans do too, so it’s time to panic” is a nice illustration of my theory here about why people have ignorant opinions about AI: https://twitter.com/Meaningness/status/1096866752456060928
Although in this case the reasoning seems to go in the other direction: “It’s time to panic about AI because Reasons, and this is the best AI we’ve got, so people must work like that.”
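For readers who want a concrete picture of what literally “splicing together bits of text it found” would mean, here is a minimal sketch of that caricature as a word-level Markov chain. This is an illustrative toy, not how GPT-2 actually works (GPT-2 is a neural language model, not a lookup table), but it shows the kind of mechanism the quoted dismissal describes.

```python
# Toy word-level Markov chain: glue together fragments of a source text
# by always picking a continuation that was actually seen in the corpus.
# Illustrative caricature only; GPT-2 does not work this way.
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words that followed it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict, seed: str, length: int = 20) -> str:
    """Walk the chain, splicing together bits of the source text."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)  # any continuation seen in the corpus
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(build_chain(corpus), seed="the"))
```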
(In case you aren’t following the topic, I was subtweeting this post from @slatestarcodex, who is in general excellent and brilliant, but who does not have a technical understanding of AI.) https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/
Replying to @Meaningness @slatestarcodex
Specifically, which part of Scott’s post are you criticizing? I note that he never used the word “panic” anywhere in the post; I think you are unfairly attributing to him the kinds of views found in the more clueless headlines, which he wasn’t guilty of.
Replying to @xuenay @slatestarcodex
The final paragraph seems to be the central point. It’s not “panic!” but seems to be saying “these reasons not to panic are wrong.” 1/2 pic.twitter.com/jyl5ZfgEAc
“The people who say there is no fire in the theater, and that images of a fire on the movie screen are not the same thing as a real fire, so people panicking about that can calm down: they have not proven there is no fire in the theater.” 2/2
Replying to @Meaningness @slatestarcodex
That paragraph is just saying that AGI is possible in principle and that current research might eventually lead there, even though it might take "a hundred or a thousand years" (previous paragraph)?
Replying to @xuenay @slatestarcodex
If that were the point of the article, it wouldn’t need any examples. Instead he makes a lot of very strong claims about understanding, learning, thinking, reasoning, and knowledge, which are completely false.
Replying to @Meaningness @slatestarcodex
It wouldn't need any examples to convince *you*, but the many people who think that AGI is impossible even in principle would be unlikely to be persuaded without any new examples. Also, what strong claims?
Replying to @xuenay @slatestarcodex
This ascribes thinking, attempting, and invention to a program that is definitely not capable of any of those things. pic.twitter.com/IoSvyxs5bh
Other examples: “figured out,” “understands that,” “has skills,” “precision of thought,” “it learned what a gun permit was,” “notice,” etc. It can’t do any of those things!