(In case you aren’t following the topic, I was subtweeting this post from @slatestarcodex, who is in general excellent and brilliant, but who does not have technical understanding of AI.) https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/
Replying to @Meaningness @slatestarcodex
Specifically which part of Scott's post are you criticizing? I note that he never used the word "panic" anywhere in the post; I think you are unfairly attributing to him the kinds of views expressed in the more clueless headlines, which he wasn't guilty of.
Replying to @xuenay @slatestarcodex
The final paragraph seems to be the central point. It’s not “panic!” but seems to be saying “these reasons not to panic are wrong.” 1/2 pic.twitter.com/jyl5ZfgEAc
“The people who say there is no fire in the theatre, and that the images of a fire on the movie screen were not the same thing, so people panicking about that can calm down—they have not proven there is no fire in the theater.” 2/2
Replying to @Meaningness @slatestarcodex
That paragraph is just saying that AGI is possible in principle and that current research might eventually lead there, even though it might take "a hundred or a thousand years" (previous paragraph)?
Replying to @xuenay @slatestarcodex
If that were the point of the article, it wouldn’t need any examples. Instead he makes a lot of very strong claims about understanding, learning, thinking, reasoning, and knowledge, which are completely false.
Replying to @Meaningness @slatestarcodex
It wouldn't need any examples to convince *you*, but the many people who think that AGI is impossible even in principle would be unlikely to be persuaded without any new examples. Also, what strong claims?
Replying to @xuenay @slatestarcodex
This ascribes thinking, attempting, and invention to a program that is definitely not capable of any of those things. pic.twitter.com/IoSvyxs5bh
Replying to @Meaningness @slatestarcodex
People routinely use those words to describe all kinds of things they know are not capable of them. I've said my phone "is confused about where it is" when it lost its GPS signal. Intentional language doesn't imply belief in human equivalence.
Replying to @xuenay @slatestarcodex
But afaict Scott is very deliberately asserting that equivalence.
If his essay were “wow, it’s amusing to pretend that this spam generator is really thinking, although clearly it isn’t in the least!” everyone’s reaction would be “I guess if that makes you happy, whatever.”
Replying to @Meaningness @slatestarcodex
I don't really see why you would want to interpret Scott as making stronger claims than if he were just using intentional language in the normal way? If the normal meanings of words yield a more reasonable reading than unusual meanings, then go with the standard meaning?
Replying to @xuenay @slatestarcodex
First, this is not the normal way; the normal use of "thinking" is to denote thinking. In context, "the dishwasher thinks it's finished, but actually the motor is stuck" is perfectly understandable, but it is a metaphorical and humorous extension.