Although in this case the reasoning seems to go in the other direction: “It’s time to panic about AI because Reasons, and this is the best AI we’ve got, so people must work like that.”
(In case you aren’t following the topic, I was subtweeting this post from
@slatestarcodex, who is in general excellent and brilliant, but who does not have a technical understanding of AI.) https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/
Replying to @Meaningness @slatestarcodex
Specifically which part of Scott's post are you criticizing? I note that he never used the word "panic" anywhere in the post; I think you are unfairly attributing to him the kinds of views found in the more clueless headlines, of which he wasn't guilty.
In particular, while I haven't read them side by side and done an explicit comparison, I don't recall seeing anything in the SSC post that would have been contradicted by the Approximately Correct post.
Replying to @xuenay @slatestarcodex
David Chapman @Meaningness, quoting his own reply to @bho82:
"He doesn’t make a technical argument. He just says 'the output of this program superficially resembles things people do.' Which has been true of AI programs since the 1960s, and since then people have gotten excited about superficial resemblances that don’t mean anything."
Replying to @Meaningness @slatestarcodex
That seems like shifting goalposts from "Scott's argument is shown wrong by this Approximately Correct post" to "here's an unrelated strawman interpretation of what Scott was saying and why it's wrong".
Replying to @xuenay @slatestarcodex
Scott’s essay is a whole lot of this pattern. Lipton was pointing out that it’s basically a Markov sampler, and we know what those do, and it does not involve knowledge, understanding, reasoning, or thinking. pic.twitter.com/J8hbEOyWp5
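(Editorial aside, for readers unfamiliar with the term: a Markov text sampler generates each next word by sampling from the words observed to follow the current context in its training text, with no other state. A minimal first-order sketch in Python — the toy corpus here is purely illustrative and has nothing to do with GPT-2's actual training data or architecture:)

```python
import random
from collections import defaultdict

# Toy corpus, purely for illustration.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
).split()

# First-order Markov model: map each word to the words observed after it.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def sample(start, length, seed=0):
    """Generate text by repeatedly sampling a successor of the current word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        successors = transitions.get(word)
        if not successors:  # dead end: word never seen with a successor
            break
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)

print(sample("the", 10))
```

The output is locally plausible but carries no knowledge or understanding; whether GPT-2 is "basically" this, with a much richer conditioning context, is precisely what the thread is disputing.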
Replying to @Meaningness @slatestarcodex
Yes, and Scott was saying "this is a dumb process which does not involve knowledge etc. but gets the kinds of results which we thought required those, maybe those are needed for fewer tasks than we thought". No contradiction. pic.twitter.com/Pz4TP9oKpk
Replying to @xuenay @slatestarcodex
This makes strong claims about humans that are also false. Advocates can try to wriggle out by saying "well, we don’t have a good definition of thinking, reasoning, intelligence, etc." but that’s sophistry.
Replying to @Meaningness @slatestarcodex
Which claims does it make that are false?
Replying to @xuenay @slatestarcodex
All the ones of the form "it did X", where X is some sort of human mental act. It didn’t.