“Well, yeah, OK, so it’s a dumb algorithm that just splices together bits of text it found on the internet, but that’s all we humans do too, so it’s time to panic” is a nice illustration of my theory here about why people have ignorant opinions about AI: https://twitter.com/Meaningness/status/1096866752456060928
-
Although in this case the reasoning seems to go in the other direction: “It’s time to panic about AI because Reasons, and this is the best AI we’ve got, so people must work like that.”
-
(In case you aren’t following the topic, I was subtweeting this post from @slatestarcodex, who is in general excellent and brilliant, but who does not have a technical understanding of AI.) https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/
-
Replying to @Meaningness @slatestarcodex
Specifically which part of Scott's post are you criticizing? I note that he never used the word "panic" anywhere in the post; I think you are unfairly attributing to him the kind of views found in the more clueless headlines, which he wasn't guilty of.
-
In particular, while I haven't read them side by side and done an explicit comparison, I don't recall seeing anything in the SSC post that would have been contradicted by the Approximately Correct post.
-
Replying to @xuenay @slatestarcodex
David Chapman quote-tweeted his own earlier reply to @bho82:

“He doesn’t make a technical argument. He just says ‘the output of this program superficially resembles things people do.’ Which has been true of AI programs since the 1960s, and since then people have gotten excited about superficial resemblances that don’t mean anything.”
-
Replying to @Meaningness @slatestarcodex
That seems like shifting the goalposts from "Scott's argument is shown wrong by this Approximately Correct post" to "here's an unrelated strawman interpretation of what Scott was saying and why it's wrong".
1 reply 0 retweets 1 like -
Replying to @xuenay @slatestarcodex
Scott’s essay is a whole lot of this pattern. Lipton was pointing out that it’s basically a Markov sampler, and we know what those do, and it does not involve knowledge, understanding, reasoning, or thinking. pic.twitter.com/J8hbEOyWp5
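(For readers unfamiliar with the reference: a Markov-chain text sampler is the simplest version of the thing being gestured at here. It generates text purely from local co-occurrence statistics, with no representation of meaning. Below is a minimal bigram sketch in Python; the toy corpus and the one-word context window are illustrative assumptions, not anything from Lipton's post or from GPT-2 itself.)

```python
import random
from collections import defaultdict

# Toy corpus, purely illustrative; any plain text works the same way.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
).split()

# Build a table mapping each word to the words observed to follow it.
# This table is the entire "model": raw adjacency counts, no semantics.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def sample(start="the", length=12):
    """Generate text by repeatedly sampling an observed next word."""
    word, out = start, [start]
    for _ in range(length - 1):
        candidates = follows.get(word)
        if not candidates:  # dead end: this word has no recorded successor
            break
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(sample())  # e.g. "the dog sat on the rug and the cat saw the dog"
```

(Whether GPT-2 is fairly described this way was exactly the point in dispute; the sketch only illustrates what "a Markov sampler" does: it splices together fragments it has already seen, which is why its output can superficially resemble sensible text.)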
-
The contemporary ML version of the fundamental AI method for fooling yourself: pic.twitter.com/ST4hQoowbe