Eliezer Yudkowsky (Verified account)
@ESYudkowsky

Ours is the era of inadequate AI alignment theory. Any other facts about this era are relatively unimportant, but sometimes I tweet about them anyway.

Joined June 2014

Tweets

  • © 2018 Twitter
  • About
  • Help Center
  • Terms
  • Privacy policy
  • Cookies
  • Ads info
    1. Daemon Todd‏ @daemontodd 9 Dec 2017
      Replying to @daemontodd @davidmanheim and

      I think it's reasonable to say that, all else equal, more intelligent systems will be better at self-improvement. Problem is, 1) all else is not equal, and 2) even if it were, the *rate* of self-improvement is what the FOOM claim is really all about. 2/

      1 reply 0 retweets 0 likes
    2. Daemon Todd‏ @daemontodd 9 Dec 2017
      Replying to @daemontodd @davidmanheim and

      Information-theoretically, the more intelligent a system is, the more complex its neural net (assuming that's the framework). Which means the more intelligence is required to modify the neural net in a positive direction. How do these counterbalance? 3/

      2 replies 0 retweets 0 likes
    3. Daemon Todd‏ @daemontodd 9 Dec 2017
      Replying to @daemontodd @davidmanheim and

      From this info alone, one can only conclude "we don't know", and then proceed to do robust information-theoretic approximations to try to find out. I haven't seen anyone address this question in a way that's at all robust, most significantly EY, who bears the burden of proof. 4/

      1 reply 0 retweets 0 likes
    4. Daemon Todd‏ @daemontodd 9 Dec 2017
      Replying to @daemontodd @davidmanheim and

      Regarding self-improvement, there are two more unproven claims. FOOM doesn't mean that AGI recursively improves -- it means it recursively improves *at a sufficiently high rate* and *with a sufficiently high sigmoid bound*. Neither are robustly argued at all, IMO. 5/5

      1 reply 0 retweets 1 like
    5. David Manheim‏ @davidmanheim 10 Dec 2017
      Replying to @daemontodd @robinhanson @ESYudkowsky

      And I'm looking at rate of improvement now, and seeing exponential growth - I've seen little argument for why that growth slows/stops. And I see little reason to think the sigmoid bound is different than overall limit to intelligence, which you admit is unlikely to be near-human.

      2 replies 0 retweets 1 like
    6. David Manheim‏ @davidmanheim 10 Dec 2017
      Replying to @davidmanheim @daemontodd and

      Note: I didn't say foom, and Eliezer's arguments don't require it. AGI that goes from human to superhuman in 1 hour could be unlikely (though I'm unsure why it would be,) but AGI that goes from human to superhuman in a month is still an extinction event.

      1 reply 0 retweets 2 likes
    7. Robin Hanson‏Verified account @robinhanson 10 Dec 2017
      Replying to @davidmanheim @daemontodd @ESYudkowsky

      A system going in a month from small on the global scale to taking over the world is a foom.

      1 reply 0 retweets 0 likes
    8. David Manheim‏ @davidmanheim 10 Dec 2017
      Replying to @robinhanson @daemontodd @ESYudkowsky

      OK, but at the current trajectory, (without acceleration,) the speed of transition in narrow domains for AI from human-equivalent to without conceivable human peer is measured in months, not years.

      1 reply 0 retweets 0 likes
    9. David Manheim‏ @davidmanheim 10 Dec 2017
      Replying to @davidmanheim @robinhanson and

      And as we've recently seen, an architecture adapted to a specific narrow task seems possible to be slightly broadened and improved rapidly. Your claim seems to be that in sufficiently general domains, there is a point where this stops being true - is that correct?

      1 reply 0 retweets 0 likes
    10. Robin Hanson‏Verified account @robinhanson 10 Dec 2017
      Replying to @davidmanheim @daemontodd @ESYudkowsky

We have long known of a few simple general tools. Sometimes new ones are discovered, & sometimes old ones are generalized a bit. But AGI requires a great many tools & details. It just won't be done via one or a few general tools.

      2 replies 0 retweets 2 likes
      Eliezer Yudkowsky‏Verified account @ESYudkowsky 10 Dec 2017
      Replying to @robinhanson @davidmanheim @daemontodd

      Alpha Zero was made more general by simplifying it, not by adding complexity, in a process that reduced its computational cost and sample complexity as well.

      9:46 AM - 10 Dec 2017
      2 replies 0 retweets 12 likes
        1. New conversation
        2. Robin Hanson‏Verified account @robinhanson 10 Dec 2017
          Replying to @ESYudkowsky @davidmanheim @daemontodd

          I don't disagree. But isn't simplifying often a route to generality?

          1 reply 0 retweets 2 likes
        3. Clint Ehrlich‏ @ClintEhrlich 10 Dec 2017
          Replying to @robinhanson @ESYudkowsky and

          Yes, but its efficacy on hard problems is evidence for foom, since generalization via simplification is the route most likely to produce sudden omni-domain increases in capability.

          2 replies 0 retweets 1 like
        4. Daemon Todd‏ @daemontodd 10 Dec 2017
          Replying to @ClintEhrlich @robinhanson and

          Chris, that isn't actually true. If an algorithm is written with optimal efficiency (leaving aside precisely what that means), then there's a sense in which the total "power" of that algorithm is determined by the # of bits in the algorithm.

          1 reply 0 retweets 0 likes
        5. Daemon Todd‏ @daemontodd 10 Dec 2017
          Replying to @daemontodd @ClintEhrlich and

          You get a "Heisenberg-like" tradeoff, where as you generalize, you say less about more, and when you specialize you say more about less, but regardless, your total information is bounded above.

          1 reply 0 retweets 0 likes
        6. Daemon Todd‏ @daemontodd 10 Dec 2017
          Replying to @daemontodd @ClintEhrlich and

          The fact that AlphaZero was (in some sense) less complex and more effective than AlphaGoZero shows that the former "structurally" improved on the latter, analogous to removing deadweight. But of course, doing that can only take you so far.

          2 replies 0 retweets 1 like
        7. Clint Ehrlich‏ @ClintEhrlich 11 Dec 2017
          Replying to @daemontodd @robinhanson and

          The question is whether it's also far enough to produce sudden, superhuman performance in other domains. It would be useful if skeptics would start anchoring their perspective with predictions: where would you need to see progress before taking imminent arrival of AGI seriously?

          2 replies 0 retweets 0 likes
        8. Clint Ehrlich‏ @ClintEhrlich 11 Dec 2017
          Replying to @ClintEhrlich @daemontodd and

          Example of anchoring: Books published as recently as this year discounted the general intelligence of chess AI because it did not "prune" possibilities like a human player. AlphaZero now exhibits exactly that behaviour.

          1 reply 0 retweets 0 likes
        9. David Manheim‏ @davidmanheim 11 Dec 2017
          Replying to @ClintEhrlich @daemontodd and

          You should read through the earlier linked facebook discussion on EY's post from a few months back which discusses most of these points; https://www.facebook.com/yudkowsky/posts/10155848910529228?comment_id=10155848934404228 …

          1 reply 0 retweets 0 likes
        10. 7 more replies
        1. Daemon Todd‏ @daemontodd 10 Dec 2017
          Replying to @ESYudkowsky @robinhanson @davidmanheim

          EY, yes, but it's clearly not indefinitely practical to improve performance by reducing complexity. When something is badly built, performance can be improved by removing obstructions. Past that, increased performance requires increased complexity (e.g. human vs mouse brains).

          0 replies 0 retweets 1 like
