Gary Marcus (@GaryMarcus)

CEO/Founder of http://Robust.AI; cognitive scientist and best-selling author. New book: http://Rebooting.AI: Building Artificial Intelligence We Can Trust

garymarcus.com · Joined December 2010

Tweets

    1. Jelle Zuidema‏ @wzuidema Aug 25
      Replying to @wzuidema @GaryMarcus

      tree structure, or compositionality, at least under reasonable definitions. Not reasonable: picking your favorite symbolic system, checking whether an NN learns exactly that, and interpreting failure as evidence that the whole class cannot be learned.

      2 replies 0 retweets 0 likes
    2. Beau Sievers‏ @beausievers Aug 25
      Replying to @wzuidema @GaryMarcus

      I feel like this raises as many concerns as it addresses... shouldn’t a properly compositional system be able to do quite a large range of tasks? So for any given task, why would an apparently compositional net fail?

      1 reply 0 retweets 0 likes
    3. Jelle Zuidema‏ @wzuidema Aug 26
      Replying to @beausievers @GaryMarcus

      We looked at a simple arithmetic task with addition, subtraction, and brackets. Simple, but an infinite domain and clearly compositional. Networks approximate answers within the training range almost perfectly and generalize quite well, with errors increasing with the length of expressions.

      2 replies 0 retweets 1 like
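For concreteness, the task Zuidema describes can be pictured with a toy generator like the sketch below. This is an illustrative reconstruction, not the actual code or data from the papers he cites; the function name and depth parameter are made up.

```python
import random

def gen_expr(depth):
    """Recursively build a bracketed +/- expression over small integers.

    depth 0 yields a bare digit; larger depths nest sub-expressions,
    so expression length grows with depth.
    """
    if depth == 0:
        return str(random.randint(0, 9))
    op = random.choice(["+", "-"])
    left = gen_expr(random.randint(0, depth - 1))
    right = gen_expr(random.randint(0, depth - 1))
    return f"( {left} {op} {right} )"

# The domain is infinite and compositional: the value of every
# expression is determined by the values of its parts.
for d in range(1, 5):
    e = gen_expr(d)
    print(f"depth {d}: {e} = {eval(e)}")  # eval is safe: we built the string
```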
    4. Gary Marcus‏ @GaryMarcus Aug 26
      Replying to @wzuidema @beausievers

      Isn’t that pretty much as I anticipated? Cc @rgalhama

      1 reply 0 retweets 0 likes
    5. Raquel G. Alhama‏ @rgalhama Aug 26
      Replying to @GaryMarcus @wzuidema @beausievers

      I read the prediction of the '99 ABA/ABB paper (and The Algebraic Mind) as anticipating the failure of non-symbolic NNs in generalizing outside of the training space (i.e., not being able to account for rules/hierarchy/composition)

      2 replies 0 retweets 1 like
    6. Gary Marcus‏ @GaryMarcus Aug 26
      Replying to @rgalhama @wzuidema @beausievers

      And was that prediction correct? From the summary above, it seems like it was.

      1 reply 0 retweets 0 likes
    7. Raquel G. Alhama‏ @rgalhama Aug 26
      Replying to @GaryMarcus @wzuidema @beausievers

      It looks to me like our disagreement lies in the difference between "hard" and "impossible".

      1 reply 1 retweet 1 like
    8. Raquel G. Alhama‏ @rgalhama Aug 26
      Replying to @rgalhama @GaryMarcus and others

      I am not directly involved in some of the work above so I can't address the details, but the results generally resonate with the lessons I learned when modelling the '99 ABA/ABB task with non-symbolic neural nets: [n/m]

      1 reply 0 retweets 0 likes
    9. Raquel G. Alhama‏ @rgalhama Aug 26
      Replying to @rgalhama @GaryMarcus and others

      that "standard" nets (like RNNs), used as tabula rasa, would not converge to generalizing solutions akin to the ABA/ABB grammar, allegedly because there is no pressure for these models to generalize outside the training space; but!:

      2 replies 0 retweets 0 likes
    10. Raquel G. Alhama‏ @rgalhama Aug 26
      Replying to @rgalhama @GaryMarcus and others

      My take-home is that non-symbolic neural nets *can* converge to generalizing, symbolic-like solutions, but for some tasks, they need to be pushed in the right direction. [m/m]

      1 reply 0 retweets 1 like
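As background for the ABA/ABB results Alhama describes above: in the '99 study, infants were familiarized with syllable triples from one grammar (e.g. "ga ti ga" for ABA) and tested on triples built from entirely new syllables, so success requires generalizing outside the training space. A rough sketch of such a stimulus split follows; the syllable inventories here are made up for illustration.

```python
import itertools, random

# Hypothetical syllable pools; the test set shares no syllables with
# training, so memorizing items cannot produce correct test behavior.
TRAIN_SYLLABLES = ["ga", "ti", "li", "na"]
TEST_SYLLABLES = ["wo", "fe", "de"]

def make_triples(syllables, grammar):
    """All A-B syllable pairs rendered under the ABA or ABB pattern."""
    assert grammar in ("ABA", "ABB")
    return [(a, b, a) if grammar == "ABA" else (a, b, b)
            for a, b in itertools.permutations(syllables, 2)]

familiarization = make_triples(TRAIN_SYLLABLES, "ABA")
consistent = make_triples(TEST_SYLLABLES, "ABA")    # same rule, novel syllables
inconsistent = make_triples(TEST_SYLLABLES, "ABB")  # rule-violating foils

print(random.choice(familiarization), random.choice(consistent))
```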
      Gary Marcus‏ @GaryMarcus Aug 26
      Replying to @rgalhama @wzuidema @beausievers

      Adding innateness was one of the possibilities I emphasized in The Algebraic Mind and my writings last year. Of course building in a prior for operations over variables would help a lot. I wonder how your work / the work in @wzuidema's lab squares with the failures documented by @LakeBrenden

      8:38 AM - 26 Aug 2019
      2 replies 0 retweets 1 like
        1. New conversation
        2. Jelle Zuidema‏ @wzuidema Aug 26
          Replying to @GaryMarcus @rgalhama and others

          I think our results are consistent with Brenden's, but Brenden et al., at least in the earlier results, (i) emphasized the negative results with NNs, and (ii) used training conditions less favorable for generalization to longer expressions.

          1 reply 0 retweets 0 likes
        3. Jelle Zuidema‏ @wzuidema Aug 26
          Replying to @wzuidema @GaryMarcus and others

          In particular, in Veldhoen et al. (2016) we encourage generalization to longer expressions by withholding some lengths at train time (i.e., we train on lengths 1, 2, 4, 5, 7, and also test on lengths 3, 6, 8, 9). In Alhama & Zuidema (2018, JAIR), we use 'incremental novelty exposure'.

          1 reply 0 retweets 0 likes
        4. 3 more replies
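The length-withholding scheme Zuidema describes is easy to picture. Below is a minimal sketch, using operator count as a placeholder length measure rather than whatever the papers actually use; the function and example expressions are illustrative.

```python
# Withhold some lengths at training time so the test set probes
# generalization across lengths, not just across items.
TRAIN_LENGTHS = {1, 2, 4, 5, 7}
TEST_LENGTHS = {3, 6, 8, 9}  # 3 and 6 interpolate; 8 and 9 extrapolate

def split_by_length(expressions, length_of):
    """Partition expressions into train/test according to a length measure."""
    train, test = [], []
    for e in expressions:
        (train if length_of(e) in TRAIN_LENGTHS else test).append(e)
    return train, test

# Illustrative use, with operator count standing in for "length":
exprs = ["3+4", "(3+4)-2", "(3+4)-(2+1)", "1+2-3+4-5"]
train, test = split_by_length(exprs, lambda e: e.count("+") + e.count("-"))
print("train:", train)  # lengths 1, 2, 4
print("test:", test)    # length 3
```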
        1. New conversation
        2. Raquel G. Alhama‏ @rgalhama Aug 26
          Replying to @GaryMarcus @wzuidema and others

          Agree, though with our work we can't really conclude whether innateness is the answer (what we call pre-wiring could be "pre-learned", i.e. learned with some other source of input prior to the task; we just know it helps if the wiring is there).

          1 reply 0 retweets 2 likes
        3. 1 more reply
