Gary Marcus

@GaryMarcus

"If right doesn't matter, we're lost. If the truth doesn't matter, we're lost."

Joined December 2010

Media

  1. 7 hours ago
    Replying to users

    Meena has exactly the same core issue as ELIZA: it doesn't build a model of what it or the interlocutor has said, and it often contradicts what happened a few turns earlier. Topic without understanding in 1965, topic without understanding in 2020. Here's a sample:

  2. Jan 26

    to defend 's pretends DL is only 10 years old! Perceptrons date to 1958; DL to 1967, & the "logical argument" below grossly misrepresents my position, which says we need *much more* than just hybrid architecture. from : 1/2

    Show this thread
  3. Jan 24
  4. Jan 13

    Coming Soon-ish: The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence. First draft almost done!

  5. Jan 11
    Replying to users

    i think people can actually represent fragments of FOL (or something close to it) just fine, up to limits on memory. look at my slide in the debate for example; given a moment's background, the universally quantified sentence in the top right can be easily represented:
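    (A generic illustration, since the slide itself is not reproduced here; the sentence is hypothetical, not the one from the debate.) An English sentence like "every bottle that is dropped breaks" corresponds to a first-order formula along the lines of

        \forall x \, \big( \mathrm{Bottle}(x) \land \mathrm{Dropped}(x) \rightarrow \mathrm{Breaks}(x) \big)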

  6. Jan 7

    Warning: slightly out-of-control narrative about GPT-2 & chess spreading rapidly!
    - the system has not properly induced the rules; serious problems sticking to legal moves
    - relies heavily on a large opening library
    - failures in keeping track of state, akin to my NeurIPS talk (ex. below)
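    A minimal sketch of the kind of legality check implied above, using the python-chess library; the move list is a made-up illustration, not an actual GPT-2 transcript:

        import chess

        def first_illegal_move(san_moves):
            # Play SAN moves from the initial position; return (index, move) for the
            # first move that is illegal or unparseable, or None if all moves are legal.
            board = chess.Board()
            for i, san in enumerate(san_moves):
                try:
                    board.push_san(san)  # raises ValueError on illegal/ambiguous SAN
                except ValueError:
                    return i, san
            return None

        # "Ra3" on White's third move is illegal: the a1 rook is blocked by the a2 pawn.
        print(first_illegal_move(["e4", "e5", "Ra3"]))  # -> (2, 'Ra3')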

  7. Jan 6

    Hinton has not really articulated his arguments so much as ridiculed his opposition, not really allowing any quarter for prior value, e.g.:

  8. Jan 3
    Replying to users

    didn’t say it was surprising; said it was huge: direct confirmation of something Christof Koch, among others, had speculated for a long time. the world now is full of stuff like the below (emphasis on “sums”). assuming this paper is replicated & extended, thinking will change.

  9. Jan 2

    Perceptive comments on yesterday's Bengio-Marcus dialog, from the computer vision researcher Yiannis Aloimonos, judging the outcome a tie and suggesting value in meeting in the middle.

  10. Jan 2
    Replying to users and others

    i was specifically responding to this ref of yours to mouse:

  11. Dec 30, 2019

    this is my world: a reporter asks Hinton about symbolic hybrids, Hinton likens them to a dying relic. I then strenuously defend the position that such hybrids are still worth considering, and then? people attack me for suggesting that anyone might have a bias against symbols. 🤷🏻‍♂️

  12. Dec 30, 2019

    As this decade comes to a close, and apropos the , I want to repeat one of the first things that I wrote in the last decade, because looking more broadly than we have looked still strikes me as the best way forward:

  13. Dec 30, 2019
    Replying to a user

    see my other reply; you are misinterpreting me. if you want to attack, fine, but don’t attack me where i am asking people to nominate their favorite alternatives:

  14. Dec 30, 2019

    sorry but no; you are distorting things. i said shame on you for trying to stifle a conversation about alternatives by attacking me & trying to divert the conversation from a productive discussion of alternatives that many contributed to. compare your tweet to the query i posed below.

  15. Dec 30, 2019
    Replying to users

    what do you think of yoshua’s definition here? doesn’t it subsume your approach into deep learning, if you accept the definition?

  16. Dec 29, 2019
    Replying to users

    yep people differ in what world models they build. but they don't differ in *whether* they build world models. we all do it every time we read a book or an article or watch a story. GPT-2 never does, not in the sense my kids do. which is why it struggles with stuff like this:

  17. Dec 29, 2019

    Lost in the debate: it may have sounded like I think hybrid models are sufficient for general AI. I don't. Hybrids are necessary, not sufficient. The Rx given in is also about, e.g., building knowledge & frameworks for things like space, time & causality:

  18. Dec 29, 2019

    "Deep learning est mort": Jan 2018 blog quoting that partly resembles Bengio's recent definition of deep learning but instead dubs it "differentiable programming" I prefer 's framing We still need a separate way of talking about current models & their limits

    Show this thread
  19. Dec 29, 2019

    🙏 Just wanted to say how much I appreciate all the private notes of support that many people have been sending me. These notes give me strength. 🙏

  20. Dec 28, 2019
    Replying to a user

    hybrid AI was an important strand of the book, but we argue that it is necessary but not sufficient. our overall recipe:
