Dileep George

@dileeplearning

Founder at Vicarious AI. Previously Founder and CTO at Numenta. Triply EE (BTech IIT-Mumbai, MS&PhD Stanford)

Joined June 2017

Media

  1. Feb 2
    Replying to a user

    take on the trolley problem.

  2. Jan 30
    Replying to a user

    Really nice! For your module on generative models, I would like to suggest our RCN paper, which has exactly this kind of shape-object factorization and is an example of a neuroscience-inspired predictive-coding model that works in practice.

  3. Jan 28
    Replying to a user

    Why do you consider image interpretation to be non-compositional? It is context-sensitive and compositional, and we did tackle that problem in our RCN paper. See Fig 1C(i), where the interpretation switches between 'm' and 'un', and (iii), between 'bison' and 'bike'

  4. Jan 27
    Replying to users

    Yes, that you don't know until you do, and you won't know unless you try. Btw, there was an article in the NYT in 1903 about how far away flight was... 'Flying machines that do not fly'. It is true that at that point we didn't know and it looked very far off, but it was just weeks away!

  5. Jan 25
    Replying to users

    It is right on top as the premise of the article. :) Makes me not want to read it, because it is obviously incorrect.

  6. Jan 21
    Replying to users

    The irony is that "flapping is not required to fly" was something the Wright brothers learned from observing birds! They kept detailed notes about birds soaring long distances with no movement of their wings, except for a slight twist of the wingtips for control.

  7. Jan 4

    Very nice blog. I think the neocortex uses a dozen or so general-purpose priors that help with learning and inference, and understanding how to encode those priors is an important challenge for AI... as I put it here:

  8. Jan 2
    Replying to users and others

    This snippet from your paper is interesting... see our recent work on cognitive maps where spatial representations are formed from pure sequence learning...

  9. Jan 2

    Yoshua wants a new term for 'specific forms of deep learning' (which includes causality, reasoning, etc.?). Looks like he has almost spelled it out: Deep Learning 2.0

  10. Jan 2
  11. Jan 1

    wishes you a Happy New Year, really!

  12. Dec 26, 2019
    Replying to a user

    The Lee & Mumford paper is a good start. Our RCN paper is an example of instantiating those ideas in a model. Also, see cognitive programs, where the visual cortex is not a passive module but part of interactive querying of the world.

  13. Dec 24, 2019
    Replying to users and others

    Hey, but it is "evolving"; are you sure PGMs won't be included in the future? Also, if PGMs weren't included, Geoff could have titled this differently, no?

  14. Dec 24, 2019

    is proud to have predicted this slide from Yoshua. Just use our flow chart instead of a wall of text....

  15. Dec 24, 2019
    Replying to users

    Of course not, because it is predicted here :)

  16. Dec 16, 2019
  17. Dec 16, 2019
    Replying to users

    I liked the content, except for the fact that it *systematically* avoids citing relevant work. Multiple researchers have worked on concepts as program learning. Read our blog and paper to see the similarities to our recent work:

  18. Dec 15, 2019
    Replying to a user

    Have you seen this paper? I can confirm that it is hard to stay out of the extremes on this :)

  19. Dec 14, 2019

    Actually I take that comment back because look how cute those agents are....awww

  20. Dec 11, 2019

    When I am as successful as Yoshua Bengio, I will write a paper with the title 'The Pretentious Prior' ...and kinda get away with it :)
