Brenden Lake

@LakeBrenden

Assistant Professor of Psychology and Data Science @ NYU. Research Scientist @ Facebook AI Research.

Joined June 2018

Tweets

  1. Retweeted
    Jan 27

    Well, for Spring 2020 there are 90 registered students in the NYU Computational Cognitive Modeling graduate class, with a waitlist of 42. 🙃

  2. Retweeted
    Jan 24

    Facebook's AI Residency program, which gives residents 12 months of practical experience in AI working with leading researchers, is accepting applications until January 31st! Current residents Diana and Eric discuss their experience in this Q&A:

  3. Jan 18

    A disturbing use of facial recognition tech from a start-up in lower Manhattan. My recommendation: replace your profile photos with adversarial images

  4. Jan 14

    The source code for meta seq2seq learning is now available through . You can reproduce the experiments from my paper, or run memory-based meta learning on other seq2seq problems.

  5. Jan 5

    "The number of tech jobs in New York City has surged 80 percent in the past decade, to 142,600, from 79,400 in 2009"

  6. Jan 3

    The Frostbite scores in our paper were from Josh Tenenbaum and . My score, in fact, remains unpublished :)

  7. Retweeted
    Jan 3

    Submissions for our workshop on "Bridging AI and Cognitive Science" at are now open! Submit your recent work, work in progress, or controversial opinions at the intersection of AI and cognitive science.

  8. Retweeted
    Dec 26, 2019

    Bayesian methods are *especially* compelling for deep neural networks. The key distinguishing property of a Bayesian approach is marginalization instead of optimization, not the prior, or Bayes rule. This difference will be greatest for underspecified models like DNNs. 1/18

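    The marginalization-versus-optimization distinction in the retweet above can be seen numerically in a toy example. The sketch below (illustrative only, not from the thread) compares a grid-posterior predictive average with a MAP plug-in estimate for a coin-flip model.

    ```python
    # Toy illustration: Bayesian marginalization vs. point-estimate optimization.
    # Coin-flip model with a uniform prior over the bias, evaluated on a grid.

    def posterior(heads, tails, grid):
        """Unnormalized Bernoulli likelihood under a uniform prior, normalized on a grid."""
        weights = [p**heads * (1 - p)**tails for p in grid]
        z = sum(weights)
        return [w / z for w in weights]

    grid = [i / 100 for i in range(1, 100)]
    post = posterior(heads=2, tails=1, grid=grid)

    # Marginalization: average the predictive over the entire posterior.
    p_bayes = sum(p * w for p, w in zip(grid, post))

    # Optimization: plug in the single maximum-a-posteriori bias.
    p_map = grid[max(range(len(grid)), key=lambda i: post[i])]
    ```

    With only three observations the two answers disagree (the posterior mean is 0.6 while the MAP bias is about 0.67); the gap is largest exactly when the model is underdetermined by the data, which is the thread's point about DNNs.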
  9. Retweeted
    Dec 16, 2019

    Our theme issue is out: If you are interested in compositionality from formal, computational, or neurophysiological/neurobiological perspectives with a mechanistic bent, eat your heart out! It was an honour to edit this together with the inimitable Giosuè Baggio .

  10. Dec 18, 2019

    "Systematic generalization" and "compositionality" were buzzwords at . At the meta-learning workshop, I discussed compositionality in human language and thought, and how understanding it would inform machine intelligence. Video begins 26:35

  11. Dec 17, 2019

    I hear “It’s all just experience” used to justify training algorithms for thousands of years worth of experience (often on just one game or task). But let’s not confuse this with human-like learning, and let's not dismiss fundamental questions about the origins of knowledge!(2/2)

  12. Dec 17, 2019

    It’s misleading to lump together nature and nurture as just “experience.” The extent to which capabilities are built-in vs. learned is foundational in cognitive science and cognitive development, because it suggests different representations, architectures, algorithms, etc. (1/2)

  13. Dec 14, 2019

    Maxwell Nye talking about learning compositional rules via neural program synthesis, at the context and compositionality workshop

  14. Retweeted
    Dec 14, 2019

    Our workshop on Context & Composition in Biological and Artificial Neural Systems just started! Come hear perspectives from folks like &

  15. Dec 13, 2019

    I'm speaking on "Compositional generalization in minds and machines" at 5 today in the meta learning workshop (West Ballroom B). Will cover SCAN benchmark, human experiments, and compositional generalization through "meta seq2seq learning"

  16. Dec 12, 2019

    Here's a link to the progress report: Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2019). The Omniglot challenge: a 3-year progress report. Current Opinion in Behavioral Sciences, 29, 97-104.

  17. Dec 12, 2019

    Our "progress report" on the Omniglot Challenge was featured in this year's AI index report: "Achieving human-level concept learning will require learning richer representations from less data, and reconfiguring these representations to tackle new tasks.” (see pgs 64 and 223)

  18. Dec 12, 2019

    The best part of conferences is catching up with friends like these two. We should dig up a picture from the 2010 machine learning summer school in Sardinia, where we really got to know each other

  19. Dec 11, 2019

    Neural nets struggle with systematic generalization, but can be improved through "meta seq2seq learning": training on many seq2seq problems to acquire the compositional skills needed for solving new problems. Come by poster 178 Thu. at 10:45

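    The episodic setup the tweet above describes, learning a new mapping from a small support set rather than memorizing fixed symbol meanings, can be sketched in miniature. The sketch below is hypothetical (the primitives and the lookup "model" are stand-ins; the real experiments use the released source code and a trained network):

    ```python
    import random

    # Toy sketch of the episodic training setup behind meta seq2seq learning.
    # Each episode assigns fresh meanings to the input symbols, so the learner
    # must infer a symbol's meaning from the episode's support set.

    PRIMITIVES = ["jump", "walk", "run", "look"]
    ACTIONS = ["JUMP", "WALK", "RUN", "LOOK"]

    def make_episode(rng):
        """Sample a random symbol->action mapping; return support and query pairs."""
        mapping = dict(zip(PRIMITIVES, rng.sample(ACTIONS, k=len(ACTIONS))))
        support = [([w], [mapping[w]]) for w in PRIMITIVES]
        # Query probes a novel composition, e.g. "jump twice" -> JUMP JUMP.
        w = rng.choice(PRIMITIVES)
        query = ([w, "twice"], [mapping[w], mapping[w]])
        return support, query

    def retrieve(support, query_seq):
        """Stand-in for the model: look up each symbol in the support memory
        and expand the 'twice' modifier compositionally."""
        memory = {tuple(inp): out for inp, out in support}
        out = []
        for tok in query_seq:
            if tok == "twice":
                out.extend(out[-1:])  # repeat the previous action once more
            else:
                out.extend(memory[(tok,)])
        return out

    rng = random.Random(0)
    support, (q_in, q_out) = make_episode(rng)
    assert retrieve(support, q_in) == q_out
    ```

    The point of the setup is that a correct answer requires binding symbols to meanings within the episode; across many such episodes a trained network can acquire that binding skill itself.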
  20. Retweeted
    Dec 10, 2019

    Mini thread: If you haven't already read 's beautiful & insightful paper on intelligence & AI, you should. An elegant distillation of where we are now, & an intriguing proposal for how to make progress.

