Andrew Lampinen

@AndrewLampinen

PhD Candidate, Stanford University. Interested in cognition, artificial intelligence, and transfer.

Joined November 2019

Tweets


  1. 1 Feb

    A paper supporting our observation () that egocentric perspective improves generalization in RL!

  2. Retweeted
    27 Jan

    Exciting example of the emergentist perspective in the domain of number. Most interesting part: an account of developmental refinement of number sense acuity from experience - this is empirically true but very undertheorized in nativist accounts.

  3. Retweeted

    We’re honoring a recipient of a 2020 Troland Research Award! He is being honored for his pioneering studies of children’s early language learning!

  4. Retweeted
    10 Jan

    I'm increasingly asked general questions about language by AI and ML practitioners who are interested in starting to work on it (or who already do so without considering the domain too much). I find my perspective to be quite different from that of many NLP researchers and intros. 1/2

  5. Retweeted

    Analyzing toddlers’ early language learning with a novel statistical approach, an associate professor of psychology found that rule-based grammatical knowledge emerges gradually, with a significant increase around the age of 24 months.

  6. Retweeted

    A new paper has been making the rounds with the intriguing claim that YouTube has a *de-radicalizing* influence. Having read the paper, I wanted to call it wrong, but that would give the paper too much credit, because it is not even wrong. Let me explain.

  7. 13 Dec 2019
  8. 13 Dec 2019

    Excited to present "Zero-shot task adaptation by homoiconic meta-mapping" at 3:15 tomorrow at the NeurIPS Learning Transferable Skills Workshop! A new perspective on zero-shot adaptation, and a parsimonious implementation of it. Paper and poster in thread!

  9. Retweeted
    13 Dec 2019

    It is commonly said that models like BERT or GPT-2 don't really 'understand', but what does it actually mean to understand language? We try to answer this via a roadmap for human-like understanding of language in machines.

  10. Retweeted
    11 Dec 2019

    Will be presenting our work this morning: please drop by between 10:45 AM and 12:45 PM, poster #152.

  11. Retweeted
    11 Dec 2019

    Many Labs 4 released! It tested whether original authors’ involvement in design would improve replicability. It is an appealing and plausible idea: they have tacit knowledge that isn’t available by reading the paper.

  12. Retweeted
    9 Dec 2019

    I'm thrilled to be starting my research group. Interested in deep learning theory and neuroscience? I'm looking for team members!

  13. Retweeted
    21 Nov 2019

    Excited to share new work investigating the texture bias in ImageNet-trained CNNs.

