Adam Santoro

@santoroAI

Research Scientist in artificial intelligence at DeepMind

Joined May 2016

Tweets


  1. Retweeted
    Feb 1

    A paper supporting our observation () that egocentric perspective improves generalization in RL!

  2. Jan 17

    Very nice paper! Multi-modal alignment allows for unsupervised discovery of concepts/labels/meaning. IMO these ideas will eventually allow language models to exhibit deep "understanding" of text, and will solve the "lack of common sense" problem

  3. Retweeted
    Jan 17

    New paper w , "Learning as the Unsupervised Alignment of Conceptual Systems". Supervised learning tasks can be solved by purely unsupervised means by exploiting correspondences across systems (e.g., text, images, etc.). 1/5

  4. Retweeted
    Jan 15

    When neuroscience and AI researchers get to chatting, cool stuff happens! My first, and I hope not last, trip into neuroscience has been published in Nature. 1/

  5. Jan 10

    This logic is especially grating for studies proposing new functions/computations: "We think brain area X might be doing Y. BTW, we can totally imagine how Y would be selected for in evolution. Therefore it's even more probable that X does Y."

  6. Jan 10

    Proving a trait is beneficial (or worse, hypothesizing that it *could* be beneficial) is not sufficient evidence that it is an adaptation. Too many papers throw in statements speculating adaptations, and I don't know why. Especially papers that have nothing to do with evolution

  7. Retweeted
    Jan 10

    I'm increasingly asked general questions about language by AI and ML practitioners who are interested in starting to work on it (or who already do so without considering the domain much). I find my perspective to be quite different from that of a lot of NLP researchers and intros. 1/2

  8. Jan 4
  9. Retweeted
    Dec 29, 2019

    Some people in ML share the illusion that models expressed symbolically will necessarily/magically generalise better compared to, for example, parametric model families fit on the same data. This belief seems to come from a naive understanding of mathematics 1/5

  10. Retweeted
    Dec 25, 2019

    Rephrasing in my own words: DL is a collection of tools for building complex, modular, differentiable functions. These tools are devoid of meaning on their own; it is pointless to discuss what DL can or cannot do in the abstract. What gives it meaning is how it is trained and how the data is fed to it

  11. Dec 23, 2019

    Every time I watch a debate I end up asking myself why I ever bother to watch debates

  12. Dec 18, 2019

    Hmm, about that whole "compositionality" thing...🧐

  13. Dec 15, 2019

    I'd argue it's more than this, since we shouldn't assume a child is only taking advantage of a single ancestral line. There's massive parallelism in the population of ancestors that either directly or indirectly influence a single person's genome

  14. Dec 13, 2019

    I think the ability will emerge when models perform *many* tasks, live in a rich world with ample opportunity for grounding of many types, and consistently use analogy. And I'm fully aware of the irony saying this after having built a dataset to test systematic generalization 🙃

  15. Dec 13, 2019

    IMO models will not display systematic generalization when trained on small, hand-crafted datasets designed to measure systematic generalization, unless we build tons of non-general priors. 1/2

  16. Dec 13, 2019

    Had the pleasure of reading a draft of this, and I cannot recommend it enough. Please read it if you're at all interested in the role of language in intelligence

  17. Dec 12, 2019

    Arguing against the repurposing hypothesis, wherein conceptual structuring uses navigation machinery: "we suggest there are no intrinsic ‘place’ or ‘grid’ cells, but instead a flexible system that will represent the relevant variables at hand, including physical space"

  18. Dec 10, 2019

    "The term biological plausibility should be dropped and instead researchers should clearly state the relevant datasets they intend to address with their theory or model. " I think this exact logic should also apply to AI research and the notion of models' "understanding"

  19. Dec 10, 2019

    Love it --> "To be charitable, when neuroscientists claim biological plausibility it is possible they are quietly entertaining some empirical finding that is consistent with their preferred model rather than a vague unsubstantiated intuition."

  20. Dec 10, 2019

    "Although there can be a tendency[...]to dismiss higher-level explanations[...]as biological implausible or not real[...]this makes as much sense as stating that a sorting algorithm, a car's engine, the heart, or the hippocampus is not real because it can be further decomposed."

