Blake Camp

@blake_camp_1

AI PhD Candidate GSU - Duke Alum - ex NYRB - into Human-Level AI, computational neuroscience, futbol, pizza, wine, books, films, markets, family, friends, life

Joined February 2010

Tweets


  1. Feb 2
  2. Retweeted
    Feb 2

    The brain is great at exploiting its high dimensionality to avoid something that a lot of neural networks are bad at, ‘catastrophic forgetting.’ We automatically learn new skills without messing up old ones.

  3. Feb 1

    "Cycle firing encodes hypothetical experience, including multiple possible futures"...awesome

  4. Jan 30

    Disagree, curriculum is a better word for this.

  5. Jan 29

    The senility of our senior citizens is no laughing matter, and in that regard I sympathize. But, this is utter lunacy. Comical, embarrassing, downright weird.

  6. Retweeted
    Jan 29

    Another day at the Lab: "Blitzforschung" = 5-min blackboard presentation of a paper 🤓. Yesterday's gem: Backpropamine by , et al. (2019; ICLR). Dopamine-inspired modulation of differentiable plasticity = Bridging 🌉 timescales in Meta-Learning 🧠

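    The mechanism being summarized, roughly: each connection carries a slow, meta-learned weight plus a fast Hebbian trace, and a dopamine-like signal gates how strongly that trace is updated. A minimal sketch, assuming a single feed-forward plastic layer with fixed, illustrative values (the paper meta-learns the weights, plasticity coefficients, and the modulation signal, and uses recurrent networks):

      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_out = 8, 4
      w     = rng.normal(0, 0.1, (n_in, n_out))   # slow weights (meta-learned in the paper)
      alpha = rng.normal(0, 0.1, (n_in, n_out))   # per-connection plasticity coefficients
      hebb  = np.zeros((n_in, n_out))             # fast Hebbian trace, reset each episode

      def plastic_step(x, modulation):
          # Effective weight = slow part + plastic part; the dopamine-like scalar
          # `modulation` gates the within-episode Hebbian update, which is how the
          # fast (within-lifetime) and slow (meta-learned) timescales are bridged.
          global hebb
          y = np.tanh(x @ (w + alpha * hebb))
          hebb = np.clip(hebb + modulation * np.outer(x, y), -1.0, 1.0)
          return y

      y = plastic_step(rng.normal(size=n_in), modulation=0.5)
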
  7. Jan 28
  8. Jan 28

    The word "itself" is bound to show up in ALOT of AI papers this year.

  9. Retweeted
    Jan 28

    Procedural Content Generation via Reinforcement Learning “A new approach to procedural content generation in games, where level design is framed as a game (as a sequential task problem), and the content generator itself is learned.”

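    The framing in one toy sketch (the environment and reward below are illustrative stand-ins, not the paper's setup): the state is the current level, an action edits one tile, and the reward scores the resulting level, so the content generator is just a policy that any RL algorithm can train.

      import numpy as np

      class LevelDesignEnv:
          """Toy MDP: designing a tile grid one edit at a time."""
          def __init__(self, size=8):
              self.size = size
              self.grid = np.zeros((size, size), dtype=int)   # 0 = floor, 1 = wall

          def reset(self):
              self.grid[:] = 0
              return self.grid.copy()

          def step(self, action):
              r, c, tile = action                   # place `tile` at cell (r, c)
              self.grid[r, c] = tile
              # Stand-in objective: levels with roughly 30% walls score best.
              reward = -abs(self.grid.mean() - 0.3)
              return self.grid.copy(), reward, False, {}

      env = LevelDesignEnv()
      state = env.reset()
      state, reward, done, info = env.step((2, 3, 1))
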
  10. Retweeted

    As far as current machine learning is concerned, generalization originates from the ability to learn the latent manifold on which the training data lies, i.e. the ability to interpolate between training samples (local generalization, by definition)

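    A toy illustration of the claim: data lying on a 1-D manifold (the unit circle) embedded in R^2. Interpolating in the latent coordinate (the angle) stays on the manifold, while interpolating directly between raw samples leaves it; "local generalization" is interpolation of the first kind.

      import numpy as np

      theta_a, theta_b = 0.2, 2.5                        # latent coordinates of two samples
      x_a = np.array([np.cos(theta_a), np.sin(theta_a)])
      x_b = np.array([np.cos(theta_b), np.sin(theta_b)])

      t = 0.5
      theta_mid = (1 - t) * theta_a + t * theta_b
      latent_interp  = np.array([np.cos(theta_mid), np.sin(theta_mid)])
      ambient_interp = (1 - t) * x_a + t * x_b

      print(np.linalg.norm(latent_interp))    # 1.0   -> still on the manifold
      print(np.linalg.norm(ambient_interp))   # ~0.41 -> off the manifold
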
  11. Jan 21

    Been thinking about this for a while, very cool work here from DeepMind. We probably shouldn't be waiting to process more information in the forward pass before sending gradients backwards.

  12. Jan 19

    "... generally intelligent learners. Just thinking about environments as something that can be optimized by learning algorithms is interesting and opens many new research directions." (2)

  13. Jan 19

    "In my opinion, a more promising direction is to explicitly optimize environments to be effective for learning, instead of hoping we can create environments that create dynamics that lead to coevolutionary arms races that produce..." (1)

  14. Retweeted
    Dec 27, 2019

    1/ This shows how far the field has regressed in its understanding of probability. It's not a controversial opinion; it's the opinion of someone who hasn't understood that a prior over weights in a neural network induces a prior over functions.

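    The point is easy to see concretely: draw the weights of a small network from a Gaussian and each draw is a random function, so the weight prior is a function prior. A minimal sketch (the architecture and scales are arbitrary choices for illustration):

      import numpy as np

      rng = np.random.default_rng(0)
      xs = np.linspace(-3, 3, 200)[:, None]   # 1-D inputs on a grid
      hidden = 64

      def sample_function():
          # One sample from the prior over weights of a one-hidden-layer tanh network...
          W1 = rng.normal(0, 1.0, (1, hidden))
          b1 = rng.normal(0, 0.5, hidden)
          W2 = rng.normal(0, 1.0 / np.sqrt(hidden), (hidden, 1))
          b2 = rng.normal(0, 0.5, 1)
          # ...is one sample from the induced prior over functions f: R -> R.
          return (np.tanh(xs @ W1 + b1) @ W2 + b2).ravel()

      prior_draws = np.stack([sample_function() for _ in range(10)])
      print(prior_draws.shape)   # (10, 200): ten random functions evaluated on the grid
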
  15. Jan 18

    "We show that an –approximate meta-gradient can be computed via implicit MAML using O˜(log(1/)) gradient evaluations and O˜(1) memory, meaning the memory required does not grow with number of gradient steps."

  16. Jan 18

    "Sec-ond, implicit MAML is agnostic to the inner optimization method used, as long as it can find an approximate solution to the inner-level optimization problem."

  17. Jan 18

    "First, the inner optimization path need not be stored nor differentiated through, thereby making implicit MAML memory efficient and scalable to a large number of inner optimization steps."

  18. Jan 18

    This paper is so, so good. Massive implications. Meta-Learning with Implicit Gradients:

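    The mechanism behind the three quoted properties, in a toy sketch: with an l2-regularized inner problem phi*(theta) = argmin_phi L_train(phi) + (lam/2)||phi - theta||^2, the implicit function theorem gives d(phi*)/d(theta) = (I + Hessian(L_train)/lam)^(-1), so the meta-gradient needs only the (approximate) inner solution and one linear solve, typically done with conjugate gradient and Hessian-vector products, never the inner optimization path. The quadratic losses below are illustrative, not from the paper:

      import numpy as np

      rng = np.random.default_rng(0)
      d, lam = 5, 10.0
      A = rng.normal(size=(d, d)); A = A @ A.T + np.eye(d)   # Hessian of the inner loss
      b = rng.normal(size=d)      # L_train(phi) = 0.5 phi'A phi - b'phi
      c = rng.normal(size=d)      # L_test(phi)  = 0.5 ||phi - c||^2
      theta = rng.normal(size=d)  # meta-parameters

      # Inner loop: any optimizer that approximately solves
      #   min_phi  L_train(phi) + (lam/2) ||phi - theta||^2
      phi = theta.copy()
      for _ in range(200):
          phi -= 0.01 * (A @ phi - b + lam * (phi - theta))

      # Outer loop: implicit meta-gradient. Nothing from the 200 inner steps is stored
      # or differentiated through; d(phi*)/d(theta) = (I + A/lam)^(-1), so one solve:
      meta_grad = np.linalg.solve(np.eye(d) + A / lam, phi - c)

      # Sanity check against the closed-form inner solution.
      phi_star = np.linalg.solve(A + lam * np.eye(d), b + lam * theta)
      exact = lam * np.linalg.solve(A + lam * np.eye(d), phi_star - c)
      print(np.allclose(meta_grad, exact, atol=1e-3))   # True
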
  19. Retweeted
    Jan 17
    Replying to:

    But that doesn't mean learning is selected against! Quite the opposite. It means learning species will usually outcompete non-learning species, and they will develop more hardwired behavior at a faster rate. So, learning will be strongly selected for.

  20. Retweeted
    Jan 16

    How can we predict and control the collective behaviour of artificial agents? Classical game theory isn't much help when there are >2 agents. In our paper, we find markets impose useful structure on interactions between gradient-based learners:

