Cody Wild

@decodyng

machine learning research engineer; lover of cats, languages, and elegant systems; explorer & explainer at

Oakland, CA
Joined February 2009

Tweets


  1. Pinned Tweet
    Dec 7, 2019

    Looking for a way to catch up on paper reading before NeurIPS? I've just finished a month-long writing project where I summarized papers on everything from lottery tickets to model-based reinforcement learning, and collected them all here

  2. Dec 14, 2019

    Honestly a reasonable uncertainty-based exploration strategy. (The kind of reaction you have after spending a day at the Deep RL workshop)

  3. Retweeted
    Dec 13, 2019

    Here is a beautiful drawing of the structure of the Radium atom by Niels Bohr in 1922. Niels Bohr won the Nobel Prize in 1922 for his work on the structure of atoms. He described the atom as a small positively charged nucleus surrounded by waves of electrons.

  4. Dec 13, 2019

    I would be very on board with more ML poster intros in this style (seen at )

  5. Retweeted
    Dec 10, 2019

    Want to ensure AI is beneficial for society? Come talk to like-minded people at the Human-Aligned AI Social at , Thursday 7-10 pm, room West 205-207.

  6. Retweeted

    And today’s all we got So we cannot stop This is our block...

  7. Dec 11, 2019

    A tiny autonomous car making an unsuccessful break for freedom at the poster session.

  8. Dec 11, 2019

    File under "phrases I wasn't expecting to read," also "exceptionally niche gym marketing slogan" (seen at poster session)

  9. Dec 10, 2019

    Oops, we DDOS-ed the poster session

  10. Dec 9, 2019

    If you want to understand this paper on the failure of uniform convergence bounds for deep learning - named best New Directions paper at , I highly recommend this accompanying blog post. It prioritizes clarity over jargon and even has FAQs!

  11. Dec 9, 2019

    Came to the pleasing realization just now that the bus I'm taking from my Airbnb to is literally the Knight Bus

  12. Dec 9, 2019

    "... I spent a lot of time making that chicken" From the tutorial on Imitation Learning for Natural Language Generation at , discussing the difficulty of calculating an expectation over a policy while you're learning that policy

  13. Retweeted
    Dec 9, 2019

    Visual proof that strong quantization doesn’t prevent recognition

  14. Dec 9, 2019

    Q: What does a Portland deep learning researcher love to do when they have a NLP problem? A: Put a BERT on it jokes

  15. Dec 8, 2019

    What starts playing in my head whenever I read a Normalizing Flows paper: problems

  16. Dec 7, 2019

    I'll be in Vancouver for helping present a paper at the DeepRL workshop, taking in as much new information as my brain can handle, & hopefully having some good conversations! I'd love to chat about scientific communication, RL or explainability (among other things!)

  17. Retweeted
    Dec 5, 2019

    Stitchfix algos did white elephant where we had to put a clue on the outside. So I made a black box with weights inside

  18. Nov 29, 2019

    This paper is interesting and provocative, and though I don't think I fully agree with it, it raises interesting questions about the marginal value of parametric models in RL relative to the "empirical model" of a big replay buffer.

  19. Nov 28, 2019

    Are disentangled features useful for downstream task performance? This paper tests unsupervised representation learning methods on a simple reasoning task and finds disentanglement correlates with performance.

  20. Retweeted
    Nov 26, 2019

    Single Headed Attention RNN: Stop Thinking With Your Head "The final results are achievable in plus or minus 24 hours on a single GPU as the author is impatient." "Take that Sesame Street." paper: code:

