Brian Cheung

@thisismyhat

This is my hat, there are many like it, but this one is mine. · PhD Student at

Joined June 2015

Tweets

  1. Pinned Tweet
    Feb 15, 2019

    It turns out you can fit a lot more than one model in one set of parameters...and train them independently. Work with A. Terekhov, Y. Chen, , and B. Olshausen. (See the illustrative sketch after this tweet list.)

  2. Retweeted
    Dec 27, 2019

    I'm curious. If you're in machine learning, do you know what the Levels of Analysis are? If so, do you think they are important or relevant for research today in machine learning?

  3. Dec 23, 2019

    Code to reproduce our NeurIPS 2019 "Superposition of many models into one" () now available:

  4. Dec 22, 2019

    There's a long list of people I know that have left the field/quit their Ph.D. after trying to reproduce 'mindshare'.

  5. Dec 7, 2019

    One sentence summary of all 1427 NeurIPS 2019 papers.

  6. Nov 18, 2019

    aka physical neurons?

  7. Nov 18, 2019
  8. Nov 18, 2019
  9. Retweeted

    Our weekly CompNeuro journal club is back up and running for the quarter. Today, we discussed Cheung et al’s preprint "Superposition of many models into one" from and . Led by AJ Kruse and Satpreet Singh.

  10. Apr 11, 2019

    Glad to see our work making it to modern times . 👏 to for the clean Lasagne/Theano example back in 2015.

  11. Retweeted
    Feb 15, 2019

    Is it possible to store multiple neural network models within a *single* set of parameters? Yes, no... maybe? Check out --

  12. Jan 31, 2019

    Finally, a definition of disentangling that isn't purely statistical... (Higgins, Amos, , et al.)

  13. Jan 7, 2019
  14. Dec 19, 2018

    Amazing how far adding a little bit of structure goes: "from the previous state-of-the-art of 22% to an unprecedented 74%"

  15. Nov 21, 2018

    When authors fail to do a proper literature review:

  16. Nov 17, 2018
  17. Nov 12, 2018

    2012: Using GPUs for Deep Learning
    2018: Using Deep Learning for GPUs
    "Differentiable Monte Carlo Ray Tracing through Edge Sampling"

  18. Aug 2, 2018

    OpenAI Five till Sunday:

  19. Jul 3, 2018

    Halfway through 2018, still my favorite lecture. One takeaway: Fish matrix: the matrix...but for fish.

  20. Jun 14, 2018

    What I imagine most of my parameters are doing when training a (generative) adversarial network:

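The pinned tweet and items 3 and 11 above refer to the "Superposition of many models into one" work. As a minimal, hypothetical sketch of the general idea (not necessarily the paper's exact construction), the NumPy snippet below stores two weight matrices in a single parameter tensor by binding each with a random ±1 context vector and summing; unbinding with the same context recovers that model's weights up to zero-mean interference from the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent "models": here simply two weight matrices of the same shape.
W = [rng.standard_normal((256, 256)) for _ in range(2)]

# One random +/-1 context vector (binding key) per model -- an illustrative
# choice, not necessarily the construction used in the paper.
c = [rng.choice([-1.0, 1.0], size=256) for _ in range(2)]

# Superpose both models into a single parameter matrix: bind each model's
# weights with its context (column-wise sign flips) and sum the results.
W_super = sum(Wk * ck for Wk, ck in zip(W, c))

# Retrieve model 0 by unbinding with its own context. The other model appears
# only as zero-mean interference, so the recovered weights correlate clearly
# with W[0] (about 0.7 here) and negligibly with W[1].
W0_hat = W_super * c[0]
print("corr with W[0]:", np.corrcoef(W0_hat.ravel(), W[0].ravel())[0, 1])
print("corr with W[1]:", np.corrcoef(W0_hat.ravel(), W[1].ravel())[0, 1])
```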

