Tweets

You have blocked @KloudStrife

Are you sure you want to view these Tweets? Viewing Tweets won't unblock @KloudStrife.

  1. Feb 2

    Hang on to your models, the new Nvidia GPUs are expected to be 70 to 75% faster than the current generation. Source:

  2. Jan 31

    Towards identifying tree search in rats' brains?

  3. Retweeted
    Jan 31

    The Golden Gate STEM Fair is the Bay Area's regional high school science fair. It touches the lives of thousands of local kids and is at risk this year (again) of not happening due to lack of funds. Can we raise $15k to save it? (retweets appreciated!)

  4. Retweeted
    Jan 28

    New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: Blog:

  5. Retweeted
    Jan 21
    Replying to

    brought to my attention this old gem by . It explains backprop using the Lagrangian formalism, which (more precisely, its continuous version) is common in optimal control theory (as discussed in class).

  6. Retweeted
    Jan 24

    Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures. (arXiv:2001.08370v1 [cs.LG])

  7. Retweeted
    Jan 22

    World, please meet the first prototype of my new online, open-access, interactive book: "The Climate Laboratory"

  8. Retweeted
    Jan 23

    Q-learning is difficult to apply when the number of available actions is large. We show that a simple extension based on amortized stochastic search allows Q-learning to scale to high-dimensional discrete, continuous or hybrid action spaces:

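The paper isn't linked here, but the amortized-search idea in the tweet above can be sketched in a few lines of numpy. Everything in this snippet is illustrative: `q_value` is a toy stand-in for a learned Q-function, and the fixed Gaussian stands in for a learned proposal network.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_value(state, actions):
    # Toy stand-in for a learned Q-function: Q(s, a) peaks at a = -s.
    return -np.sum((actions + state) ** 2, axis=-1)

def amortized_max_q(state, proposal_mean, n_samples=64, noise=0.3):
    """Approximate argmax_a Q(s, a) over a high-dimensional action space:
    sample candidate actions from a proposal distribution (here a Gaussian
    standing in for a learned proposal network), evaluate Q on the
    candidates, and keep the best one."""
    samples = proposal_mean + noise * rng.standard_normal(
        (n_samples, proposal_mean.shape[-1]))
    # Always include the proposal mean itself, so the search never returns
    # anything worse than the proposal's own guess.
    candidates = np.vstack([proposal_mean, samples])
    values = q_value(state, candidates)
    return candidates[np.argmax(values)]

state = np.array([0.5, -0.2])
best = amortized_max_q(state, proposal_mean=np.zeros(2))
```

The point of the construction is that the cost of the max is decoupled from the size of the action space: only `n_samples` Q-evaluations are needed, whether the actions are discrete, continuous, or hybrid.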
  9. Retweeted
    Jan 22

    We are releasing a well-tuned and miniature implementation of Soft Actor-Critic () together with : . We test it on many continuous control tasks from the Control Suite and report the following results:

  10. Retweeted
    Jan 22

    FixMatch: focusing on simplicity for semi-supervised learning and improving the state of the art (CIFAR-10: 94.9% with 250 labels, 88.6% with 40). Collaboration with Kihyuk Sohn, Nicholas Carlini

  11. Jan 19

    As ML twitter gets ever more adversarial (and I therefore tweet less and less), let's remind ourselves of that old family saying: 'the critic descends, but the actor ascends.'

  12. Retweeted
    Jan 15

    We worked with to show that distributional RL, a recent development in AI research, can provide insight into previously unexplained elements of dopamine-based learning in the brain. Read the blog: (2/2)

  13. Jan 15

    First artificial creatures (<1mm) from stem cells, based on designs pre-selected by a genetic algorithm.

  14. Retweeted
    Jan 14

    Exciting start of the year for theory of ! SGD on neural nets can: 1) simulate any other learning algorithm with some poly-time init [Abbe & Sandon] 2) efficiently learn hierarchical concept classes [ & Y. Li]

  15. Retweeted
    Jan 13

    What an elegant idea: Choosing the Sample with Lowest Loss makes SGD Robust: "in each step, first choose a set of k samples, then from these choose the one with the smallest current loss, and do an SGD-like update with this chosen sample"

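The rule quoted above is simple enough to sketch end to end. Here is a toy numpy version on a least-squares problem with corrupted labels; the data, model, and hyperparameters (`k`, `lr`, `steps`) are all illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: a linear model plus a handful of gross outliers.
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.standard_normal(200)
y[:10] += 50.0  # corrupt 5% of the labels

def min_loss_sgd(X, y, k=8, lr=0.02, steps=3000):
    """At each step, draw k candidate samples, keep the one with the
    smallest *current* loss, and take a plain SGD step on it."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        idx = rng.choice(len(y), size=k, replace=False)
        losses = (X[idx] @ w - y[idx]) ** 2          # current per-sample losses
        i = idx[np.argmin(losses)]                   # least-lossy candidate
        w -= lr * 2.0 * (X[i] @ w - y[i]) * X[i]     # SGD step on that sample
    return w

w = min_loss_sgd(X, y)
```

The robustness comes for free: a corrupted sample almost always carries a large current loss, so it is almost never the argmin among the k candidates, and the update stream is effectively filtered down to clean samples.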
  16. Retweeted
    Jan 13

    Happy to be an invited speaker @ the Alan Turing Institute in the British Institute in London for what seems to be a fantastic event with a great list of speakers, "Statistics and Computation". The event is live on

  17. Retweeted
    Jan 10

    A tour de force by Abbe & Sandon: "Any function/distribution that can be learned from samples in poly-time can also be learned by a poly-size neural net trained with SGD on a poly-time initialization with poly-steps" + "[this] does not hold for GD"

  18. Retweeted
    Jan 11

    On the Relationship between Self-Attention and Convolutional Layers. This work shows that attention layers can perform convolution and that they often learn to do so in practice. They also prove that a self-attention layer is as expressive as a conv layer.

  19. Retweeted
    Jan 8

    Fenchel-Rockafellar duality is a powerful tool that more people should be aware of, especially for RL! Straightforward applications of it enable off-policy evaluation and off-policy policy gradients/imitation learning, among others

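For readers who haven't met it, the duality the tweet refers to is the standard one, stated here from memory under the usual convexity and constraint-qualification assumptions:

```latex
% Fenchel-Rockafellar duality: for proper convex f, g and a linear map A,
% under a suitable constraint qualification,
\inf_{x} \, \bigl[ f(x) + g(Ax) \bigr]
  \;=\; \sup_{y} \, \bigl[ -f^{*}(A^{\top} y) - g^{*}(-y) \bigr],
\qquad \text{where } f^{*}(y) = \sup_{x} \, \langle x, y \rangle - f(x).
```

In the RL applications alluded to, the primal is typically a regularized linear program over state-action occupancies, and the dual variables turn into learnable value-like functions, which is what makes off-policy estimation tractable.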
  20. Retweeted
    Jan 6

    To jump-start the new year, a blog post on geometric series.

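The blog post itself isn't linked, but for reference, the two standard closed forms any treatment of geometric series revolves around:

```latex
\sum_{k=0}^{n-1} r^{k} \;=\; \frac{1 - r^{n}}{1 - r} \quad (r \neq 1),
\qquad
\sum_{k=0}^{\infty} r^{k} \;=\; \frac{1}{1 - r} \quad (|r| < 1).
```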
