Roman Rumin

@trurom

Specialist Degree in Software and Information System Administration / Institute of Mathematics, Economics and Informatics of Irkutsk State University (IMEI ISU)

Joined February 2011

Tweets

  1. Retweeted
    Jan 21

    Outstanding talk by the future of . Most definitely unlocking secrets to how our brains 🧠 work.

  2. Jan 19

    "Transformer-XH: Multi-hop question answering with eXtra Hop attention" is an interesting work.

  3. Retweeted

    How does deep learning perform DEEP learning? Microsoft and CMU researchers establish a principle called "backward feature correction" and explain how very deep neural networks can actually perform DEEP hierarchical learning efficiently:

  4. Retweeted

    A simpler, flatter neural network, closer to actual brain architecture, can produce robust performance compared to deeper, more complex networks.

  5. Jan 11

    Also, NNs can dramatically reduce the cost of producing chips: a NN that generalizes well should also adapt to imperfections in the manufacturing process. Good neural networks would also be able to solve synchronization issues themselves. Etc. 3/3

  6. Jan 11

    Also, we should not worry about some noise: the NN itself should be general enough to manage it, and the noise will force generalization. There is also no need for RAM at all; it is cheaper to use several chips without RAM than one chip with energy-hungry RAM. 2/2

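The claim above, that input noise itself forces generalization, can be illustrated with a toy regression: injecting fresh noise into the inputs at every training step acts as a regularizer. Everything below (data, learning rate, noise level) is made up for illustration; note the well-known side effect that input noise also attenuates the learned weight below the true slope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + 1 with a little observation noise.
X = rng.uniform(-1, 1, size=(256, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.05 * rng.normal(size=256)

w, b = 0.0, 0.0
lr, noise_std = 0.1, 0.3

for _ in range(500):
    # Inject fresh input noise each step; the model must fit the
    # underlying function, not any particular corrupted sample.
    Xn = X + noise_std * rng.normal(size=X.shape)
    pred = w * Xn[:, 0] + b
    err = pred - y
    w -= lr * np.mean(err * Xn[:, 0])
    b -= lr * np.mean(err)

print(round(w, 2), round(b, 2))
```

The fit is stable despite never seeing the same corrupted inputs twice, which is the "noise forces generalization" effect in miniature; the recovered slope sits below 2 because input noise is equivalent to shrinking the weights.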
  7. Jan 11

    Excellent! I am also a fan of the analog approach. There is no need for analog-to-digital conversions if we make 1-bit-precision chips, and we can use huge gated buses instead of addressing. If the chip is cold enough, then several or more layers of connectivity can be made. 1/2

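A minimal numpy sketch of the 1-bit idea in this thread, in the spirit of binarized networks such as BinaryConnect/XNOR-Net (the per-tensor scale and the toy shapes are assumptions, not anything from the thread): with weights reduced to ±1 times one shared scale, a matrix multiply becomes pure adds and subtracts, which is what makes cheap 1-bit analog or digital hardware plausible.

```python
import numpy as np

rng = np.random.default_rng(1)

def binarize(w):
    """1-bit quantization: keep only the sign of each weight,
    plus one shared scale so magnitudes stay roughly right."""
    alpha = np.mean(np.abs(w))          # per-tensor scale
    return alpha * np.sign(w)

x = rng.normal(size=(4, 16))            # a batch of activations
w = rng.normal(size=(16, 8)) * 0.1      # full-precision weights

y_full = x @ w
y_bin = x @ binarize(w)                 # multiply-free in hardware:
                                        # signs just select add vs. subtract

# The 1-bit layer tracks the full-precision one only approximately.
corr = np.corrcoef(y_full.ravel(), y_bin.ravel())[0, 1]
print(round(corr, 2))
```

The output correlation stays well above chance, which is why sign-only weights can work at all; whether the residual error is tolerable is exactly the generalization bet made in the thread.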
  8. Retweeted
    Oct 20, 2019

    ProxylessNAS is available on PyTorch Hub. It takes only two lines of code to use. The search code is also open-sourced on GitHub.

  9. Retweeted
    Oct 31, 2019

    Our work on Visual Wake Words Challenge is highlighted by Google. The technique we used is

  10. Retweeted
    Oct 9, 2019

    “The MIT-IBM researchers designed a temporal shift module, which gives the model a sense of time passing w/out explicitly representing it. In tests, the method was able to train the deep-learning, video recognition AI 3x faster than existing methods.”

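The temporal shift module mentioned above is simple enough to sketch: shift a small fraction of channels one step forward in time and another fraction one step backward, so each frame's features mix with its neighbours' at essentially zero compute cost. The 1/8-per-direction split and zero-padding below follow my reading of the TSM paper; the toy tensor is made up.

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Shift a fraction of channels along the time axis.
    x: (N, T, C, H, W) video features; zero-padding at the ends."""
    n, t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:, 1:, :fold] = x[:, :-1, :fold]                   # shift forward in time
    out[:, :-1, fold:2 * fold] = x[:, 1:, fold:2 * fold]   # shift backward
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # rest untouched
    return out

x = np.arange(2 * 4 * 8 * 1 * 1, dtype=float).reshape(2, 4, 8, 1, 1)
y = temporal_shift(x)
# Frame t now sees channel-0 features from frame t-1, for free:
# the "sense of time passing" without representing time explicitly.
print(y.shape)
```

Because the shift is a pure memory operation, a 2D backbone with this module inserted learns temporal structure without 3D convolutions, which is the speedup the tweet refers to.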
  11. Retweeted
    Nov 22, 2019

    Our approach incrementally learns a mixture latent space, incorporating dynamic expansion to capture new concepts, and mixture generative replay to avoid forgetting previous ones. Work by

  12. Retweeted
    Nov 27, 2019

    Humans perform “mental time travel” across memories for goal-directed decisions. Our new algorithm, also based on episodic memory retrieval, enables AI agents to perform long-term credit assignment. Paper: Code:

  13. Jan 6

    I like the way Yoshua Bengio (and others) do the data processing: they take a few steps with RIMs instead of one simple pass. They continue to explore this idea, and it will be very interesting to see where it goes.

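A crude sketch of the multi-step flavour of RIMs (Recurrent Independent Mechanisms) mentioned above: several small independent recurrent modules, of which only the top-k most input-relevant ones update on each of several passes, instead of one big network doing a single pass. Everything below (the relevance score, sizes, weights) is a made-up stand-in, not the actual RIMs architecture, which uses learned attention for both input selection and module communication.

```python
import numpy as np

rng = np.random.default_rng(2)

n_modules, dim, k, steps = 4, 8, 2, 3

states = rng.normal(size=(n_modules, dim)) * 0.1
W_in = rng.normal(size=(n_modules, dim, dim)) * 0.1    # per-module input weights
W_rec = rng.normal(size=(n_modules, dim, dim)) * 0.1   # per-module recurrence
x = rng.normal(size=dim)                               # one input

for _ in range(steps):
    # Crude stand-in relevance score: how strongly each module's state
    # responds to the projected input (RIMs use learned attention here).
    scores = np.array([abs(s @ (W @ x)) for s, W in zip(states, W_in)])
    active = np.argsort(scores)[-k:]       # only the top-k modules update
    for i in active:
        states[i] = np.tanh(W_in[i] @ x + W_rec[i] @ states[i])

print(states.shape)
```

The point of the sparse, repeated updates is that different modules can specialize on different factors of the input while the rest stay untouched, rather than everything being entangled in one pass.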
  14. Retweeted
    Jul 29, 2019

    Our paper HoloGAN was accepted to ! We show that HoloGAN automatically learns a disentangled 3D representation from natural images. NO pose labels, NO 3D shapes, NO multiple views, ONLY 2D images! Video:

  15. Retweeted
    Oct 31, 2019

    Meta Reinforcement Learning is good at adaptation to very similar environments. But can we meta-learn general RL algorithms? Our new approach MetaGenRL is able to. With and Paper: Blog:

  16. Retweeted
    Dec 4, 2019

    We introduce Dreamer, an RL agent that solves long-horizon tasks from images purely by latent imagination inside a world model. Dreamer improves over existing methods across 20 tasks. paper code Thread 👇

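The "latent imagination" idea above can be sketched with stand-in models: once a latent dynamics model and a reward head have been learned, candidate action sequences can be scored entirely inside the model, never touching the real environment. Dreamer actually trains an actor-critic by backpropagating through the learned dynamics; the random-shooting planner below is a simplification, and all weights are random placeholders for learned networks.

```python
import numpy as np

rng = np.random.default_rng(3)

dim, horizon = 6, 10
A = rng.normal(size=(dim, dim)) * 0.3     # stand-in latent dynamics
B = rng.normal(size=(dim, 1)) * 0.3       # stand-in action effect
r = rng.normal(size=dim)                  # stand-in reward head

def imagine_return(s0, actions):
    """Roll out an action sequence purely in latent space and sum
    the predicted rewards -- no environment interaction at all."""
    s, total = s0.copy(), 0.0
    for a in actions:
        s = np.tanh(A @ s + B[:, 0] * a)  # imagined next latent state
        total += r @ s                    # predicted reward
    return total

s0 = rng.normal(size=dim)
plans = rng.uniform(-1, 1, size=(32, horizon))       # candidate action plans
returns = np.array([imagine_return(s0, p) for p in plans])
best = plans[np.argmax(returns)]                     # best imagined plan
print(best.shape)
```

Because every rollout happens inside the world model, long horizons cost only model evaluations, which is what makes solving long-horizon tasks "purely by latent imagination" tractable.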
  17. Retweeted
    Dec 13, 2019

    Dota 2 with Large Scale Deep Reinforcement Learning Via

  18. Retweeted

    Really proud to see published in . Playing the full game of StarCraft II with a pro-approved interface, the system ranked higher than 99.8% of all players – a fantastic achievement! Read our paper here:

  19. Retweeted
    Oct 15, 2019
    Replying to

    Sometimes we were unaware that our robot was partially broken, because the neural network could compensate for it. The model worked just fine with broken fingers or defective sensors.

  20. Retweeted
    Oct 23, 2019

    Cool to see the discussion of our multiagent work at the top of /r/programming:

