Mark

@yieldthought

Principal ML Researcher at Arm. Mark believes in dynamic typing, first-class functions and the immortal essence of the human soul. Tea is nice too.

Lübeck, Germany
Joined January 2010

Tweets

  1. Retweeted
    26 Jan

    I just realized that my new method wasn't working well for the reason I thought it was.😖 One fundamental component was not doing anything!🤦‍♀️ Now I have to figure out why this thing even works... Then, actually try the idea I wanted to implement. 😬

  2. Retweeted
    23 Jan
  3. Retweeted
    24 Jan

    this motherfucker can’t even make a logo without stealing the damn thing too

  4. Retweeted
    20 Jan

    Given the smoothness of videos, can we learn models more efficiently than with ? We present Sideways - a step towards a high-throughput, approximate backprop that considers the one-way direction of time and pipelines forward and backward passes.

  5. 14 Jan
  6. Retweeted
    12 Jan

    The funny thing about AIDungeon is due to all the GPT-2 pre-training on swaths of random web text, you can create an adventure game about literally anything. Here's one about installing OpenCV from source:

  7. Retweeted

    Sigh. 1.6 kg of CO2 = 3.5 kWh with the US generation mix, so to consume that in a half hour implies that end-to-end streaming consumes 7000 watts, which is off by well over an order of magnitude. We can quibble about accounting with 2x, but 10x is just BS.

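    A quick sanity check of the arithmetic in the retweet above. This is a minimal sketch: the grid-intensity figure (~0.457 kg CO2 per kWh) is simply the ratio implied by the tweet's own 1.6 kg ≈ 3.5 kWh conversion, not an independent source.

    ```python
    # Grid intensity implied by the tweet: 1.6 kg CO2 for 3.5 kWh of electricity.
    KG_CO2_PER_KWH = 1.6 / 3.5  # ~0.457 kg CO2 per kWh (US generation mix)

    co2_kg = 1.6                             # claimed emissions per half hour of streaming
    energy_kwh = co2_kg / KG_CO2_PER_KWH     # energy that would produce those emissions
    hours = 0.5                              # the claimed time window
    power_watts = energy_kwh / hours * 1000  # average power draw the claim implies

    print(f"{energy_kwh:.1f} kWh in {hours} h -> {power_watts:.0f} W")
    # -> 3.5 kWh in 0.5 h -> 7000 W
    ```

    7 kW of continuous draw for end-to-end video streaming is indeed implausibly high, which is the tweet's point.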
  8. Retweeted
    7 Jan

    this is absolutely nuts. The AI GPT-2 has learned to play chess moderately well (able to give bad human amateurs a game) – despite only being a text AI, learning from a corpus of chess notation text, and not having any concept of what a chessboard is

  9. 31 Dec 2019

    Training an agent with access to privileged ground truth information, then using that as a teacher for one that doesn't: very nice, reminds me of AlphaStar training with enemy vision and build vector but deploying without either. Want to try it!

  10. Retweeted
    30 Dec 2019

    I agree with this article that AI Dungeon 2 (the GPT-2 powered text-based game of infinite possibilities) is "one of the coolest video game experiments of 2019". You should definitely check it out: . Great work !

  11. Retweeted

    In the early hours of Aug. 21, 2017, the USS John S. McCain took an unexpected left turn & collided with a tanker. 10 sailors lost their lives. In an internal investigation, the navy blamed the destroyer’s crew. But that’s not the whole story. Not even close. (THREAD)

  12. Retweeted
    23 Dec 2019

    Whoa … the stock is so high lol

  13. Retweeted
    13 Dec 2019
  14. Retweeted

    I'm sometimes called "political". But I've never supported any political party, politician or ideology. I communicate the science and the risks of failing to act on it. And the fact that the politics needed don't exist today, neither to the right, left nor center. ->

  15. Retweeted
    9 Dec 2019

    Unsupervised pre-training now outperforms supervised learning on ImageNet for any data regime (see figure) and also for transfer learning to Pascal VOC object detection

  16. Retweeted
    5 Dec 2019

    Meet the 'double descent' phenomenon. After we figure it out we should probably rewrite the book chapter on bias-variance tradeoff.

  17. Retweeted
    3 Dec 2019

    Big announcement - EC2 6th Gen:
    * New Graviton2 CPU
    * Perf > x86 instances
    * Kills on price/performance
    Based on N1 SiArch - first I've seen of N1 in the market (living up to hype). Best coverage from here -

  18. 4 Dec 2019

    😲 I love DeepMind's new open-source policy, even if it was forced for conference acceptance. Reading the (excellent) CURL paper I was very curious how it would work for RL representation learning. The source is all there and works out-of-the-box!

  19. Retweeted
    2 Dec 2019

    . kicks off our TinyML Application Development for Everyone workshop at .

  20. Retweeted
    27 Nov 2019

    Well, I spent much of the last two days reading the 451 pages in these leaked documents on the US-UK trade talks, and I can tell you they clearly show negotiators spending hours on pharmaceutical patents and the NHS. They're on the table, regardless of what Boris Johnson says.

