Tweets


  1. Retweeted
    Jan 28

    Check out Meena, a new state-of-the-art open-domain conversational agent, released along with a new evaluation metric, the Sensibleness and Specificity Average, which captures basic, but important attributes for normal conversation. Learn more below!

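The Sensibleness and Specificity Average named in the tweet averages per-response human ratings of sensibleness and specificity. A minimal sketch of that arithmetic, assuming responses are labeled with boolean pairs (the label format here is illustrative, not Google's actual evaluation pipeline):

```python
# Minimal sketch of the Sensibleness and Specificity Average (SSA):
# each model response receives two binary human labels, and SSA is the
# mean of average sensibleness and average specificity.
# The label data below is made up for illustration.

def ssa(labels):
    """labels: list of (sensible, specific) boolean pairs, one per response."""
    sensibleness = sum(s for s, _ in labels) / len(labels)
    specificity = sum(p for _, p in labels) / len(labels)
    return (sensibleness + specificity) / 2

example = [(True, True), (True, False), (False, False), (True, True)]
print(ssa(example))  # 0.625  (sensibleness 0.75, specificity 0.50)
```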
  2. Jan 12
  3. Retweeted
    Nov 26, 2019

    Introducing the SHA-RNN :) - Read alternative history as a research genre - Learn of the terrifying tokenization attack that leaves language models perplexed - Get near SotA results on enwik8 in hours on a lone GPU No Sesame Street or Transformers allowed.

    The SHA-RNN is composed of an RNN, pointer based attention, and a “Boom” feed-forward with a sprinkling of layer normalization. The persistent state is the RNN’s hidden state h as well as the memory M concatenated from previous memories. Bake at 200°F for 16 to 20 hours in a desktop sized oven.
    The attention mechanism within the SHA-RNN is highly computationally efficient. The only matrix multiplication acts on the query. The A block represents scaled dot product attention, a vector-vector operation. The operators {qs, ks, vs} are vector-vector multiplications and thus have minimal overhead. We use a sigmoid to produce {qs, ks}. For vs see Section 6.4.
    Bits Per Character (BPC) on enwik8. The single attention SHA-LSTM has an attention head on the second last layer and had batch size 16 due to lower memory use. Directly comparing the head count for LSTM models and Transformer models obviously doesn’t make sense, but neither does comparing zero-headed LSTMs against bajillion-headed models and then declaring an entire species dead.
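    The attention caption above can be sketched roughly: a single head in which only the query passes through a matrix multiply, sigmoid gates qs and ks modulate the query and keys element-wise, and scaled dot-product attention runs over the memory. Shapes, initialization, and the treatment of vs below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

# Rough sketch of SHA-RNN-style single-head attention: the only matrix
# multiply acts on the query; keys and values reuse the memory directly,
# modulated by learned element-wise gates (sigmoid for qs and ks).
# All shapes and parameter values here are illustrative.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sha_attention(x, memory, W_q, qs, ks, vs):
    """x: (d,) current hidden state; memory: (T, d) stacked past states."""
    q = sigmoid(qs) * (W_q @ x)           # the only matrix multiplication
    k = sigmoid(ks) * memory              # element-wise gate on keys
    v = vs * memory                       # element-wise gate on values
    scores = k @ q / np.sqrt(len(q))      # scaled dot-product attention
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over the T memory slots
    return weights @ v                    # (d,) attended summary

d, T = 8, 5
rng = np.random.default_rng(0)
out = sha_attention(rng.normal(size=d), rng.normal(size=(T, d)),
                    rng.normal(size=(d, d)), np.zeros(d), np.zeros(d), np.ones(d))
print(out.shape)  # (8,)
```

    With zero gate logits the sigmoid gates sit at 0.5, so the sketch degenerates to plain scaled dot-product attention over the memory with a query projection.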
  4. Retweeted
    Jan 4

    My Top 10 Tweets of the Year. A thread...

  5. Retweeted
    Jan 2

    If you're learning Deep Learning - this is one of the best lists of (free) tutorials I've seen. Work through this list! It's even organized by subject!

  6. Retweeted
    Nov 21, 2019

    Here are some notes from a recent talk I gave in Berkeley about entrepreneurship. It was well received, so I thought I'd share here. Enjoy! *Choose a North Star to guide your life*

  7. Retweeted

    When someone’s arguing passionately for a position, ask them if they can explain the best argument for the other side (and then give them the option to say why their side is better) Their ability to steel-man both sides of an argument determines the integrity of their position

  8. Retweeted
    Nov 21, 2019

    This feels like a real breakthrough: Take the same basic algorithm as AlphaZero, but now *learning* its own simulator. Beautiful, elegant approach to model-based RL. ... AND ALSO STATE OF THE ART RESULTS! Well done to the team at

  9. Retweeted
    Oct 22, 2019

    This is a penetrating depiction of one of the major directional arrows of progress of our time and should be required reading/listening. on 'working like a lion' in the industrial age: "We live in an age of infinite leverage...

  10. Oct 8, 2019
  11. Aug 26, 2016

    VMock - Improve your Resume through Instant Automated Feedback. Try now

