Prakhar Agarwal

@prakharaga19

Machine Learning/AI. Bioinformatics. Currently in 3rd year. Have interned at DRDO and goIbibo

Lucknow, India
Joined July 2011

Tweets


  1. Feb 2

    Voice-controlled Harry Potter's "Marauder's Map". Such things make the web awesome. 🦄

  2. Retweeted
    Jan 28

    me, browsing fleabay: oh neat cheap ns delay line module. sure why not. wonder how it works *it arrives* cool let's check it out! ... i'm not sure what i expected, lmfao

  3. Retweeted
    Jan 25
  4. Retweeted
    Mar 27, 2018

    Not everything was perfect in 1995, but I think we've lost something on the way. Some remarks: 1) Underlined letters indicate keyboard shortcuts. How handy! 2) Design clearly says: "We are buttons!". It's easy to find the wanted button, because they have some color. 1/3

  5. Retweeted
  6. Retweeted
    Dec 19, 2019

    Today was a special day. And it goes well beyond CAA, NRC, BJP or even ideologies. Delhi stood up. Youth gathered. Everyone cared. A generation horrified of its reputation just showed the world its other side. All it needed was a purpose.

  7. Dec 16, 2019
  8. Retweeted
    Dec 16, 2019

    Nice list of references on using deep nets and models of brain and behavior, specifically using ConvNets as models of the visual cortex.

  9. Retweeted

    We gave everyone at Sticker Mule a $1,000 bonus!

  10. Retweeted
    Dec 5, 2019

    A surprising deep learning mystery: Contrary to conventional wisdom, performance of unregularized CNNs, ResNets, and transformers is non-monotonic: improves, then gets worse, then improves again with increasing model size, data size, or training time.

  11. Retweeted
    Dec 4, 2019

    Our new paper, Deep Learning for Symbolic Mathematics, is now on arXiv. We added *a lot* of new results compared to the original submission. With (1/7)

  12. Retweeted
    Dec 2, 2019
  13. Retweeted
    Dec 2, 2019
  14. Dec 2, 2019
  15. Retweeted
    Nov 26, 2019

    Introducing the SHA-RNN :) - Read alternative history as a research genre - Learn of the terrifying tokenization attack that leaves language models perplexed - Get near SotA results on enwik8 in hours on a lone GPU No Sesame Street or Transformers allowed.

    The SHA-RNN is composed of an RNN, pointer based attention, and a “Boom” feed-forward with a sprinkling of layer normalization. The persistent state is the RNN’s hidden state h as well as the memory M concatenated from previous memories. Bake at 200°F for 16 to 20 hours in a desktop sized oven.
    The attention mechanism within the SHA-RNN is highly computationally efficient. The only matrix multiplication acts on the query. The A block represents scaled dot product attention, a vector-vector operation. The operators {qs, ks, vs} are vector-vector multiplications and thus have minimal overhead. We use a sigmoid to produce {qs, ks}. For vs see Section 6.4.
    Bits Per Character (BPC) on enwik8. The single attention SHA-LSTM has an attention head on the second last layer and had batch size 16 due to lower memory use. Directly comparing the head count for LSTM models and Transformer models obviously doesn’t make sense but neither does comparing zero-headed LSTMs against bajillion headed models and then declaring an entire species dead.
  16. Nov 17, 2019

    Man pages so unhelpful that there's even an auto-generator for them.

  17. Nov 17, 2019

    Some very cool stuff. Just look at the table of contents 🤩. Bonus: It comes with exercises

  18. Retweeted
    Nov 15, 2019
  19. Nov 9, 2019
  20. Retweeted

    I've just released a fairly lengthy paper on defining & measuring intelligence, as well as a new AI evaluation dataset, the "Abstraction and Reasoning Corpus". I've been working on this for the past 2 years, on & off. Paper: ARC:

