Chaitanya Joshi

@chaitjo

Research Assistant at , working with on Graph Neural Networks and Combinatorial Problems.

Singapore
Joined July 2010

Tweets


  1. Pinned Tweet
    31 Jul 2019

    Last week, I graduated from with a Bachelor’s in CS, and had the good fortune to be chosen Valedictorian of my cohort. Through my speech, I wanted to highlight the importance of building and growing our communities, and how they shape us:

  2. 1 Feb

    These days, I'm very interested in generalization x RL x games! Also related:

  3. 1 Feb

    Yoshua Bengio talked about something I always secretly wondered: Why so many Deep Learning pioneers are French/from French speaking backgrounds! 😅

  4. Retweeted
    31 Jan

    An Opinionated Guide to ML Research: “To make breakthroughs with idea-driven research, you need to develop an exceptionally deep understanding of your subject, and a perspective that diverges from the rest of the community—some can do it, but it’s hard.”

  5. Retweeted

    Yo change my transfer history with "Almost " please 😅😂 No matter what happens we trust the process. Thank God for everything 🙏🏾 See you another time 😜

  6. Retweeted

    As far as current machine learning is concerned, generalization originates from the ability to learn the latent manifold on which the training data lies, i.e. the ability to interpolate between training samples (local generalization, by definition)

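    The "interpolate between training samples" claim above can be made concrete: local generalization means a model behaves sensibly on points lying between its training samples on the data manifold. A minimal NumPy sketch (the `interpolate` helper is my own, purely illustrative):

    ```python
    import numpy as np

    def interpolate(a, b, num=5):
        """Points on the straight line between samples a and b in feature space.

        Local generalization in the tweet's sense: a model that behaves
        sensibly on these intermediate points is, by definition,
        interpolating between its training samples.
        """
        ts = np.linspace(0.0, 1.0, num)
        return np.stack([(1 - t) * a + t * b for t in ts])

    a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
    path = interpolate(a, b)
    print(path.shape)  # (5, 2): the two endpoints plus three intermediate samples
    ```

    Real latent manifolds are curved, so practical interpolation happens in a learned latent space rather than raw feature space; the straight line here is only the simplest stand-in.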
  7. Retweeted
    25 Jan

    The Travelling Salesman Problem is one of the most well-known NP-hard problems. Concorde's solver can solve even large instances exactly or approximately.

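    Concorde itself is an external exact solver; to illustrate the problem it tackles, here is a minimal nearest-neighbour heuristic sketch (my own toy construction, not Concorde's branch-and-cut method):

    ```python
    import math

    def nearest_neighbour_tour(points):
        """Greedy TSP heuristic: repeatedly visit the closest unvisited city.

        `points` is a list of (x, y) tuples; returns a tour as a list of
        indices. This yields an approximate tour, not the exact optimum
        that a solver like Concorde computes.
        """
        n = len(points)
        unvisited = set(range(1, n))
        tour = [0]
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
            unvisited.remove(nxt)
            tour.append(nxt)
        return tour

    def tour_length(points, tour):
        """Total length of the closed tour (returning to the start)."""
        return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
    tour = nearest_neighbour_tour(cities)
    print(tour_length(cities, tour))  # the unit square's optimal tour has length 4.0
    ```

    On the unit square the greedy tour happens to be optimal; in general nearest-neighbour can be far from the optimum, which is why exact solvers like Concorde matter.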
  8. 19 Jan
  9. Retweeted
    17 Jan

    All-in-One Image-Grounded Conversational Agents, Kurt Shuster, Y-Lan Boureau. A single conversational agent that can 'see' and 'talk' -- and that does well on COCO Captions, Flickr30k, Image Chat, Personality Captions, IGC and VQA.

  10. Retweeted
    11 Jan

    On the Relationship between Self-Attention and Convolutional Layers This work shows that attention layers can perform convolution and that they often learn to do so in practice. They also prove that a self-attention layer is as expressive as a conv layer.

  11. Retweeted
    10 Jan

    Very happy to share our latest work accepted at : we prove that a Self-Attention layer can express any CNN layer. 1/5 📄Paper: 🍿Interactive website : 🖥Code: 📝Blog:

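    The intuition behind "attention can perform convolution" can be shown with a toy construction (my own illustration, not the paper's actual code): if each attention head puts all its weight on one fixed relative offset, each head gathers exactly the neighbourhood one convolution-kernel position reads, and a fixed mix of the heads reproduces the convolution.

    ```python
    import numpy as np

    def shift_attention(seq_len, offset):
        """Attention matrix whose row i puts all weight on position i + offset.

        Out-of-range queries attend to themselves (a crude stand-in for padding).
        """
        A = np.zeros((seq_len, seq_len))
        for i in range(seq_len):
            j = i + offset
            A[i, j if 0 <= j < seq_len else i] = 1.0
        return A

    seq_len, dim = 6, 4
    X = np.random.randn(seq_len, dim)

    # One "head" per relative offset {-1, 0, +1}: each head's output is the
    # input gathered at a fixed shift -- what a width-3 conv kernel reads.
    heads = [shift_attention(seq_len, o) @ X for o in (-1, 0, 1)]

    # A conv with kernel w then corresponds to a fixed linear mix of the heads.
    w = [0.25, 0.5, 0.25]
    conv_like = sum(wi * h for wi, h in zip(w, heads))

    # Interior positions match a direct width-3 convolution over X.
    direct = w[0] * X[0:4] + w[1] * X[1:5] + w[2] * X[2:6]
    print(np.allclose(conv_like[1:5], direct))  # True
    ```

    The paper's actual result is stronger: with relative positional encodings and enough heads, a trained self-attention layer can express (and often learns) such localized patterns on its own.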
  12. 30 Dec 2019

    New pre-print: Drawings/sketches can be seen as a set of points and strokes. But 'vanilla' Transformers from NLP are unable to learn good representations. We inject domain knowledge + useful inductive bias into Transformers through sketch-specific graphs.

  13. 26 Dec 2019

    Reformer, overcoming the memory explosion of `batch_size x seq_len x seq_len`:

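    The `batch_size x seq_len x seq_len` score tensor is the memory bottleneck the tweet refers to; a quick back-of-envelope calculation (assuming fp32 and a single head, for simplicity) shows why full attention breaks down at Reformer-scale sequence lengths:

    ```python
    def attention_score_bytes(batch_size, seq_len, bytes_per_float=4):
        """Memory for the full batch_size x seq_len x seq_len score tensor."""
        return batch_size * seq_len * seq_len * bytes_per_float

    # The quadratic seq_len term dominates: at 64K tokens the scores alone
    # exceed any single accelerator's memory.
    for seq_len in (512, 4096, 65536):
        gib = attention_score_bytes(8, seq_len) / 2**30
        print(f"seq_len={seq_len:>6}: {gib:8.2f} GiB")
    ```

    Reformer's LSH attention avoids materializing this tensor by only comparing queries and keys that hash into the same bucket, bringing the quadratic cost down to roughly O(seq_len log seq_len).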
  14. Retweeted
    25 Dec 2019

    Someone told me once: You can do AI or you can just talk about it.

  15. 23 Dec 2019

    The worlds of GNNs and NLP coming together.

  16. 13 Dec 2019

    I think it was a highlight for all of us as well! Thank you for the inspiration, :)

  17. 4 Dec 2019

    Really enjoy 's work. Especially the ideas on generalization, which are similar to how high-school students practice simple integrals and can then solve more challenging, complex ones in examinations!

  18. Retweeted
    3 Dec 2019

    My second blog post is out! As with my first blog post, I experiment with to see what happens when you mask out the unimportant parts of an image. Will state-of-the-art neural networks still achieve high accuracy? Take a look to find out!

  19. 2 Dec 2019

    Thread on the history of super-resolution zoom. It's amazing how much research goes into each small feature in modern technology!

  20. Retweeted
    1 Dec 2019

    More papers added recently. Now the papers can be sorted by both topics and conferences.

  21. Retweeted
    28 Nov 2019

    Excited to share our work on Contrastive Learning of Structured World Models! C-SWMs learn object-factorized models & discover objects without supervision, using a simple loss inspired by work on graph embeddings Paper: Code: 1/5

