Jean-Baptiste Cordonnier

@jb_cordonnier

PhD student 🇨🇭 supervised by Prof. Martin Jaggi. Interested in deep learning on graphs, optimization, and NLP. Mountain lover 🏔

Joined November 2019

Tweets


  1. Pinned Tweet
    Jan 10

    Very happy to share our latest work accepted at : we prove that a self-attention layer can express any CNN layer. 1/5 📄 Paper: 🍿 Interactive website: 🖥 Code: 📝 Blog:

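    A minimal sketch of the claim (my illustration, not the authors' code), assuming PyTorch: hard-code each attention head to a one-hot distribution on one fixed pixel shift, one head per kernel position, and the concatenated head outputs followed by the output projection reproduce a 3x3 convolution exactly.

        import torch
        import torch.nn.functional as F

        torch.manual_seed(0)
        H = W = 6                 # image size
        C_in, C_out, K = 4, 8, 3  # channels and kernel size

        x = torch.randn(1, C_in, H, W)
        conv = torch.nn.Conv2d(C_in, C_out, K, padding=K // 2, bias=False)

        # One head per kernel position: head (dy, dx) attends with a
        # one-hot distribution to the pixel at that shift from the query.
        shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        padded = F.pad(x, (1, 1, 1, 1))   # zero padding, matching the conv
        heads = [padded[:, :, 1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
                 for dy, dx in shifts]    # each head outputs the shifted image
        concat = torch.cat(heads, dim=1)  # concatenated heads: (1, 9*C_in, H, W)

        # The output projection carries the conv weights, one KxK position per head.
        w_out = conv.weight.permute(0, 2, 3, 1).reshape(C_out, K * K * C_in)
        y_attn = torch.einsum("oc,bchw->bohw", w_out, concat)

        print(torch.allclose(y_attn, conv(x), atol=1e-5))  # True: identical outputs

    In the paper the one-hot attention is not hard-coded but emerges from a quadratic relative positional score; see the 2/5 tweet below.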
  2. Retweeted
    Jan 17

    Our Square Attack achieves the 1st place on the Madry et al. MNIST challenge! Remarkably, this is the *only* black-box attack in the leaderboard and it performs better than all other submitted *white-box* attacks. Code of the attack is publicly available:

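    A toy sketch of the idea behind Square Attack (random search with localized square-shaped updates), assuming PyTorch. This is a simplification, not the released implementation, which among other things uses a margin loss, stripe initialization, and a schedule for the square size.

        import torch

        @torch.no_grad()  # black-box: only queries model outputs, no gradients
        def square_attack(model, x, y, eps=0.3, n_iters=1000, p=0.1):
            """Toy L-inf random-search attack with square-shaped updates."""
            loss_fn = torch.nn.CrossEntropyLoss()
            x_adv = (x + eps * torch.sign(torch.randn_like(x))).clamp(0, 1)
            best = loss_fn(model(x_adv), y)
            h, w = x.shape[-2:]
            s = max(1, int(round((p * h * w) ** 0.5)))  # square side length
            for _ in range(n_iters):
                r = torch.randint(h - s + 1, (1,)).item()
                c = torch.randint(w - s + 1, (1,)).item()
                cand = x_adv.clone()
                # One random sign per channel, constant over the square patch.
                delta = eps * torch.sign(torch.randn(x.shape[1], 1, 1))
                cand[..., r:r + s, c:c + s] = (x[..., r:r + s, c:c + s] + delta).clamp(0, 1)
                loss = loss_fn(model(cand), y)
                if loss > best:  # random search: keep only loss-increasing updates
                    x_adv, best = cand, loss
            return x_adv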
  3. Jan 10

    I deeply appreciated the engagement of the reviewers (esp. the critiques from Reviewer 3). I am thankful to and Martin Jaggi for their support at . My Ph.D. is funded by , Andreas is supported by . Addis Ababa, here we come! 🇪🇹🎉 5/5

  4. Jan 10

    The interactive website displays the attention maps: some heads ignore content and attend to pixels at *fixed* shifts (confirming the theory), sliding a grid-like receptive field; other heads seem to use content-based attention -> an expressive advantage over CNNs. 🍿 4/5

  5. Jan 10

    Our work explains the recent success of Transformer architectures applied to vision: Attention Augmented Convolutional Networks, et al., 2019, and Stand-Alone Self-Attention in Vision Models, Ramachandran et al., 2019. 3/5

  6. Jan 10

    Two *necessary* conditions (often met in practice): (a) multiple heads, e.g. a 3x3 kernel requires 9 heads; (b) relative positional encoding to allow translation invariance. Each head can attend to the pixel at a fixed shift from the query pixel, the shifts together forming the kernel's receptive field (see the sketch below). 2/5

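    A small sketch, assuming PyTorch, of the mechanism behind condition (b): with a quadratic relative positional score, the softmax concentrates on the key at one fixed shift as the sharpness grows (variable names are mine, not the paper's notation).

        import torch

        H = W = 5
        delta = torch.tensor([1.0, -1.0])  # the fixed shift this head selects
        alpha = 50.0                       # sharpness; softmax -> one-hot as it grows

        # Relative positions r = key - query, for the query at the image center.
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        rel = torch.stack([ys - H // 2, xs - W // 2], dim=-1).float()

        # Content is ignored: the score depends only on the relative position,
        # peaking where r == delta.
        scores = -alpha * ((rel - delta) ** 2).sum(dim=-1)
        attn = torch.softmax(scores.flatten(), dim=0).reshape(H, W)
        print(attn)  # probability ~1 at the pixel shifted by delta, ~0 elsewhere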
  7. Retweeted
    Dec 27, 2019

    New work "what graph neural networks cannot learn: depth vs width" (ICLR20) studies the expressive power of GNNs: it provides sufficient conditions for universality and shows that many classical problems are impossible when depth x width < c·n^p. Blogpost:

  8. We have just launched the AutoTrain challenge at . Submit an optimizer achieving the target test performance on a wide variety of (unknown) models/tasks without human tweaking 🧙🏻‍♂️ Get started here:

