Pinned Tweet
Very happy to share our latest work accepted at #ICLR2020: we prove that a Self-Attention layer can express any CNN layer. 1/5
Paper: https://openreview.net/pdf?id=HJlnC1rKPB
Interactive website: https://epfml.github.io/attention-cnn/
Code: https://github.com/epfml/attention-cnn
Blog: http://jbcordonnier.com/posts/attention-cnn/
Jean-Baptiste Cordonnier Retweeted
Our Square Attack achieves the 1st place on the Madry et al. MNIST challenge! Remarkably, this is the *only* black-box attack in the leaderboard and it performs better than all other submitted *white-box* attacks. Code of the attack is publicly available: https://github.com/max-andr/square-attack/
I deeply appreciated the engagement of the reviewers (esp. the critique from Reviewer 3). I am thankful to @loukasa_tweet and Martin Jaggi for their support at @epfl_en. My Ph.D. is funded by @SDSCdatascience, Andreas is supported by @snsf_ch. Addis Ababa, here we come! 5/5
The interactive website displays attention maps:
- some heads ignore content and attend to pixels at *fixed* shifts (confirming the theory), sliding a grid-like receptive field,
- some heads seem to use content-based attention -> an expressive advantage over CNNs.
http://epfml.github.io/attention-cnn 4/5
Our work explains the recent success of Transformer architectures applied to vision:
Attention Augmented Convolutional Networks. @IrwanBello et al., 2019. https://arxiv.org/abs/1904.09925
Stand-Alone Self-Attention in Vision Models. Ramachandran et al., 2019. https://arxiv.org/abs/1906.05909 3/5
Two *necessary* conditions (often met in practice): (a) multiple heads, e.g. a 3x3 kernel requires 9 heads, (b) relative positional encoding to allow translation invariance. Each head can attend to pixels at a fixed shift from the query pixel, together forming the kernel's receptive field. 2/5
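The construction in 2/5 can be sketched numerically. The following is a hypothetical NumPy toy (not the paper's code): each of 9 heads uses a purely positional score, as a relative positional encoding can realize, so its softmax collapses onto the key pixel at one fixed shift from the query; summing the heads' value projections then reproduces a 3x3 convolution. All sizes and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C_in, C_out = 6, 6, 4, 5
N = H * W
shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # one head per 3x3 offset
X = rng.normal(size=(H, W, C_in))
kernel = rng.normal(size=(9, C_in, C_out))  # flattened 3x3 conv weights

# Reference: direct 3x3 convolution (zero padding, stride 1).
Xp = np.pad(X, ((1, 1), (1, 1), (0, 0)))
conv = np.zeros((H, W, C_out))
for (dy, dx), Wk in zip(shifts, kernel):
    conv += Xp[1 + dy:1 + dy + H, 1 + dx:1 + dx + W] @ Wk

# Attention view: head k's score depends only on the relative position of the
# key (no content term); with a large scale the softmax concentrates all mass
# on the key at the head's fixed shift (dy, dx) from the query.
scale = 100.0
Xf = X.reshape(N, C_in)
attn_out = np.zeros((N, C_out))
for (dy, dx), Wk in zip(shifts, kernel):
    scores = np.empty((N, N))
    for q in range(N):
        qy, qx = divmod(q, W)
        ty, tx = qy + dy, qx + dx  # target pixel for this head
        for k in range(N):
            py, px = divmod(k, W)
            scores[q, k] = -scale * ((py - ty) ** 2 + (px - tx) ** 2)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)          # near one-hot attention per query
    out = A @ Xf @ Wk                          # attend, then value projection
    for q in range(N):                         # mimic the conv's zero padding
        qy, qx = divmod(q, W)
        if not (0 <= qy + dy < H and 0 <= qx + dx < W):
            out[q] = 0.0
    attn_out += out

print(np.allclose(conv, attn_out.reshape(H, W, C_out)))  # True
```

The border handling is the one place where the analogy needs care: a query whose shifted pixel falls outside the image is zeroed to match the convolution's zero padding.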
Jean-Baptiste Cordonnier Retweeted
New work "What graph neural networks cannot learn: depth vs width" (ICLR20) studies the expressive power of GNNs: it provides sufficient conditions for universality and shows that many classical problems are impossible when depth x width < c n^p. Blogpost: https://andreasloukas.blog/2019/12/27/what-gnn-can-and-cannot-learn/
We have just launched the AutoTrain challenge at @appliedmldays. Submit an optimizer achieving target test performance on a wide variety of (unknown) models/tasks without human tweaking.
Get started here: http://epfml.github.io/autoTrain/ https://twitter.com/fpedregosa/status/1204247422982950912
supervised by Prof. Martin Jaggi. Interested in deep learning on graphs, optimization and NLP. Mountain lover 