Media
I had another conversation with Meena just now. It's not as funny and I don't understand the first answer. But the replies to the next two questions are quite funny. pic.twitter.com/lpOZpsvDck
-
My favorite conversation is below. The Hayvard pun was funny but I totally missed the steer joke at the end until it was pointed out today by @Blonkhart. pic.twitter.com/AmTobwf9A0
-
New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: https://arxiv.org/abs/2001.09977 Blog: https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html pic.twitter.com/5SOBa58qx3
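The "perplexity is all a chatbot needs" line refers to the paper's main finding: test perplexity correlates strongly with the human-judged Sensibleness and Specificity Average (SSA). As a reminder of what is being measured, here is a minimal sketch of perplexity as the exponentiated average negative log-likelihood per token (the function and toy numbers are illustrative, not from the paper):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(average negative log-likelihood per token)."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Toy check: a model that assigns probability 0.25 to every token has
# perplexity 4 -- it is effectively choosing among 4 equally likely tokens.
print(perplexity([math.log(0.25)] * 100))  # -> 4.0
```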
-
AdvProp improves accuracy for a wide range of image models, from small to large. But the improvement seems bigger when the model is larger. pic.twitter.com/13scFaoQzB
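For context, AdvProp trains on clean and adversarial examples jointly but routes each through separate batch-norm statistics, so the two input distributions do not interfere. A minimal PyTorch-style sketch of that idea; the block structure, the one-step FGSM attack (the paper uses multi-step PGD), and all names here are illustrative stand-ins, not the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBNBlock(nn.Module):
    """Conv block with separate BN statistics for clean vs. adversarial inputs."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)
        self.bn_clean = nn.BatchNorm2d(cout)
        self.bn_adv = nn.BatchNorm2d(cout)

    def forward(self, x, adv=False):
        bn = self.bn_adv if adv else self.bn_clean
        return F.relu(bn(self.conv(x)))

def advprop_step(block, head, x, y, eps=1e-2):
    # 1) Craft adversarial examples through the adversarial-BN branch
    #    (one-step FGSM here as a stand-in for PGD).
    x_adv = x.clone().requires_grad_(True)
    attack_loss = F.cross_entropy(head(block(x_adv, adv=True)), y)
    grad, = torch.autograd.grad(attack_loss, x_adv)
    x_adv = (x + eps * grad.sign()).detach()

    # 2) Joint loss: clean batch through clean BN, adversarial batch through adv BN.
    return (F.cross_entropy(head(block(x, adv=False)), y) +
            F.cross_entropy(head(block(x_adv, adv=True)), y))

# Toy usage.
block = DualBNBlock(3, 16)
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
advprop_step(block, head, x, y).backward()
```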
-
EfficientDet: a new family of efficient object detectors. It is based on EfficientNet and is many times more efficient than state-of-the-art models. Link: https://arxiv.org/abs/1911.09070 Code: coming soon. pic.twitter.com/2KYabAnpLL
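One ingredient behind that efficiency is BiFPN's fast normalized feature fusion, which combines multi-scale features with learned non-negative weights: O = sum(w_i * I_i) / (eps + sum(w_j)). A minimal sketch of just that fusion rule (module name, shapes, and usage are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FastNormalizedFusion(nn.Module):
    """BiFPN-style fusion: sum(w_i * f_i) / (eps + sum(w_i)), with w_i >= 0.

    ReLU keeps the learned weights non-negative and the normalization keeps
    the output scale stable, without the cost of a softmax.
    """
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, features):  # list of same-shape feature maps
        w = F.relu(self.w)
        return sum(wi * fi for wi, fi in zip(w, features)) / (self.eps + w.sum())

# Toy usage: fuse a lateral feature map with a top-down one.
fuse = FastNormalizedFusion(n_inputs=2)
out = fuse([torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)])
```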
-
I also highly recommend this nice video that explains the paper very well: https://www.youtube.com/watch?v=Y8YaU9mv_us
-
Full comparison against the state of the art on ImageNet. Noisy Student is our method. Noisy Student + EfficientNet is 11% better than your favorite ResNet-50. pic.twitter.com/BhwgJvSOYK
-
Example predictions on the robustness benchmarks ImageNet-A, C, and P. Black text shows correct predictions by our model; red text shows incorrect predictions by our baseline model. pic.twitter.com/eem6tlfyPX
-
Want to improve accuracy and robustness of your model? Use unlabeled data! Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C and P). Link: https://arxiv.org/abs/1911.04252 pic.twitter.com/0umSnX7wui
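The recipe is a self-training loop: train a teacher on labeled data, pseudo-label the unlabeled set, then train an equal-or-larger student with noise (RandAugment, dropout, stochastic depth), and repeat with the student as the new teacher. Below is a runnable toy version using scikit-learn stand-ins; the data, models, and Gaussian input noise are illustrative only (the actual work uses EfficientNets on ImageNet and also filters and balances the pseudo-labels):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1200, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:200], y[:200], X[200:]    # small labeled set

teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
for _ in range(3):
    pseudo = teacher.predict(X_unlab)                # teacher pseudo-labels
    X_all = np.concatenate([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    noise = rng.normal(scale=0.1, size=X_all.shape)  # student is trained with noise
    student = LogisticRegression(max_iter=1000).fit(X_all + noise, y_all)
    teacher = student                                # student becomes next teacher
```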
-
We released all checkpoints and training recipes for EfficientNets, including the best model, EfficientNet-B7, which achieves 84.5% top-1 accuracy on ImageNet. Link: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet pic.twitter.com/vT7UojqOc0
-
XLNet: a new pretraining method for NLP that significantly improves upon BERT on 20 tasks (e.g., SQuAD, GLUE, RACE). arXiv: https://arxiv.org/abs/1906.08237 GitHub (code + pretrained models): https://github.com/zihangdai/xlnet With Zhilin Yang, @ZihangDai, Yiming Yang, Jaime Carbonell, @rsalakhu. pic.twitter.com/JboOekUVPQ
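For reference, XLNet's pretraining objective is permutation language modeling: maximize the sequence likelihood under factorization orders sampled from the set of permutations of the positions, so every token learns from bidirectional context while training stays autoregressive:

```latex
\max_{\theta} \;
\mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T}
\left[ \sum_{t=1}^{T} \log p_{\theta}\!\left( x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}} \right) \right]
```

Here Z_T is the set of all permutations of [1, ..., T], and z_<t are the first t-1 elements of the sampled permutation z.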
-
EfficientNets: a family of more efficient & accurate image classification models. Found by architecture search and scaled up by one weird trick. Link: https://arxiv.org/abs/1905.11946 Github: https://bit.ly/30UojnC Blog: https://bit.ly/2JKY3qt pic.twitter.com/RIwvhCBA8x
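The "one weird trick" is compound scaling: depth, width, and input resolution are scaled jointly as d = alpha^phi, w = beta^phi, r = gamma^phi under the constraint alpha * beta^2 * gamma^2 ≈ 2, so each increment of phi roughly doubles FLOPS. A quick worked example with the paper's grid-searched base coefficients:

```python
# Compound scaling: depth d = alpha**phi, width w = beta**phi, resolution
# r = gamma**phi, with alpha * beta**2 * gamma**2 ~= 2 so FLOPS grow ~2**phi.
alpha, beta, gamma = 1.2, 1.1, 1.15  # grid-searched on the B0 baseline

for phi in range(1, 8):
    d, w, r = alpha ** phi, beta ** phi, gamma ** phi
    flops = (alpha * beta ** 2 * gamma ** 2) ** phi
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, "
          f"resolution x{r:.2f}, FLOPS x{flops:.1f}")
```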
-
Key idea in these two papers is to ensure prediction(x) = prediction(x + noise), where x is an unlabeled example. People have tried all kinds of noise, e.g., Gaussian noise, adversarial noise, etc. But it looks like data augmentation noise is the real winner. pic.twitter.com/ClUOqbvEsj
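In code, the consistency idea is just a divergence penalty between the model's predictions on an unlabeled example and on a noised copy of it. A minimal PyTorch-style sketch with data augmentation as the noise source (augment is a placeholder for e.g. RandAugment; the function is illustrative):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, augment):
    """Penalize prediction(x) != prediction(x + noise) on unlabeled data."""
    with torch.no_grad():  # the clean prediction is a fixed target
        p_clean = F.softmax(model(x_unlabeled), dim=-1)
    logp_noised = F.log_softmax(model(augment(x_unlabeled)), dim=-1)
    # KL(p_clean || p_noised) pulls the noised prediction toward the clean one.
    return F.kl_div(logp_noised, p_clean, reduction="batchmean")
```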
-
Nice blog post titled "The Quiet Semi-Supervised Revolution" by Vincent Vanhoucke. It discusses two related works by the Google Brain team: Unsupervised Data Augmentation and MixMatch. https://towardsdatascience.com/the-quiet-semi-supervised-revolution-edec1e9ad8c pic.twitter.com/bbDxaF6vep
-
Introducing MobileNetV3: Based on MNASNet, found by architecture search, we applied additional methods to go even further (quantization-friendly SqueezeExcite & Swish + NetAdapt + compact layers). Result: 2x faster and more accurate than MobileNetV2. Link: https://arxiv.org/abs/1905.02244 pic.twitter.com/jEFBeA67sR
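The quantization-friendly Swish mentioned here is hard-swish, which swaps swish's sigmoid for the piecewise-linear ReLU6. A one-function sketch:

```python
import torch
import torch.nn.functional as F

def hard_swish(x: torch.Tensor) -> torch.Tensor:
    # h-swish(x) = x * ReLU6(x + 3) / 6: a drop-in for swish (x * sigmoid(x))
    # that avoids the sigmoid and behaves well under fixed-point quantization.
    return x * F.relu6(x + 3.0) / 6.0
```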
-
I don't understand your question well, but the method is used in a semi-supervised learning setting where you have two losses: a supervised loss and an unsupervised consistency loss. The figure below may help. pic.twitter.com/oq7DZldYut
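Concretely, one training step combines the two losses as in the sketch below, reusing consistency_loss from the earlier sketch; the weighting lam and the batch handling are illustrative:

```python
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, augment, lam=1.0):
    sup = F.cross_entropy(model(x_lab), y_lab)          # supervised loss
    unsup = consistency_loss(model, x_unlab, augment)   # unsupervised consistency loss
    return sup + lam * unsup                            # lam balances the two terms
```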
-
Exciting new work on replacing convolutions with self-attention for vision. Our paper shows that full attention is good but loses a few percent in accuracy, and a middle ground that combines convolutions and self-attention is better. Link: https://arxiv.org/abs/1904.09925 pic.twitter.com/eyVYooN8Va
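The middle-ground variant concatenates convolutional feature maps with self-attention feature maps computed over all spatial positions. A minimal sketch; plain nn.MultiheadAttention stands in for the paper's attention with 2D relative position embeddings, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

class AttentionAugmentedConv(nn.Module):
    """Concatenate conv features with self-attention features (sketch)."""
    def __init__(self, cin, conv_channels, attn_channels, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(cin, conv_channels, 3, padding=1)
        self.proj = nn.Conv2d(cin, attn_channels, 1)
        self.attn = nn.MultiheadAttention(attn_channels, heads, batch_first=True)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, _, h, w = x.shape
        conv_out = self.conv(x)
        t = self.proj(x).flatten(2).transpose(1, 2)       # (B, H*W, attn_channels)
        attn_out, _ = self.attn(t, t, t)                  # attention over positions
        attn_out = attn_out.transpose(1, 2).reshape(b, -1, h, w)
        return torch.cat([conv_out, attn_out], dim=1)     # channel-wise concat

# Toy usage: 3 -> (32 conv + 16 attention) = 48 output channels.
layer = AttentionAugmentedConv(3, conv_channels=32, attn_channels=16)
y = layer(torch.randn(2, 3, 16, 16))  # -> (2, 48, 16, 16)
```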
-
We used architecture search to find a better architecture for object detection. Results: better and faster architectures than Mask-RCNN, FPN, and SSD. The architecture also looks unexpected and pretty funky. Link: https://arxiv.org/abs/1904.07392 pic.twitter.com/00KKNlnybv