Tweets
- Tweets, current page.
- Tweets and replies
You have blocked @QizheXie
Are you sure you want to view these Tweets? Viewing Tweets won't unblock @QizheXie
Qizhe Xie Retweeted
AdvProp: one weird trick that uses adversarial examples to reduce overfitting. The key idea is to use two BatchNorms, one for normal examples and another for adversarial examples. Significant gains on ImageNet and other test sets. https://twitter.com/tanmingxing/status/1199046124348116993
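The two-BatchNorm idea can be sketched in a few lines. Below is a toy NumPy illustration of my own (class and parameter names are illustrative, not from the AdvProp code): each input distribution gets its own running statistics, so adversarial batches never skew the clean-batch statistics.

```python
import numpy as np

np.random.seed(0)

class DualBatchNorm:
    """Toy sketch of AdvProp's auxiliary BatchNorm: keep one set of
    running statistics for clean inputs and a separate set for
    adversarial inputs, so neither distribution skews the other."""

    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        self.eps = eps
        self.momentum = momentum
        # One (running_mean, running_var) pair per input distribution.
        self.stats = {
            "clean": [np.zeros(num_features), np.ones(num_features)],
            "adv": [np.zeros(num_features), np.ones(num_features)],
        }

    def __call__(self, x, kind="clean"):
        mean, var = x.mean(axis=0), x.var(axis=0)
        run_mean, run_var = self.stats[kind]
        # Update only this branch's running statistics.
        run_mean += self.momentum * (mean - run_mean)
        run_var += self.momentum * (var - run_var)
        return (x - mean) / np.sqrt(var + self.eps)

bn = DualBatchNorm(4)
clean_out = bn(np.random.randn(8, 4), kind="clean")
adv_out = bn(np.random.randn(8, 4) + 5.0, kind="adv")  # shifted "adversarial" batch
```

At inference time the paper's models use only the clean-branch statistics; the sketch above omits that detail.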
Qizhe Xie Retweeted
Can adversarial examples improve image recognition? Check out our recent work: AdvProp, achieving 85.5% ImageNet top-1 accuracy (no extra data) with adversarial examples! arXiv: https://arxiv.org/abs/1911.09665 Checkpoints: https://git.io/JeopW pic.twitter.com/bAu054LGt2
Qizhe Xie Retweeted
EfficientDet: a new family of efficient object detectors. It is based on EfficientNet and is many times more efficient than state-of-the-art models. Link: https://arxiv.org/abs/1911.09070 Code: coming soon pic.twitter.com/2KYabAnpLL
Qizhe Xie Retweeted
*New paper* RandAugment: a new data augmentation method. Better & simpler than AutoAugment. The main idea is to select transformations at random and tune their magnitude. It achieves 85.0% top-1 on ImageNet. Paper: https://arxiv.org/abs/1909.13719 Code: https://git.io/Jeopl pic.twitter.com/equmk59K2i
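The recipe in that tweet (pick transformations uniformly at random, apply them all at one global magnitude) fits in a short runnable sketch. The op set below is made up for illustration; the paper uses a larger pool of image ops, each with a calibrated magnitude range.

```python
import random

import numpy as np

random.seed(0)

# Illustrative op set only; the real method uses a larger pool of image
# ops with calibrated magnitude ranges.
def rotate(img, m):
    return np.rot90(img)            # toy op: ignores the magnitude

def invert(img, m):
    return 255 - img

def posterize(img, m):
    bits = max(1, m // 3)           # map magnitude to bits to drop
    return (img >> bits) << bits

OPS = [rotate, invert, posterize]

def rand_augment(img, n=2, m=9):
    """Core RandAugment loop: apply n ops chosen uniformly at random,
    all at the single global magnitude m. Only (n, m) need tuning,
    versus AutoAugment's expensively learned policy."""
    for op in random.choices(OPS, k=n):
        img = op(img, m)
    return img

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
augmented = rand_augment(img, n=2, m=9)
```

The key design point is the tiny search space: a grid search over just (n, m) replaces AutoAugment's policy search.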
Qizhe Xie Retweeted
Nice new results from @GoogleAI researchers on improving the state-of-the-art on ImageNet! "We...train a...model on...ImageNet...& use it as a teacher to generate pseudo labels on 300M unlabeled images. We then train a larger...model on the...labeled & pseudo labeled images." https://twitter.com/quocleix/status/1194334947156193280
Qizhe Xie Retweeted
So grateful for all the really useful work you've been releasing, Quoc - I don't know how you do it! :) I love this focus on making the most of data augmentation, pseudo-labeling, and other practically important techniques.
Qizhe Xie Retweeted
Another view of Noisy Student: semi-supervised learning is great even when labeled data is plentiful! 130M unlabeled images yields a 1% gain over the previous ImageNet SOTA that uses 3.5B weakly labeled examples! Joint work with @QizheXie, Ed Hovy, @quocleix https://paperswithcode.com/sota/image-classification-on-imagenet https://twitter.com/quocleix/status/1194334947156193280
Qizhe Xie Retweeted
"Self-training with Noisy Student improves ImageNet classification" achieves 87.4% top-1 accuracy. (1) Train a model on ImageNet. (2) Generate pseudo labels on an unlabeled extra dataset. (3) Train a student model on all the data, make it the new teacher, and repeat from (2). https://arxiv.org/abs/1911.04252
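The three-step loop can be sketched end to end. Here is a toy, runnable version where the "model" is just a 1-D threshold classifier; every helper name is an illustrative placeholder, not the paper's training code.

```python
import random

random.seed(0)

# Toy, runnable sketch of the Noisy Student loop. The "model" is a 1-D
# threshold classifier; all names here are illustrative placeholders.
def train(data):
    """Fit a threshold halfway between the two class means."""
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    mean0 = sum(xs0) / max(1, len(xs0))
    mean1 = sum(xs1) / max(1, len(xs1))
    return (mean0 + mean1) / 2

def predict(threshold, x):
    return int(x > threshold)

def noisy_student(labeled, unlabeled, rounds=3, noise=0.1):
    teacher = train(labeled)                                    # step 1
    for _ in range(rounds):
        pseudo = [(x, predict(teacher, x)) for x in unlabeled]  # step 2
        # Step 3: train a student on labeled + pseudo-labeled data with
        # input noise (standing in for dropout/augmentation/stochastic
        # depth), then promote it to teacher and repeat from step 2.
        noisy = [(x + random.gauss(0, noise), y) for x, y in labeled + pseudo]
        teacher = train(noisy)
    return teacher

labeled = [(0.0, 0), (1.0, 1)]
unlabeled = [0.1, 0.2, 0.9, 1.1]
model = noisy_student(labeled, unlabeled)
```

The "noisy" part matters: the paper injects noise into the student but not the teacher, which is what pushes the student beyond simply copying its teacher.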
Qizhe Xie Retweeted
Amazing unsupervised learning results: https://twitter.com/quocleix/status/1194334947156193280
Qizhe Xie Retweeted
Full comparison against the state of the art on ImageNet. Noisy Student is our method. Noisy Student + EfficientNet is 11% better than your favorite ResNet-50. pic.twitter.com/BhwgJvSOYK
Qizhe Xie Retweeted
Example predictions on the robustness benchmarks ImageNet-A, C, and P. Black text shows correct predictions made by our model; red text shows incorrect predictions by our baseline model. pic.twitter.com/eem6tlfyPX
Qizhe Xie Retweeted
Want to improve the accuracy and robustness of your model? Use unlabeled data! Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C, and P). Link: https://arxiv.org/abs/1911.04252 pic.twitter.com/0umSnX7wui
Qizhe Xie Retweeted
Glad to see semi-supervised curves surpass/approach their supervised counterparts with much less labeled data! Check out the @GoogleAI blog post on our work on "Unsupervised Data Augmentation (UDA) for consistency training", with code release! @QizheXie @ZihangDai @quocleix https://twitter.com/GoogleAI/status/1149113132083634176 pic.twitter.com/iNTBWhaTip
Qizhe Xie Retweeted
This work was conducted by @lmthang, @qizhexie, @ZihangDai, Eduard Hovy, and @quocleix. Get the code here: https://github.com/google-research/uda
Qizhe Xie Retweeted
Recent work on "Unsupervised Data Augmentation" (UDA) reveals that better data augmentation leads to better semi-supervised learning, with state-of-the-art results on various language and vision benchmarks, using one or two orders of magnitude less data.https://goo.gle/2G3Tq7u
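"Consistency training" can be made concrete with a tiny runnable sketch: penalize the model when its prediction on an unlabeled example diverges from its prediction on an augmented copy. The logistic model and Gaussian "augmentation" below are stand-ins of my own; the paper plugs in strong augmentations such as back-translation for text.

```python
import math
import random

random.seed(0)

def predict_probs(w, x):
    """Two-class logistic model; a stand-in for a real network."""
    p1 = 1.0 / (1.0 + math.exp(-w * x))
    return [1.0 - p1, p1]

def augment(x):
    """Gaussian jitter as a stand-in for a strong augmentation."""
    return x + random.gauss(0, 0.5)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def consistency_loss(w, unlabeled):
    # UDA's unsupervised term: divergence between the predictions on x
    # and on augment(x). The supervised cross-entropy on the small
    # labeled set is added to this separately.
    return sum(kl(predict_probs(w, x), predict_probs(w, augment(x)))
               for x in unlabeled) / len(unlabeled)

loss = consistency_loss(1.0, [0.5, -0.3, 1.2])
```

Because this term needs no labels, it is the part of the objective that lets the unlabeled data (and the quality of the augmentation) do the heavy lifting.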
Very nice presentation on Unsupervised Data Augmentation (UDA): https://aisc.ai.science/events/2019-07-08/. Looking forward to seeing work that applies UDA to achieve SOTA performance on more tasks! Thank you @gordon_gibson and @AISC_TO for taking the time to chat with me and present it!
Qizhe Xie Retweeted
We open-sourced the AutoAugment strategy for object detection. This strategy significantly improves detection models in our benchmarks. Please try it on your problems. Code: https://github.com/tensorflow/tpu/tree/master/models/official/detection Paper: https://arxiv.org/abs/1906.11172 More details & results: https://twitter.com/ekindogus/status/1144093170411511808
Qizhe Xie Retweeted
XLNet: a new pretraining method for NLP that significantly improves upon BERT on 20 tasks (e.g., SQuAD, GLUE, RACE). arXiv: https://arxiv.org/abs/1906.08237 GitHub (code + pretrained models): https://github.com/zihangdai/xlnet With Zhilin Yang, @ZihangDai, Yiming Yang, Jaime Carbonell, @rsalakhu pic.twitter.com/JboOekUVPQ
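The tweet is terse, so a toy sketch of XLNet's core permutation-LM idea may help: sample one factorization order and condition each token on the tokens that precede it in that order, not in left-to-right position. Purely illustrative, not the released code.

```python
import random

random.seed(0)

def prediction_contexts(tokens):
    """For one sampled factorization order, return which tokens each
    position is conditioned on (those earlier in the sampled order)."""
    order = list(range(len(tokens)))
    random.shuffle(order)                 # sample a factorization order
    contexts = {}
    for i, pos in enumerate(order):
        # The token at `pos` sees everything earlier in the order, which
        # may include tokens to its right in the original sentence.
        contexts[pos] = [tokens[p] for p in order[:i]]
    return contexts

ctx = prediction_contexts(["New", "York", "is", "a", "city"])
```

Averaged over many sampled orders, every position learns to use bidirectional context without the [MASK] tokens that BERT relies on.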
Qizhe Xie Retweeted
Nice blog post titled "The Quiet Semi-Supervised Revolution" by Vincent Vanhoucke. It discusses two related works by the Google Brain team: Unsupervised Data Augmentation and MixMatch. https://towardsdatascience.com/the-quiet-semi-supervised-revolution-edec1e9ad8c pic.twitter.com/bbDxaF6vep
Qizhe Xie Retweeted
Data augmentation is often associated with supervised learning. We find *unsupervised* data augmentation works better. It combines well with transfer learning (e.g., BERT) and improves everything when datasets have a small number of labeled examples. Link: http://arxiv.org/abs/1904.12848 https://twitter.com/lmthang/status/1123028716433494016