Tweets
Barret Zoph Retweeted
RandAugment: Practical automated data augmentation with a reduced search space. Decreasing the search space in a clever way avoids the need for highly expensive computational search, i.e. NAS→EfficientNets, AutoAugment→RandAugment. Might be useful for domain randomization. https://twitter.com/barret_zoph/status/1196621040064974849
Barret Zoph Retweeted
Tomorrow I'll be talking about JAX MD: a hardware accelerated, end-to-end differentiable, molecular dynamics library at the ML4PS at 9:20am (along with tons of amazing speakers). Paper: https://arxiv.org/abs/1912.04232 Code: https://github.com/google/jax-md Colab: https://colab.sandbox.google.com/github/google/jax-md/blob/master/notebooks/jax_md_cookbook.ipynb https://twitter.com/DynamicWebPaige/status/1200607460131688448
Slides and video of my talk at the Neural Architects workshop at ICCV this year! https://neuralarchitects.org/
This is a great description of RandAugment! Thanks so much. https://twitter.com/CShorten30/status/1197300422802857987
*New paper* RandAugment: a new data augmentation method. Better & simpler than AutoAugment. Main idea is to select transformations at random and tune their magnitude. It achieves 85.0% top-1 on ImageNet. Paper: https://arxiv.org/abs/1909.13719 Code: https://git.io/Jeopl pic.twitter.com/equmk59K2i
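The idea in the tweet above — sample transformations uniformly at random and apply them all at one shared, tunable magnitude — can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the toy op pool (flips, rotation, inversion, brightness) stands in for the paper's full set of image transformations, and `rand_augment`, `OPS`, and the per-op signatures are names invented for this sketch.

```python
import numpy as np

def identity(img, m):
    return img

def flip_lr(img, m):
    # Horizontal flip; magnitude is ignored for geometric flips.
    return img[:, ::-1]

def rotate90(img, m):
    return np.rot90(img)

def invert(img, m):
    return 255 - img

def brightness(img, m):
    # Magnitude m in [0, 10], scaled to a pixel-value offset.
    return np.clip(img.astype(int) + 10 * m, 0, 255).astype(img.dtype)

OPS = [identity, flip_lr, rotate90, invert, brightness]

def rand_augment(img, n=2, m=5, rng=None):
    """Apply n ops chosen uniformly at random, all at global magnitude m."""
    rng = rng or np.random.default_rng()
    for i in rng.integers(0, len(OPS), size=n):
        img = OPS[i](img, m)
    return img
```

The whole search space collapses to two scalars (n, m), which is what makes RandAugment so much cheaper to tune than a learned policy.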
Barret Zoph Retweeted
Automatically learned data augmentation policies can train more accurate models using fewer labeled examples, letting you stretch the amount you get from each labeled example. Work by @GoogleAI's @barret_zoph, @ekindogus, Golnaz Ghiasi, Tsung-Yi Lin, Jonathon Shlens, & @quocleix https://twitter.com/ekindogus/status/1144093170411511808
Barret Zoph Retweeted
Data augmentation is even more crucial for detection. We present AutoAugment for object detection, achieving SOTA on COCO validation set (50.7 mAP). Policy transfers to different models & datasets. Paper: https://arxiv.org/abs/1906.11172 Code: https://github.com/tensorflow/tpu/tree/master/models/official/detection Details in thread. pic.twitter.com/XZuyMjrxx8
Barret Zoph Retweeted
Nice article in @cvpr2019 Daily about AutoAugment. Thanks Ralph Anzarouth for the interview! https://www.rsipvision.com/CVPR2019-Tuesday/10/ Come see our talk [1-1A] and poster [1-1P-12] if you want to learn more.
Barret Zoph Retweeted
Exciting new work on replacing convolutions with self-attention for vision. Our paper shows that full attention is good, but loses a few percent in accuracy. And a middle ground that combines convolutions and self-attention is better. Link: https://arxiv.org/abs/1904.09925 pic.twitter.com/eyVYooN8Va
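The "middle ground" the tweet describes — computing convolutional and self-attention features over the same input and combining them — can be sketched in numpy. This is an illustrative sketch, not the paper's code: the per-position linear map (a 1x1 convolution) stands in for a real spatial convolution to keep it short, and the function and weight names are invented here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_augmented_features(x, w_conv, w_q, w_k, w_v):
    """x: (n_positions, d_in), a flattened feature map.
    Returns conv features and self-attention features concatenated
    along the channel axis."""
    conv_out = x @ w_conv                           # (n, d_conv)
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # (n, d_k) each
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (n, n), rows sum to 1
    attn_out = attn @ v                             # (n, d_k)
    return np.concatenate([conv_out, attn_out], axis=-1)
```

Concatenation lets the convolutional path keep its local inductive bias while the attention path contributes global context, which is the combination the paper finds beats either alone.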
Barret Zoph Retweeted
Automatic Speech Recognition (ASR) struggles in the absence of an extensive volume of training data. We present SpecAugment, a new approach to augmenting audio data that treats it as a visual problem rather than an audio one. Learn more at → http://goo.gl/KAPS5d pic.twitter.com/8t79Uc7uMU
Barret Zoph Retweeted
Wanted to apply AutoAugment to speech, but a handcrafted augmentation policy already improves SOTA. Idea: randomly drop out certain time & frequency blocks, and warp the input spectrogram. Results: state-of-the-art on LibriSpeech 960h & Switchboard 300h. Link: https://arxiv.org/abs/1904.08779 https://twitter.com/GoogleAI/status/1120387407595986944
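The two masking ops from the tweet — dropping a random block of frequency channels and a random block of time frames — are simple to sketch in numpy (time warping is omitted here). A minimal sketch, not the paper's implementation; `F` and `T` follow the paper's notation for maximum mask widths, while the function name is invented for this example.

```python
import numpy as np

def spec_augment(spec, F=10, T=20, rng=None):
    """Apply one frequency mask and one time mask to a (n_freq, n_time)
    spectrogram, zeroing the masked block in each case."""
    rng = rng or np.random.default_rng()
    out = spec.copy()
    n_freq, n_time = out.shape

    # Frequency mask: f consecutive channels starting at f0, f <= F.
    f = int(rng.integers(0, F + 1))
    f0 = int(rng.integers(0, n_freq - f + 1))
    out[f0:f0 + f, :] = 0.0

    # Time mask: t consecutive frames starting at t0, t <= T.
    t = int(rng.integers(0, T + 1))
    t0 = int(rng.integers(0, n_time - t + 1))
    out[:, t0:t0 + t] = 0.0
    return out
```

Because the masks act directly on the spectrogram, the augmentation treats audio as a visual problem, exactly as the retweeted Google AI announcement puts it.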
Barret Zoph Retweeted
Latest version of the AutoAugment paper is up: http://arxiv.org/abs/1805.09501 Stop by our oral presentation at CVPR to learn more! Joint work with @barret_zoph, @decentralion, Vijay Vasudevan and @quocleix. pic.twitter.com/bXINM1Vk3i
Barret Zoph Retweeted
AutoML for large scale image classification and object detection https://goo.gl/F6QXBd pic.twitter.com/W3JdOozMgw