Tweets
Pinned Tweet
FixMatch: focusing on simplicity for semi-supervised learning and improving state of the art (CIFAR 94.9% with 250 labels, 88.6% with 40). https://arxiv.org/abs/2001.07685 Collaboration with Kihyuk Sohn, @chunliang_tw, @ZizhaoZhang, Nicholas Carlini, @ekindogus, @Han_Zhang_, @colinraffel. pic.twitter.com/BmeYvpEHzX
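The tweet above only states results; the core mechanism of FixMatch (per the linked paper) is confidence-thresholded pseudo-labeling combined with consistency between weak and strong augmentations. A minimal NumPy sketch of the unlabeled loss term, with illustrative function and argument names (not the authors' actual code):

```python
import numpy as np

def fixmatch_unlabeled_loss(weak_probs, strong_logits, threshold=0.95):
    """Sketch of FixMatch's unlabeled-data objective.

    weak_probs:    (N, C) model predictions on weakly augmented images
    strong_logits: (N, C) model logits on strongly augmented versions
    Pseudo-labels come from the weak view; only confident ones
    (max prob >= threshold) contribute to the cross-entropy loss.
    """
    pseudo_labels = weak_probs.argmax(axis=1)       # hard pseudo-labels
    mask = weak_probs.max(axis=1) >= threshold      # confidence mask
    # numerically stable log-softmax of the strong-view logits
    shifted = strong_logits - strong_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(pseudo_labels)), pseudo_labels]
    return (ce * mask).mean()  # averaged over the whole batch
```

The thresholding is what lets a single labeled example per class suffice: low-confidence predictions simply contribute nothing until the model becomes sure of them.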
A well done video explanation of FixMatch, thanks @CShorten30! https://twitter.com/CShorten30/status/1220478729593466885
David Berthelot Retweeted
When I invented adversarial training as a defense against adversarial examples, I focused on making it as cheap and scalable as possible. Eric and collaborators have now upgraded the original cheap version to compete with newer, more expensive versions. https://twitter.com/RICEric22/status/1217459930954981376
Code is up: https://github.com/google-research/fixmatch And being my usual distracted self, I forgot one co-author from the list: @alexey2004 (Sorry Alex!) The code for ImageNet will come later.
Surprisingly, even with one example per class, results better than those previously possible with 25 (before MixMatch) are achievable. On CIFAR-10, with a single example per class, FixMatch obtains between 48.58% and 85.32% test accuracy, with a median of 64.28%.
Just saw Neon AI from Samsung: the avatars look amazing. Trying to find information: someone mentioned they were actors representing the potential, while journalist pieces sound like it's entirely rendered. It's like looking for a needle of information in a haystack of marketing...
David Berthelot Retweeted
"Happiness is like a rose by itself in the garden; when it blooms, the birds will sing and the bees will make honey." -- Anonymous AI
#GPT2
David Berthelot Retweeted
Yes! I got my first big conference paper accepted at ICLR, with a spotlight! We improve on the previous DeepMind paper "NALU" by 3x-20x. – This took 7-8 months, working without any funding as an independent researcher. Paper: https://openreview.net/forum?id=H1gNOeHKPS Code: https://github.com/AndreasMadsen/stable-nalu pic.twitter.com/7tBivzbyir
David Berthelot Retweeted
StyleGAN2 is out. https://arxiv.org/abs/1912.04958
David Berthelot Retweeted
In case you missed our
#neurips poster on MixMatch (https://arxiv.org/abs/1905.02249) today because you aren't in Vancouver or didn't survive the poster session stampede, here's the PDF: https://github.com/google-research/mixmatch/blob/master/PDF/MixMatch%20NeurIPS%202019%20poster.pdf and here's a transcript of what I said to everyone who came by:
1/11
David Berthelot Retweeted
For those who unfortunately could not make it to NeurIPS, the poster for our paper on adversarial mixup resynthesis (aka autoencoders with mixup) is available online at https://postersession.ai/poster/on-adversarial-mixup-resynthesis/
David Berthelot Retweeted
HYPE is a #NeurIPS2019 Oral!! We evaluate {GANs, datasets, sampling methods}. We show statistically insignificant correlation with FID et al. Fun fact: different GANs excel at different classes in ImageNet, e.g. [image] vs. [image]. Get HYPE https://hype.stanford.edu
David Berthelot Retweeted
I will be at
@NeurIPSConf next week to present MixMatch [1] and give a T5 demo [2]! Please get in touch if you want to discuss research, eat vegan food, and/or go bouldering. [1] https://arxiv.org/abs/1905.02249 [2] https://arxiv.org/abs/1910.10683
David Berthelot Retweeted
I made a small package which allows reading tfrecord files in PyTorch with no tf dependency: https://github.com/vahidk/tfrecord
David Berthelot Retweeted
I'm starting a professorship in the CS department at UNC in fall 2020 (!!) and am hiring students! If you're interested in doing a PhD
@unccs please get in touch. More info here: https://cs.unc.edu/admissions/graduate/graduate-programs/
David Berthelot Retweeted
@Han_Zhang_ has new research (w/ @ZizhaoZhang, @honglaklee and me) that sets a new state of the art in conditional image synthesis by using consistency regularization on GANs: https://arxiv.org/abs/1910.12027. Thread follows: pic.twitter.com/FLjLk4R4TL
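In its simplest form, the consistency regularization described in the linked paper penalizes the discriminator for scoring an image and a semantics-preserving augmentation of it differently. A minimal NumPy sketch, with illustrative names and a hypothetical weight `lam` (not the paper's code):

```python
import numpy as np

def consistency_penalty(d_real, d_augmented, lam=10.0):
    """Consistency term for a GAN discriminator (sketch).

    d_real:      discriminator outputs on a batch of real images
    d_augmented: outputs on augmented versions of the same images
    Returns lam * mean squared difference, added to the usual
    discriminator loss so D becomes invariant to the augmentation.
    """
    return lam * np.mean((d_real - d_augmented) ** 2)
```

Because the penalty is zero whenever the two views agree, it shapes the discriminator without changing its optimum on clean data.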
David Berthelot Retweeted
New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: https://arxiv.org/abs/1910.10683 Code/models/data/etc: https://git.io/Je0cZ Summary
(1/14) pic.twitter.com/VP1nkkHefB
David Berthelot Retweeted
Our work exploring the use of learned optimizers to make more robust image models is on arXiv! We find that in some cases learned optimizers are capable of learning more robust image classifiers! https://arxiv.org/abs/1906.03367 pic.twitter.com/W1S0DOoWVj
Self-supervised learning opens up a huge opportunity for better utilizing unlabelled data while still learning in a supervised manner. My latest post covers many interesting ideas for self-supervised learning tasks on images, videos & control problems: