Tweets
Pinned Tweet
New blog post about anomaly detection! I am trying to find Golden Retrievers in a celebrity faces dataset using a silly self-supervised task (predicting right from left). Why? Let's talk about it in a thread, like the pros do... https://www.elementai.com/news/2019/modern-recipes-for-anomaly-detection 1/7
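The pinned thread's idea, roughly: train a model on a self-supervised pretext task (telling left image halves from right ones) and use the task loss as an anomaly score, since the task stays easy on data that resembles the training set and gets hard on outliers. Below is a minimal sketch of that recipe with toy stand-in data and a plain-numpy logistic regression; the blog post uses a celebrity-faces dataset and a deep network, so everything here (data, model, function names) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data (not the celebrity-faces set from the thread):
# inlier "faces" are bright on the left half, dark on the right.
inliers = np.concatenate(
    [rng.normal(0.9, 0.05, (200, 8, 4)), rng.normal(0.1, 0.05, (200, 8, 4))], axis=2
)
outlier = rng.normal(0.5, 0.05, (1, 8, 8))  # uniform texture: no left/right cue

def half_dataset(images):
    """Split images into halves; left halves are class 1, right halves class 0."""
    w = images.shape[2] // 2
    X = np.concatenate([images[:, :, :w], images[:, :, w:]])
    y = np.concatenate([np.ones(len(images)), np.zeros(len(images))])
    return X.reshape(len(X), -1), y

def train_logreg(X, y, lr=0.5, steps=200):
    """Plain-numpy logistic regression on the left-vs-right pretext task."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def anomaly_score(images, w, b):
    """Per-image pretext-task log-loss: high when halves are ambiguous."""
    X, y = half_dataset(images)
    p = np.clip(1 / (1 + np.exp(-(X @ w + b))), 1e-9, 1 - 1e-9)
    losses = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return losses.reshape(2, -1).mean(axis=0)  # average left/right loss per image

w, b = train_logreg(*half_dataset(inliers))
print(anomaly_score(inliers, w, b).mean() < anomaly_score(outlier, w, b).mean())  # True
```

The outlier has no left/right structure, so the classifier cannot do better than chance on it and its pretext loss stays high, which is exactly the signal the thread proposes using for anomaly detection.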
JC Testud Retweeted
Our new paper, Deep Learning for Symbolic Mathematics, is now on arXiv: https://arxiv.org/abs/1912.01412 We added *a lot* of new results compared to the original submission. With @f_charton (1/7) pic.twitter.com/GrhQRT5WRW
JC Testud Retweeted
You MUST play AI Dungeon 2, a text adventure game run by a neural net. @nickwalton00 built it using @OpenAI's huge GPT-2-1.5B model, and it will respond reasonably to just about anything you try. Such as eating the moon. https://aiweirdness.com/post/189511103367/play-ai-dungeon-2-become-a-dragon-eat-the-moon pic.twitter.com/c4DGJieaAE
JC Testud Retweeted
Self-supervised learning opens up a huge opportunity for better utilizing unlabelled data while learning in a supervised manner. My latest post covers many interesting ideas of self-supervised learning tasks on images, videos & control problems: https://lilianweng.github.io/lil-log/2019/11/10/self-supervised-learning.html
JC Testud Retweeted
We release: CamemBERT: a Tasty French Language Model (soon on arXiv) https://camembert-model.fr CamemBERT is trained on 138GB of French text. It establishes a new state of the art in POS tagging, Dependency Parsing and NER, and achieves strong results in NLI. Bon appétit ! [1/3]
JC Testud Retweeted
Glad to share our #NeurIPS2019 paper on few-shot vid2vid, where we address the scalability issue of our #vid2vid. Now, with 1 model and as few as 1 example image provided at test time, we can render the motion of a target subject. https://nvlabs.github.io/few-shot-vid2vid/ Code coming soon. pic.twitter.com/byHICVAk6X
JC Testud Retweeted
New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: https://arxiv.org/abs/1910.10683 Code/models/data/etc: https://git.io/Je0cZ Summary (1/14) pic.twitter.com/VP1nkkHefB
JC Testud Retweeted
That @OpenAI Rubik's cube thing was pretty cool and all, but I just successfully installed a brand new faucet in my kitchen sink for the first time, WITHOUT any Bluetooth sensors or LEDs in my fingertips. Zero-shot learning, y'all! pic.twitter.com/ZeccVrp7dq
Cool idea! Reminder that you, too, are a couple of commands away from having fun with video generation. Code & tutorial here: https://medium.com/@jctestud/video-generation-with-pix2pix-aed5b1b69f57 https://twitter.com/ChiMaBa1/status/1182196037148168193
JC Testud Retweeted
Now the real test: having the AI generate text from a *fake* URL. It worked. pic.twitter.com/sdQqbBLUmf
JC Testud Retweeted
A fascinating article by @lena_voita if you're interested in understanding what makes MLM models like BERT different from (auto-regressive) LM models like GPT/GPT-2 and MT models. And conveyed in such a beautiful blog post, a masterpiece of knowledge sharing! https://twitter.com/lena_voita/status/1173517191586693120
JC Testud Retweeted
He was already on Twitter briefly in the 90's, but there was no one else for him to talk to back then... https://twitter.com/hardmaru/status/1172340623585599489
JC Testud Retweeted
Salesforce releases a 1.6B-parameter language model, 0.1B bigger than the current leader. Genuinely cool innovation here in controllable/conditional generation, but I can't help but imagine this meeting taking place: Blog: https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/ Code: https://github.com/salesforce/ctrl pic.twitter.com/idmMcYOs6Y
JC Testud Retweeted
(The “correct” area of research to watch closely is stupid large self-supervised learning or anything that finetunes on/distills from that. Other “shortcut” solutions prevalent today, while useful, are evolutionary dead ends)
Highly-semantic GAN-based autoencoder reconstructions. What a time to be alive! Awesome work by @DeepMindAI, based on @ajmooch's #BigGAN
- Hey #BigBiGAN, what do you think about this left image?
- Encoder: It is a pizza with greens and cheese
- Can you draw that kind of pizza from memory?
- Decoder: Of course, here we go (draws right image)
pic.twitter.com/HgNezppxYx
So, at the risk of oversimplifying, when it comes to representation learning, discriminative models lazily learn just enough, self-supervision way more, and GANs even more. https://twitter.com/DeepMind/status/1148154747251253248
JC Testud Retweeted
New blog post: Neural Style Transfer with Adversarially Robust Classifiers. I show that adversarial robustness makes neural style transfer work by default on a non-VGG architecture. Blog: https://reiinakano.com/2019/06/21/robust-neural-style-transfer.html Colab: https://colab.research.google.com/github/reiinakano/adversarially-robust-neural-style-transfer/blob/master/Robust_Neural_Style_Transfer.ipynb pic.twitter.com/2PQ2RC5sA4
JC Testud Retweeted
Our 2019 Call for Papers is now open! We are seeking abstract submissions on the direct application of statistics, machine learning, deep learning, and data science to the infosec field. Please submit here: https://easychair.org/cfp/camlis2019
#camlis2019