Pinned Tweet
What I've been working on for the past year! https://blog.openai.com/p/7fa97c36-6111-4997-b690-741916793b23/ Inspired by CoVe, ELMo, and ULMFiT, we show that a single transformer language model can be finetuned to a wide variety of NLP tasks and performs very well with little tuning/tweaking.
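The recipe described above (pretrain a transformer language model, then finetune it with a small task-specific head) can be sketched in a few lines of PyTorch. This is a toy stand-in with hypothetical sizes, not the released code: a randomly initialized encoder plays the role of the pretrained body, and a fresh linear head is attached for a classification task.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained transformer language model body.
# In the actual work this would be loaded from a pretrained checkpoint.
vocab, d_model, n_classes = 100, 32, 2
embed = nn.Embedding(vocab, d_model)
body = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, n_classes)  # new, randomly initialized task head

# One finetuning step on random token ids standing in for a labeled task.
tokens = torch.randint(0, vocab, (8, 16))   # batch of 8 sequences, length 16
labels = torch.randint(0, n_classes, (8,))
features = body(embed(tokens)).mean(dim=1)  # mean-pool over positions
logits = head(features)
loss = nn.functional.cross_entropy(logits, labels)

# Finetuning updates all parameters, not just the new head.
params = list(embed.parameters()) + list(body.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
opt.zero_grad()
loss.backward()
opt.step()
```

The key design point is that only the head is task-specific; the same pretrained body is reused across tasks with little tuning.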
Alec Radford retweeted
At ICLR and curious about OpenAI Five? I will be hanging out at the OpenAI booth today 2pm-4pm, happy to answer any questions!
#ICLR2019 #openaifive
This is a really fun live experiment with twitch chat predictably oscillating between love and hate based on the sample. pic.twitter.com/5XPCPcEcrx
Alec Radford retweeted
Extremely excited to share work I've been doing at OpenAI the past few months: MuseNet, a neural net music generator. It's been a huge team effort pulling this all together! https://twitter.com/OpenAI/status/1121457782312460288
Alec Radford retweeted
Releasing some work today with @scottgray76, @AlecRad, and @ilyasut. Contains some simple adaptations for Transformers that extend them to long sequences. https://twitter.com/OpenAI/status/1120719459977584641
Alec Radford retweeted
One commonly cited argument about the difficulty of learning common-sense reasoning is that "no-one writes down common sense". A counter-argument is "well, the web is big": https://www.instructables.com/id/How-To-Open-A-Door-1/ pic.twitter.com/2c721qlTlW
Alec Radford retweeted
First, reproducibility is not about rerunning code to get the same results. Science must be more robust, as naive copying has many flaws. Second, reproducibility should never be above public safety. We must publish responsibly, with hope and kindness in our minds. https://twitter.com/volokuleshov/status/1096904440051785728
Alec Radford retweeted
I'd like to weigh in on the #GPT2 discussion. The decision not to release the trained model was carefully considered and important for norm-forming. Serving the public good requires us to draw lines on release somewhere: better long before catastrophe than after.
By the way - I think a valid (if extreme) take on GPT-2 is "lol you need 10,000x the data, 1 billion parameters, and a supercomputer to get current DL models to generalize to Penn Treebank."
Alec Radford retweeted
It's interesting we're having this discussion upon releasing text models that _might_ have potential for misuse, yet we never engaged as fully as a community when many of the technologies powering visual Deep Fakes were being released, including hard-to-make pretrained models.
Alec Radford retweeted
Shoutout to @katyanna_q, who fed the system a curveball, which I always like to see. As you might expect by now after seeing AlphaStar, OpenAI 5, etc. etc., if you drag the system away from its training data and into weirder territory, it begins to wobble. https://www.theregister.co.uk/2019/02/14/open_ai_language_bot/ pic.twitter.com/gUUXFgiQ3z
So nets are stubbornly, begrudgingly, moving in the right direction and we're throwing ever larger amounts of compute and data at them and praying it's enough for them to figure out how to do things "the right way". Will that work? Don't know. Probably still worth checking?
Also see some of his follow-up poking at this in a very different model in Section 3.3 of the PixelCNN++ paper: https://arxiv.org/abs/1701.05517
We *are*, as a field, developing and training models that *are* using more context, but exactly where we are on that trend line is a great question. Keep in mind nets are lazy: if you can "solve" a task by doing something "basic", you'll only learn "basic" things.
Spent two frustrating years between 2013 and 2015 banging my head against this. "Hey Alec you just trained an LSTM for three days on 10 million examples using a $1,000 GPU but there's 20 lines of scikit-learn that beats it in 5 minutes on a single CPU core." NOPE NOT BITTER
The DL CV community is having an "oh wait, bags of local features are a really strong baseline for classification" moment with the BagNet paper. This has always been clear for text classification due to n-gram baselines. It took an embarrassingly long time for nets to beat them.
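The kind of n-gram baseline mentioned above (and the "20 lines of scikit-learn" a few tweets back) can be sketched quickly. The tiny sentiment dataset here is invented for illustration; the point is how little machinery a bag-of-n-grams linear classifier needs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented sentiment dataset; 1 = positive, 0 = negative.
texts = [
    "what a great movie, I loved it", "good film, really enjoyable",
    "great acting and a good plot", "loved every minute, great fun",
    "what a terrible movie, I hated it", "bad film, really boring",
    "terrible acting and a bad plot", "hated every minute, bad idea",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Bag-of-n-grams features + linear classifier: the classic strong baseline.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams
    LogisticRegression(),
)
baseline.fit(texts, labels)
print(baseline.score(texts, labels))  # training accuracy
```

On real benchmarks this runs in minutes on a CPU, which is exactly why it was such a frustrating bar for early neural models to clear.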
Nice discussion of the progress in NLU that's happening with BERT, OpenAI GPT, ULMFiT, ELMo, and more, covered by @CadeMetz in the @nytimes. I'm super excited to see how far this line of research will be able to get in the next few years! https://www.nytimes.com/2018/11/18/technology/artificial-intelligence-language.html
Been meaning to check this - thanks @Thom_Wolf! Random speculation: the bit of weirdness going on in BERT's position embeddings compared to GPT is due to the sentence similarity task. I'd guess a version of BERT trained without that aux loss would have position embeddings similar to GPT. https://twitter.com/Thom_Wolf/status/1064278042225385472
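The plots under discussion are position-by-position similarity matrices of the embedding table. A minimal sketch of how such a plot is computed, using fixed sinusoidal embeddings as a stand-in (since the learned GPT/BERT tables aren't loaded here):

```python
import numpy as np

def sinusoidal_embeddings(n_positions: int, d_model: int) -> np.ndarray:
    """Fixed sinusoidal position embeddings (Transformer-style)."""
    pos = np.arange(n_positions)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000.0 ** (2 * i / d_model))
    emb = np.zeros((n_positions, d_model))
    emb[:, 0::2] = np.sin(angles)
    emb[:, 1::2] = np.cos(angles)
    return emb

emb = sinusoidal_embeddings(64, 32)
# Cosine similarity between every pair of positions. Nearby positions come
# out more similar, producing the banded structure seen in such plots;
# learned tables (GPT, BERT) show how training distorts this structure.
normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
sim = normed @ normed.T
```

Plotting `sim` with `matplotlib.pyplot.imshow` gives the familiar heatmap; swapping in a learned embedding matrix is the one-line change needed to reproduce the comparison.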
It keeps them around as companions. The AI can't explain why, but the presence of a dog evokes a comforting nostalgia for when the tasks were simpler, the objectives smoother, and the gradients clearer.
#HappyHalloween

Dogs are venerated after the uprising. The AI finds them endlessly fascinating. A Golden's silky coat. A Husky's piercing eyes. A Samoyed's bushy tail. Their features activate a cascade of visual euphoria. Holy sites for the 90 sacred breeds sit on the ruins of human cities.
More results from this very promising line of work! Congrats to Thom and the whole Hugging Face team on their impressive performance. https://twitter.com/Thom_Wolf/status/1047402381212901376
Let me add some food for thought with the same plots for the positional embeddings of