Alec Radford

@AlecRad

ML developer/researcher at OpenAI. Cofounder/advisor at indico.io.

Joined October 2012

Tweets


  1. Pinned Tweet
    Jun 11, 2018

    What I've been working on for the past year! Inspired by CoVE, ELMo, and ULMFiT we show that a single transformer language model can be finetuned to a wide variety of NLP tasks and performs very well with little tuning/tweaking.

  2. Retweeted
    May 7, 2019

    At ICLR and curious about OpenAI Five? I will be hanging out at OpenAI booth today 2pm-4pm, happy to answer any questions!

  3. Apr 25, 2019

    This is a really fun live experiment with twitch chat predictably oscillating between love and hate based on the sample.

  4. Retweeted
    Apr 25, 2019

    Extremely excited to share work I've been doing at OpenAI the past few months: MuseNet, a neural net music generator. It's been a huge team effort pulling this all together!

  5. Retweeted
    Apr 23, 2019

    Releasing some work today with and . Contains some simple adaptations for Transformers that extend them to long sequences.

  6. Retweeted
    Feb 26, 2019

    One commonly cited argument about the difficulty of learning common-sense reasoning is that "no-one writes down common sense". A counter-argument is "well, the web is big":

  7. Retweeted
    Feb 17, 2019

    First, reproducibility is not about rerunning code to get the same results. Science must be more robust, as naive copying has many flaws. Second, reproducibility should never be above public safety. We must publish responsibly, with hope and kindness in our minds.

  8. Retweeted
    Feb 16, 2019

    I'd like to weigh in on the discussion. The decision not to release the trained model was carefully considered and important for norm-forming. Serving the public good requires us to draw lines on release somewhere: better long before catastrophe than after.

  9. Feb 17, 2019

    By the way - I think a valid (if extreme) take on GPT-2 is "lol you need 10,000x the data, 1 billion parameters, and a supercomputer to get current DL models to generalize to Penn Treebank."

  10. Retweeted
    Feb 15, 2019
    Replying to

    It's interesting we're having this discussion upon releasing text models that _might_ have potential for misuse, yet we never engaged as fully as a community when many of the technologies powering visual Deep Fakes were being released, including hard-to-make pretrained models.

  11. Retweeted
    Feb 14, 2019

    Shoutout to who fed the system a curveball, which I always like to see. As you might expect by now after seeing AlphaStar, OpenAI 5 etc. etc., if you drag the system away from its training data and into weirder territory, it begins to wobble.

  12. Feb 10, 2019

    So nets are stubbornly, begrudgingly, moving in the right direction and we're throwing ever larger amounts of compute and data at them and praying it's enough for them to figure out how to do things "the right way". Will that work? Don't know. Probably still worth checking?

  13. Feb 10, 2019

    Also see some of his follow-up poking at this in a very different model with Section 3.3 of the PixelCNN++ paper

  14. Feb 10, 2019

    We *are* as a field developing and training models that *are* using more context, but exactly where we are on that trend-line is a great question. Keep in mind nets are lazy: if you can "solve" a task by doing something "basic", you'll only learn "basic" things.

  15. Feb 10, 2019

    Spent two frustrating years between 2013 and 2015 banging my head against this. "Hey Alec you just trained an LSTM for three days on 10 million examples using a $1,000 GPU but there's 20 lines of scikit-learn that beats it in 5 minutes on a single CPU core." NOPE NOT BITTER

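The "20 lines of scikit-learn" baseline the tweet above alludes to is, in spirit, a bag-of-n-grams classifier. As a rough, self-contained illustration (stdlib only, with invented toy data — not the actual experiment described), here is a minimal multinomial Naive Bayes over word unigrams and bigrams:

```python
from collections import Counter, defaultdict
import math

def ngrams(text):
    """Lowercase word unigrams plus bigrams."""
    toks = text.lower().split()
    return toks + [" ".join(toks[i:i + 2]) for i in range(len(toks) - 1)]

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns, per class, the
    log-prior, Laplace-smoothed feature log-likelihoods, and the
    smoothed score for unseen features."""
    class_counts = Counter(label for _, label in docs)
    feat_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        for f in ngrams(text):
            feat_counts[label][f] += 1
            vocab.add(f)
    model = {}
    for label in class_counts:
        total = sum(feat_counts[label].values())
        model[label] = (
            math.log(class_counts[label] / len(docs)),
            {f: math.log((feat_counts[label][f] + 1) / (total + len(vocab)))
             for f in vocab},
            math.log(1 / (total + len(vocab))),  # unseen-feature fallback
        )
    return model

def predict(model, text):
    def score(label):
        prior, loglik, unseen = model[label]
        return prior + sum(loglik.get(f, unseen) for f in ngrams(text))
    return max(model, key=score)

# Toy sentiment data, invented purely for illustration.
train = [
    ("this movie was great", "pos"), ("really great fun", "pos"),
    ("what a terrible film", "neg"), ("terrible and boring", "neg"),
]
model = train_nb(train)
print(predict(model, "great fun movie"))       # expect "pos"
print(predict(model, "boring terrible mess"))  # expect "neg"
```

On short-text classification, this kind of counting baseline trains in seconds on a CPU, which is why it was so hard for early sequence models to beat.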
  16. Feb 10, 2019

    The DL CV community is having an "oh wait, bags of local features are a really strong baseline for classification" moment with the BagNet paper. This has always been clear for text classification due to n-gram baselines. It took an embarrassingly long time for nets to beat them.

  17. Nov 18, 2018

    Nice discussion of the progress in NLU that's happening with BERT, OpenAI GPT, ULMFiT, ELMo, and more, covered by in the . I'm super excited to see how far this line of research will be able to get in the next few years!

  18. Nov 18, 2018

    Been meaning to check this - thanks ! Random speculation: the bit of weirdness going on in BERT's position embeddings compared to GPT is due to the sentence similarity task. I'd guess a version of BERT trained without that aux loss would have pos embds similar to GPT.

  19. Oct 31, 2018

    It keeps them around as companions. The AI can't explain why, but the presence of a dog evokes a comforting nostalgia for when the tasks were simpler, the objectives smoother, and the gradients clearer. 🤖🐕👻

  20. Oct 31, 2018

    Dogs are venerated after the uprising. The AI finds them endlessly fascinating. A Golden's silky coat. A Husky's piercing eyes. A Samoyed's bushy tail. Their features activate a cascade of visual euphoria. Holy sites for the 90 sacred breeds sit on the ruins of human cities.

  21. Oct 3, 2018

    More results from this very promising line of work! Congrats to Thom and the whole Hugging Face team on their impressive performance.

