Ludovic Denoyer

@LudovicDenoyer

Research Scientist at Facebook - FAIR Paris - previously Professor at Sorbonne Université

Joined: April 2014

Tweets


  1. Retweeted
    Dec 11, 2019

    Come discuss our work on Unsupervised Object Segmentation at this morning’s poster session (number 84)! Work done with my supervisors Thierry Artieres and .
  2. Retweeted
    Oct 27, 2019
    In reply to

    In the tradition of Drew McDermott's "How Intelligent is Deep Blue?", which I keep making my Intro students read.
  3. Retweeted
    Oct 23, 2019

    Last Thursday, defended his thesis, « Unsupervised Machine Translation », supervised by and Marc Ranzato. Congratulations!!!
  4. Oct 23, 2019
  5. Retweeted
    Sep 21, 2019

    The “Research in Brief” blog post on our ACL paper on unsupervised QA is up! Work with and - check out the full paper here:
  6. Retweeted

    Facebook AI is releasing code for a self-supervised technique that uses AI-generated questions to train systems, avoiding the need for labeled question answering training data.
    (A hedged sketch of this question-generation idea appears after the timeline.)
  7. Sep 13, 2019

    Can we learn person-specific language models that evolve through time? Check out our latest article, "Learning Dynamic Author Representations with Temporal Language Models", with and S. Lamprier (ICDM) at
  8. Retweeted
    Sep 10, 2019

    Continuing the start-of-year saga advertising our recent publication achievements. (2/2) "Learning Dynamic Author Representations with Temporal Language Models" by , , . Congrats to all authors!!!!
  9. Retweeted
    Sep 3, 2019

    In the second, we show that adding a Product-Key Memory Layer to a transformer is as effective as doubling the number of layers in terms of performance, and has no impact on running time. With Marc'Aurelio Ranzato. (2/3)
  10. Sep 3, 2019

    Congrats!!! Now, the next step is finishing your thesis manuscript 😉😁
  11. Retweeted
    Aug 2, 2019
  12. Retweeted
    Jul 31, 2019

    You can actually train a model for SQuAD without training data!! 🤯 Patrick Lewis () on Unsupervised Question Answering by Cloze Translation, Hall 4.
  13. Retweeted
    Jul 30, 2019

    Come see Patrick’s work on QA without QA supervision. Definitely worth finishing your lunch in time!
  14. Retweeted
    Jul 12, 2019

    Our new paper: Large Memory Layers with Product Keys. We created a key-value memory layer that can increase model capacity at negligible computational cost. A 12-layer transformer with a memory outperforms a 24-layer transformer and is 2x faster! 1/2
    (A hedged sketch of the product-key lookup appears after the timeline.)
  15. Retweeted
    Jun 19, 2019

    First talk: Image generative modeling for design inspiration and image editing, by Camille Couprie, Research Scientist.
  16. Retweeted

    We are delighted to host the 17th meetup of the Women in Machine Learning & Data Science association. This association promotes the participation of women and gender minorities who practice and study machine learning and data science.
  17. Jun 13, 2019

    Can we train Question Answering models without a QA training set? Congrats on this paper!
  18. Retweeted
    Jun 6, 2019

    Is CycleGAN implementing an Optimal Transport (OT) plan between domains? New work by Emmanuel de Bézenac, Ibrahim Ayed, and Patrick Gallinari from MLIA, analyzing Unsupervised Domain Translation under the framework of OT. arxiv:
  19. Jun 6, 2019

    For those interested in this line of research, please consider other papers by , who is a great PhD student:
    * Multi-view Generative Adversarial Networks
    * Multi-View Data Generation Without View Supervision
  20. Jun 5, 2019

    Can we learn to detect objects without any supervision? Yes, if we assume that an object is a part of an image that can be redrawn while keeping the image realistic. With and Thierry Artieres -
    (A hedged sketch of this redraw-and-check idea appears after the timeline.)
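
Sketches referenced above (illustrative, not the authors' released code)

Items 6 and 12 describe unsupervised question answering: questions are generated automatically from unlabeled text via cloze translation, so no labeled QA data is needed. Below is a minimal, hedged Python sketch of that idea; the capitalised-span "answer" heuristic and the one-word cloze-to-question rewrite are stand-ins for the real pipeline, which uses NER / noun-phrase chunking and a learned cloze-to-question translation model.

import random
import re

def make_cloze_qa(passage, seed=0):
    # Hedged sketch: pick a capitalised span as a pseudo-answer (rough NER stand-in),
    # blank it out to form a cloze statement, then "translate" the cloze into a
    # naive wh-question. The real system learns this cloze-to-question step.
    random.seed(seed)
    examples = []
    for sent in re.split(r"(?<=[.!?])\s+", passage):
        candidates = re.findall(r"[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", sent)
        candidates = [c for c in candidates if sent.find(c) > 0]  # skip sentence-initial words
        if not candidates:
            continue
        answer = random.choice(candidates)
        cloze = sent.replace(answer, "[MASK]", 1)
        question = cloze.replace("[MASK]", "what").rstrip(".!?") + "?"
        examples.append({"context": passage, "question": question, "answer": answer})
    return examples

text = ("The memory layer was proposed by researchers at Facebook AI. "
        "It was evaluated on several large language modeling benchmarks.")
for ex in make_cloze_qa(text):
    print(ex["question"], "->", ex["answer"])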
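Items 9 and 14 refer to product-key memory layers: a large key-value memory whose keys factor into two half-keys, so that selecting among n_keys**2 memory slots only requires two cheap searches over n_keys half-keys each. The PyTorch sketch below shows that factored lookup; the layer sizes, initialisation, and query network are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProductKeyMemory(nn.Module):
    # Hedged sketch of a product-key memory layer; hyper-parameters are illustrative.
    def __init__(self, dim, n_keys=128, topk=8, value_dim=None):
        super().__init__()
        value_dim = value_dim or dim
        self.n_keys, self.topk = n_keys, topk
        half = dim // 2
        # two independent sets of half-keys; a full key is (keys1[i], keys2[j])
        self.keys1 = nn.Parameter(torch.randn(n_keys, half) * 0.02)
        self.keys2 = nn.Parameter(torch.randn(n_keys, half) * 0.02)
        self.query = nn.Linear(dim, dim)                         # query network
        self.values = nn.Embedding(n_keys * n_keys, value_dim)   # the large memory

    def forward(self, x):                                        # x: (batch, dim)
        q1, q2 = self.query(x).chunk(2, dim=-1)
        # top-k candidates in each half-space: two cheap searches over n_keys keys
        s1, i1 = (q1 @ self.keys1.t()).topk(self.topk, dim=-1)   # (batch, k)
        s2, i2 = (q2 @ self.keys2.t()).topk(self.topk, dim=-1)
        # combine into k*k candidate slots drawn from the n_keys**2 memory
        scores = (s1.unsqueeze(-1) + s2.unsqueeze(1)).flatten(1)             # (batch, k*k)
        slots = (i1.unsqueeze(-1) * self.n_keys + i2.unsqueeze(1)).flatten(1)
        best, pos = scores.topk(self.topk, dim=-1)
        weights = F.softmax(best, dim=-1)                        # (batch, k)
        selected = slots.gather(1, pos)                          # (batch, k)
        return (self.values(selected) * weights.unsqueeze(-1)).sum(dim=1)

x = torch.randn(4, 64)
print(ProductKeyMemory(dim=64)(x).shape)  # torch.Size([4, 64])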
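Item 20 frames unsupervised object segmentation as finding "a part of an image that can be redrawn while keeping the image realistic". The sketch below illustrates that loop with tiny placeholder networks invented for illustration (the mask network, generator, and discriminator here are not the architectures from the actual work): predict a soft mask, redraw only the masked region from noise, and score the composite's realism adversarially.

import torch
import torch.nn as nn

# Tiny illustrative networks for 3x32x32 inputs; the real architectures differ.
mask_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())      # soft object mask
generator = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 3, 3, padding=1))                   # redraws the masked part
discriminator = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                              nn.Flatten(), nn.Linear(8 * 16 * 16, 1))     # "does this look real?"

def redraw_step(images):
    # One hypothetical step: segment, redraw the segmented region from noise,
    # then ask the discriminator whether the composite still looks realistic.
    mask = mask_net(images)                                   # (B, 1, 32, 32) in [0, 1]
    noise = torch.randn_like(mask)                            # per-pixel noise driving the redraw
    redrawn = generator(torch.cat([images, noise], dim=1))    # proposed new content
    composite = mask * redrawn + (1 - mask) * images          # replace only the masked part
    realism = discriminator(composite)                        # adversarial signal for mask + generator
    return mask, composite, realism

imgs = torch.randn(2, 3, 32, 32)
m, c, r = redraw_step(imgs)
print(m.shape, c.shape, r.shape)  # (2, 1, 32, 32) (2, 3, 32, 32) (2, 1)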
