Jianfeng Gao

@JianfengGao0217

Partner Research Manager in the Deep Learning Group at Microsoft Research AI. IEEE Fellow.

Joined October 2018

Tweets


  1. 12 Dec 2019

    Check out my latest article: MSR's new neurosymbolic models learn to encode and process neural symbols via

  2. Retweeted

    AI has largely moved from symbol-based systems to artificial neural network–based models. TP-Transformer and TP-N2F show how a neurosymbolic approach that merges the two via neural symbols can enhance performance and interpretability:

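The retweet above refers to tensor-product representations (TPRs), in which a symbol structure is encoded by binding each content ("filler") vector to a structural ("role") vector and superposing the results. A minimal NumPy sketch of the core bind/unbind operations, with toy dimensions chosen for illustration (not the TP-Transformer's actual architecture, where these representations are learned end-to-end):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
d_filler, d_role, n = 8, 8, 3
fillers = rng.normal(size=(n, d_filler))  # symbol contents

# Orthonormal role vectors (rows), via QR decomposition.
q, _ = np.linalg.qr(rng.normal(size=(d_role, n)))
roles = q.T  # shape (n, d_role)

# Bind each filler to its role with an outer product, then superpose:
# T = sum_i f_i (outer) r_i
T = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# Unbind: because the roles are orthonormal, T @ r_i recovers filler i exactly.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers[1]))  # True
```

With non-orthogonal or learned roles the unbinding is approximate; the papers build models that learn filler and role representations rather than fixing them as here.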
  3. 7 Dec 2019

    Check out my latest article: DIALOGPT : Large-Scale Generative Pre-training for Conversational Response Generation via

  4. 7 Dec 2019
  5. 1 Oct 2019

    Check out my latest article: UniLM (v1) is released! via

  6. Retweeted
    25 Sep 2019

    Introducing unified Vision-Language Pre-training (VLP)! VLP is pre-trained on millions of image-text pairs and fine-tuned for captioning and VQA. We achieve SotA on COCO (C: 129), VQA 2.0 (Overall 71), all w/ a single model. Code:

  7. Retweeted
    15 Aug 2019

    "New State of the Art AI Optimizer: Rectified Adam (RAdam). Improve your AI accuracy instantly versus Adam, & why it works" It's been a long time since we've seen a new optimizer reliably beat the old favorites; this looks like a very encouraging approach!

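RAdam's key observation is that the variance of Adam's adaptive learning rate is unbounded during the first updates, so it rectifies the adaptive term once enough gradient samples have accumulated and falls back to plain momentum before that. A sketch of a single update step in NumPy following the paper's formulas (an illustrative re-implementation, not the authors' released code):

```python
import numpy as np

def radam_update(param, grad, m, v, t, lr=1e-3,
                 beta1=0.9, beta2=0.999, eps=1e-8):
    """One Rectified Adam step (sketch of the published algorithm)."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment
    m_hat = m / (1 - beta1 ** t)             # bias-corrected momentum

    rho_inf = 2 / (1 - beta2) - 1
    rho_t = rho_inf - 2 * t * beta2 ** t / (1 - beta2 ** t)

    if rho_t > 4:  # variance of the adaptive lr is tractable: rectify it
        v_hat = np.sqrt(v / (1 - beta2 ** t))
        r_t = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                      / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        param = param - lr * r_t * m_hat / (v_hat + eps)
    else:          # too few samples: plain SGD with momentum
        param = param - lr * m_hat
    return param, m, v

# Toy usage: minimize f(x) = x^2 from x = 5.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 1001):
    x, m, v = radam_update(x, 2 * x, m, v, t, lr=0.1)
print(x)  # x has moved close to the minimum at 0
```

With the default beta2 = 0.999, the rectified branch only activates from roughly step 5 onward, which is exactly the warmup-like behavior the tweet alludes to.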
  8. Retweeted
    20 Jun 2019

    Congrats to the AI team for being the first to achieve the human performance estimate on the GLUE benchmark.

  9. 18 Jun 2019

    Check out my latest article: The Multi-domain Task Completion track at DSTC8 via

  10. 18 Jun 2019

    Check out my latest article: ConvLab: Multi-Domain End-to-End Dialog System Platform via

  11. Retweeted

    They say a picture is worth a thousand words. Sure, but the real trick is realizing a bot that draws pictures is using only a dozen. Throw in an ability to visualize an entire story and one day this bot could be working in the movies:

  12. Retweeted

    Natural language is tied to how humans interact with their environment. Can we build intelligent agents that can learn to communicate in different modalities as do humans? Microsoft researchers are using Vision-Language Navigation to find out:

  13. Retweeted
    18 Jun 2019

    Our paper has just received the Best Student Paper award at . Another fabulous piece of MSR intern work! Congratulations Qiuyuan Huang and Lei Zhang. Read the blog post for details!

  14. 17 Jun 2019
  15. 8 Jun 2019

    Check out my latest article: Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading via

  16. 8 Jun 2019

    Check out my latest article: The ICML-2019 tutorial on Conversational AI via

  17. 7 Jun 2019

    Check out my latest article: MT-DNN reaches human performance on General Language Understanding Evaluation (GLUE) via

  18. Retweeted
    7 Jun 2019

    Excited to announce our work on "Conversing by Reading" . To produce conversation responses that are grounded and contentful, we present a new end-to-end approach that jointly models response generation and on-demand machine reading. 1/2

  19. Retweeted
    7 Jun 2019

    We introduce a new large conversation dataset grounded in external web pages (2.8M turns, 7.4M sentences of grounding). Joint work w/ my MSR mentors , Michel Galley, collaborators , , Xiang Gao, Bill Dolan, and my advisor 2/2

  20. Retweeted

    Watch us compress multiple ensembled models into a single Multi-Task Deep Neural Network via knowledge distillation for learning robust and universal text representations across multiple natural language understanding tasks. We're talking SotA in GLUE:

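The distillation the retweet describes trains a single student network on soft targets averaged from an ensemble of teacher models. A minimal sketch of that soft-target loss with hypothetical toy logits (not MT-DNN's actual outputs; the full recipe also combines this with the usual hard-label loss):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits from three teacher models and an untrained student.
teacher_logits = [np.array([2.0, 1.0, 0.1]),
                  np.array([1.8, 1.2, 0.2]),
                  np.array([2.2, 0.9, 0.0])]
student_logits = np.array([1.0, 1.0, 1.0])

T = 2.0  # distillation temperature
soft_targets = np.mean([softmax(t, T) for t in teacher_logits], axis=0)
student_probs = softmax(student_logits, T)

# Distillation loss: cross-entropy between averaged teacher soft targets
# and the student's distribution.
loss = -np.sum(soft_targets * np.log(student_probs))
print(loss)  # uniform student -> loss = ln(3) ~ 1.0986
```

Minimizing this loss pushes the single student toward the ensemble's averaged output distribution, which is how the ensemble is compressed into one model.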
