Daniel Adiwardana

@xpearhead

Researching conversational AI at Google Brain Team.

Joined May 2009

Tweets


  1. Pinned Tweet
    Jan 28

    Enabling people to converse with chatbots about anything has been a passion of a lifetime for me, and I'm sure of others as well. So I'm very thankful to be able to finally share our results with you all. Hopefully, this will help inform efforts in the area. (1/4)

  2. Retweeted
    Jan 29

    This video explains 's amazing new Meena chatbot! An Evolved Transformer with 2.6B parameters, trained on 341 GB / 40B words of conversation data, achieves remarkable chatbot performance! "Horses go to Hayvard!"

  3. Retweeted
    Jan 29

    Had the chance to sit next to Daniel in the early days of the project and tried out the interactive Meena. It has always been *this* surprising and funny :) BIG congrats to the team on this publication. The possibilities to build from here are endless.

  4. Retweeted
    Jan 28

    Meena: new SOTA chatbot from us. One big step towards human-like conversational AI. Looking forward to many applications related to that, e.g. 24/7 AI-based foreign language tutoring.

  5. Retweeted
    Jan 28

    New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: Blog:

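    The perplexity behind takeaway 1 is just the exponentiated average per-token cross-entropy of the model on held-out conversation text; the paper's point is that this number tracks human judgments of conversation quality. A minimal sketch of the computation, with random tensors standing in for a real model's outputs:

    import math
    import torch
    import torch.nn.functional as F

    # Perplexity = exp(mean negative log-likelihood per token).
    # The logits and targets below are random stand-ins, not Meena outputs.
    vocab_size, seq_len = 8000, 12
    logits = torch.randn(seq_len, vocab_size)           # model scores per position
    targets = torch.randint(0, vocab_size, (seq_len,))  # held-out next tokens

    nll = F.cross_entropy(logits, targets)  # mean cross-entropy in nats
    perplexity = math.exp(nll.item())
    print(f"perplexity = {perplexity:.1f}")  # lower means better next-token prediction
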
  6. Retweeted
    Jan 28

    Open-domain conversation is an extremely difficult task for ML systems. Meena is a research effort at in this area. It's challenging, but we are making progress towards more fluent and sensible conversations. Nice work, Daniel, & everyone involved!

  7. Jan 28

    Bonus: Meena often seems to put together ideas in ways for which we can't find matches in the data. For example, saying that "Horses go to Hayvard" in a conversation we show in the blog post.

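    The novelty claim above amounts to a verbatim search of the training text for the generated phrase and its n-grams. A minimal sketch of such a check; the toy corpus string and the trigram granularity are illustrative assumptions, not details from the thread:

    # Look for a generated phrase, and its word n-grams, verbatim in training text.
    def ngrams(words, n):
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def novelty_report(phrase, corpus_text, n=3):
        corpus = corpus_text.lower()
        grams = ngrams(phrase.lower().split(), n)
        found = {g for g in grams if g in corpus}
        return {
            "exact_match": phrase.lower() in corpus,
            "matched_ngrams": found,
            "novel_ngrams": grams - found,
        }

    print(novelty_report("Horses go to Hayvard", "horses go to the barn every day"))
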
  8. Jan 28

    "It was trained on movie subtitles?!" I told myself and others in awe. Maybe the potential for generalization was really there. I was truly blessed to be able to later work with and many others on giving continuity to this idea, and turning it into . (4/4)

  9. Jan 28

    One day, I came across the paper A Neural Conversational Model () by and . The paper showed sample conversations with an end-to-end learned neural network. (3/4)

  10. Jan 28

    When I was about 9 years old my father taught me how to program, and, to my delight, we built a chatbot. Initially, I couldn't stop working on it, but no matter how many rules I wrote and how much knowledge I tried to add to its database, it still wasn't what I expected. (2/4)

  11. Retweeted
    Jul 30, 2019

    (1/4) Learning ML engineering is a long slog even for legendary hackers like . IMO, the two hardest parts of ML eng are: 1) Feedback loops are measured in minutes or days in ML (compared to seconds in normal eng) 2) Errors are often silent in ML

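    Point 2 is easy to make concrete: many ML bugs corrupt the numbers without raising an exception. A classic NumPy example (a toy illustration, not from the thread), where an extra axis makes a subtraction broadcast instead of failing:

    import numpy as np

    y_true = np.array([1.0, 2.0, 3.0])        # shape (3,)
    y_pred = np.array([[1.1], [1.9], [3.2]])  # shape (3, 1) -- note the extra axis

    diff = y_pred - y_true          # silently broadcasts to shape (3, 3)
    mse_buggy = np.mean(diff ** 2)  # plausible-looking but wrong loss

    mse_correct = np.mean((y_pred.ravel() - y_true) ** 2)
    print(diff.shape, mse_buggy, mse_correct)  # (3, 3), wrong value, true MSE
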
  12. Retweeted
    Jun 19, 2019

    XLNet: a new pretraining method for NLP that significantly improves upon BERT on 20 tasks (e.g., SQuAD, GLUE, RACE) arxiv: github (code + pretrained models): with Zhilin Yang, , Yiming Yang, Jaime Carbonell,

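    A minimal usage sketch for the released pretrained models, assuming the Hugging Face transformers port of the checkpoints (the repo linked in the tweet is TensorFlow-based):

    import torch
    from transformers import XLNetModel, XLNetTokenizer

    tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
    model = XLNetModel.from_pretrained("xlnet-base-cased")

    inputs = tokenizer("XLNet models bidirectional context via permutation LM.",
                       return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) contextual embeddings
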
  13. Retweeted

    An interesting trend at this year's CVPR is the number of new papers on self-supervised learning. Andrew Zisserman gave a nice tutorial: though there is a lot of geometry-related work as well (e.g. self-supervised depth & friends).

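    The common thread in that work is a pretext task whose labels come for free. A minimal sketch of one such task, rotation prediction (the tiny encoder and image sizes are purely illustrative):

    import torch
    import torch.nn as nn

    def make_rotation_batch(images):
        """images: (b, c, h, w) -> rotated copies plus the rotation id as the label."""
        rotated, labels = [], []
        for k in range(4):  # 0, 90, 180, 270 degrees
            rotated.append(torch.rot90(images, k, dims=(2, 3)))
            labels.append(torch.full((images.size(0),), k, dtype=torch.long))
        return torch.cat(rotated), torch.cat(labels)

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
    head = nn.Linear(128, 4)  # classify which rotation was applied

    x = torch.randn(8, 3, 32, 32)
    xr, y = make_rotation_batch(x)
    loss = nn.functional.cross_entropy(head(encoder(xr)), y)
    loss.backward()  # no human labels were needed anywhere
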
  14. Retweeted

    Honored to talk w . His courses were my intro to the field & I wouldn't be here w/o his clear & inspiring teaching! I think of these, the courses, and the scholars/fellows, as the 3 essential steps I've taken in this wild journey

  15. Retweeted
    May 15, 2019

    Translatotron is our experimental model for direct end-to-end speech-to-speech translation, which demonstrates the potential for improved translation efficiency, fewer errors, and better handling of proper nouns. Learn all about it below!

  16. Retweeted
    May 1, 2019

    Really cool application of a differentiable approximation to nearest neighbours (as in e.g. NCA): aligning videos without any supervision.

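    The trick is to replace the hard argmin over embedding distances with a softmax, so the "nearest neighbour" becomes a convex combination that gradients can flow through. A minimal sketch (the temperature and shapes are illustrative assumptions):

    import torch

    def soft_nearest_neighbour(u, V, temperature=1.0):
        """u: (d,) query embedding; V: (n, d) candidates; returns a soft neighbour."""
        sq_dists = ((V - u) ** 2).sum(dim=1)                     # (n,)
        weights = torch.softmax(-sq_dists / temperature, dim=0)  # soft argmin
        return weights @ V                                       # (d,) weighted average

    u = torch.randn(16, requires_grad=True)
    V = torch.randn(10, 16)
    soft_nearest_neighbour(u, V).sum().backward()  # gradients reach u,
    print(u.grad.shape)                            # unlike a hard argmin
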
  17. Retweeted

    New blog post: "A Recipe for Training Neural Networks" a collection of attempted advice for training neural nets with a focus on how to structure that process over time

  18. Retweeted
    Apr 6, 2019

    The reason most (not all) methods don't add value (over baseline) when scaled is that they're "extra training data in disguise", so their benefit vanishes in the high-data regime.

  19. Retweeted
    Mar 28, 2019

    Very first tweet by after getting the Turing Award: "Thanks to my graduate students and postdocs whose work won a Turing award. Thanks to my visionary mentors Inman Harvey, David Rumelhart and Terry Sejnowski..." What a humble person! Very few of us would do the same.

  20. Retweeted

    The Toronto Brain team celebrated Geoff's Turing Award yesterday. We got two cakes: one said Hinton, the other said Turing; that way we could decide which was better.


