Tweets


  1. Retweeted

    Another talk from our team at My teammates Dale Webster and talking about the challenges in building for real-world use, sharing experience beyond model build/evaluation, into integration in real products, real workflows, for real people

  2. Dec 13, 2019

    Come see our team's work on medical records: doctor's notes, uncertainty, EHR graphical structure, clinical forecasting, federated and differentially private learning and multivariate timeseries at the ML4H workshop at today. Code also at

  3. Dec 10, 2019

    Unexpected NeurIPS moments: main conference poster sessions reaching capacity.

  4. Dec 10, 2019

    Come to the Google booth at NeurIPS today to hear about some of our recent research and open-sourcing on medical records!

  5. Oct 16, 2019

    Come see our work on medical record notes modeling with Jonas Kemp and also on learning adaptive learning rates with Zhen Xu and at Baylearn today!

  6. Sep 11, 2019

    Come work with us! Google is now accepting applications for the 2020 AI Residency Program, Healthcare! Head to for more details about the program. Applications close on Sep 17, 2019! Questions? Go to

  7. Retweeted
    Jun 17, 2019

    I'm very excited about our work on the use of machine learning and technology for healthcare at , & that we've hired David Feinberg () to lead our efforts in this space. Learn more about David and what we're doing in this interview.

  8. Jun 15, 2019

    See our work on learning graphical structure in medical records: Learning Graphical Structure of Electronic Health Records with Transformer for Predictive Healthcare at the Learning and Reasoning with Graph-Structured Representations workshop 3:30-4:30pm.

  9. Jun 14, 2019

    Check out our work on medical records at the ICML workshops today! Analyzing the Role of Model Uncertainty for Electronic Health Records at the Uncertainty workshop and Time series modelling by restricting feature interaction at the Time series workshop.

  10. Jun 11, 2019

    Come to the Google ICML booth to hear about some of our recent medical records research.

  11. May 9, 2019

    Come see Anna Huang give a talk on our music generation with transformer work at the poster session this morning!

  12. May 8, 2019

    Almost as many hands went up at for 'not everything can be learnt' vs. 'everything can be learnt'. Surprising for this audience!

  13. Retweeted
    May 7, 2019

    TPU Pods are the hardware we use at for much of our research and production ML models for things like BERT, large-scale image classification, etc. They are now in beta on . Now you can get your own AI supercomputer by the hour!

  14. Retweeted
    May 7, 2019

    At 4:15 at the Google booth, will talk about some of Google's health-related research efforts, including fairness in for health equity, scalable and accurate with electronic health records and more. We hope you'll join us!

  15. Feb 20, 2019

    In fact, we showed that you don't even need additional data to get a gain from pretraining.

  16. Retweeted
    Feb 18, 2019

    Since our work on "Semi-supervised sequence learning", ELMo, BERT and others have shown changes in the algorithm give big accuracy gains. But now given these nice results with a vanilla language model, it's possible that a big factor for gains can come from scale. Exciting!

  17. Retweeted
    Feb 19, 2019
    Replying to

    Nice. But the idea of using pretrained language models AFAIK was first proposed by this paper (Also mentioned in Jacob's slides for the history around BERT: )

  18. Retweeted
    Dec 4, 2018

    Great advice from Olivier Bousquet about being bold and creative in research. From test of time award talk at

  19. Oct 12, 2018

    If only we had transformer models back then to do bigger LM pretraining!

  20. Aug 22, 2018

    The product of our Brain/Calico collaboration is now out!

