nicolas michel

@_nicolasmichel

gpg key id: 0xCCF002C5A7C4F1A6 X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*

Berlin
Joined February 2014

Tweets

  1. Retweeted a Tweet
    Jan 23

    Happy to see SOTA benchmarks moving from CIFAR-10 to CIFAR-10 with only 40 training labels!

  2. Retweeted a Tweet
    Oct 17, 2019

    Great news! Our paper "Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening" has been published at ! This marks the end of a 3-year-long endeavor, at and , to demonstrate the potential of CNNs on this task. (1/8)

  3. Retweeted a Tweet
    Dec 20, 2019

    Yes! I got my first big conference paper accepted at ICLR, with a spotlight! We improve on the previous DeepMind paper "NALU" by 3x-20x. This took 7-8 months, working without any funding as an independent researcher. Paper: Code: (see the arithmetic-unit sketch after this list)

  4. Retweeted a Tweet

    TensorFlow.js is now available on a WebAssembly backend, . From 2641ms with JS to 239ms with Wasm! Source code of tfjs-backend-wasm, . With the Face Detector model, Wasm is comparable to WebGL in terms of performance.

  5. Retweeted a Tweet
    Dec 17, 2019

    Hello Twitter, would you know of a student at who could provide tutoring in the "Informatique et Sciences du Numérique" specialty (1ère level) for a young girl living in who cannot attend school normally? Materials provided. Thanks for the RTs... DMs open.

  6. Retweeted a Tweet
    Dec 14, 2019

    Slides for my 2nd NeurIPS talk. "Materials Matter: How biologically inspired alternatives to conventional neural networks improve meta-learning and continual learning" Covers Differentiable Plasticity & ANML

  7. Retweeted a Tweet
    Nov 10, 2019
  8. Retweeted a Tweet
    Nov 9, 2019

    XLM-R: Amazing results on XLU and GLUE benchmarks from Facebook AI: a large transformer network trained on 2.5TB of text from 100 languages. (see the XLM-R loading sketch after this list)

  9. Retweeted a Tweet

    I've just released a fairly lengthy paper on defining & measuring intelligence, as well as a new AI evaluation dataset, the "Abstraction and Reasoning Corpus". I've been working on this for the past 2 years, on & off. Paper: ARC: (see the ARC loading sketch after this list)

  10. Retweeted a Tweet
    Oct 17, 2019

    Hey ! Slides for tomorrow's talk can be found here:

  11. Retweeted a Tweet
    Oct 2, 2019

    A curated list of awesome Applied Category Theory resources

  12. Retweeted a Tweet
    Sep 30, 2019

    TensorFlow 2.0 has been released! Congrats everyone, and thanks for making this possible. Release notes:

  13. Retweeted a Tweet
    Sep 12, 2019

    I recommend this paper, with theoretical and algorithmic insights on meta-learning, to researchers interested in hierarchical Bayes, MAML, and Reptile. It addresses the idea of learning reusable fixed and adaptive modules across many tasks.

  14. Retweeted a Tweet
    Aug 12, 2019

    I am thrilled to announce that the Scala edition of "Category Theory for Programmers" by Bartosz Milewski is now available as a paperback! Huge thanks to the many contributors who made this possible! Buy it here: PDF:

  15. Retweeted a Tweet
    Jul 23, 2019
  16. Retweeted a Tweet
    Jul 12, 2019

    "Large Memory Layers with Product Keys" with , , Marc'Aurelio Ranzato and TL;DR We introduce a large key-value memory layer with millions of values for a negligible computational cost. 1/2

  17. Retweeted a Tweet
    Jul 10, 2019

    We released our code for adaptive-span! It can train a Transformer with a context size of 8k tokens. (see the span-masking sketch after this list)

  18. Retweeted a Tweet
    Jul 4, 2019
  19. Retweeted a Tweet
    Jun 19, 2019

    XLNet: a new pretraining method for NLP that significantly improves upon BERT on 20 tasks (e.g., SQuAD, GLUE, RACE). arXiv: GitHub (code + pretrained models): With Zhilin Yang, , Yiming Yang, Jaime Carbonell,

  20. Retweeted a Tweet
    Jun 7, 2019

    Excited to release "Finding Friend and Foe in Multi-Agent Games". Our agent learns to play The Resistance: Avalon and figure out who is on its team. It plays in ad-hoc teams of humans and agents and outperforms humans as both a cooperator and a competitor.

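A note on item 3: the paper referenced there replaces NALU's arithmetic cells with simpler units. As a hedged illustration only (the tweet does not reproduce the formulation, and the exact design is in the linked paper), here is a minimal PyTorch sketch of a multiplicative unit in that spirit, where each weight in [0, 1] gates whether an input enters a product:

```python
import torch
import torch.nn as nn

class MultiplicativeUnit(nn.Module):
    """Sketch of a neural multiplication unit: each output is a product
    over inputs, where a weight in [0, 1] gates whether an input
    participates (w=1) or is ignored (w=0, its factor becomes 1).
    This is an illustrative reconstruction, not the paper's exact unit."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(out_features, in_features) * 0.5 + 0.25)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight.clamp(0.0, 1.0)           # keep gates in [0, 1]
        # factor_ij = w_ij * x_i + (1 - w_ij): interpolates between x_i and 1
        factors = w.unsqueeze(0) * x.unsqueeze(1) + (1.0 - w).unsqueeze(0)
        return factors.prod(dim=-1)               # product over the input dim

unit = MultiplicativeUnit(2, 1)
print(unit(torch.tensor([[3.0, 4.0]])).shape)     # torch.Size([1, 1])
```

Because the gate sits inside each factor, the unit can extrapolate multiplication beyond the training range in a way a dense layer cannot, which is the motivation behind this line of work.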
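On item 8: the XLM-R checkpoints were later distributed through the Hugging Face transformers library. Assuming that packaging (the "xlm-roberta-base" model name comes from the hub, not from the tweet), a minimal masked-word demo:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# locate the mask token and take the highest-scoring vocabulary entry
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top_id = logits[0, mask_pos].argmax().item()
print(tokenizer.decode([top_id]))                 # e.g. "Paris"
```

The same call works across the 100 training languages, since XLM-R shares one SentencePiece vocabulary for all of them.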
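On item 9: in the fchollet/ARC repository the corpus ships as one JSON file per task, each holding "train" demonstration pairs and held-out "test" pairs of integer grids. A minimal loading sketch; the filename below is made up for illustration:

```python
import json

# Hypothetical path: ARC stores one JSON file per task under
# data/training and data/evaluation; the task id here is invented.
with open("data/training/0a1b2c3d.json") as f:
    task = json.load(f)

# Each grid is a list of rows of integers 0-9 (colors); a solver sees
# the "train" input/output pairs and must predict the "test" outputs.
for pair in task["train"]:
    grid_in, grid_out = pair["input"], pair["output"]
    print(len(grid_in), "x", len(grid_in[0]), "->",
          len(grid_out), "x", len(grid_out[0]))
```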
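On item 16: the product-key idea factorizes a memory of n² slots into two sets of n half-dimension sub-keys, so a query scores only 2n keys plus k×k candidate combinations instead of n² full keys. A self-contained sketch of one memory read, with toy sizes rather than the paper's:

```python
import torch
import torch.nn.functional as F

def product_key_lookup(query, subkeys1, subkeys2, values, k=4):
    """Sketch of a product-key memory read (after Lample et al., 2019).
    query:    (d,)     split into two halves of size d/2
    subkeys1: (n, d/2) first sub-key set
    subkeys2: (n, d/2) second sub-key set
    values:   nn.Embedding over n*n slots"""
    d = query.shape[0]
    q1, q2 = query[: d // 2], query[d // 2 :]
    s1, i1 = (subkeys1 @ q1).topk(k)        # best k sub-keys on each half
    s2, i2 = (subkeys2 @ q2).topk(k)
    # all k*k combinations; a combined score is the sum of half-scores
    scores = (s1[:, None] + s2[None, :]).flatten()
    n = subkeys2.shape[0]
    slots = (i1[:, None] * n + i2[None, :]).flatten()  # index into n*n slots
    best, idx = scores.topk(k)              # overall top-k combinations
    w = F.softmax(best, dim=0)              # sparse softmax over chosen slots
    return (w[:, None] * values(slots[idx])).sum(0)

n, half, dv = 128, 16, 32                   # toy sizes, not the paper's
values = torch.nn.Embedding(n * n, dv)
out = product_key_lookup(torch.randn(2 * half),
                         torch.randn(n, half), torch.randn(n, half), values)
print(out.shape)                            # torch.Size([32])
```

The "negligible computational cost" claim in the tweet comes from exactly this factorization: the value table grows quadratically while the scoring work grows linearly.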
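On item 17: adaptive-span attention makes each head's context length a learnable parameter by multiplying attention weights with a soft mask that is 1 inside the current span and ramps linearly to 0 over a fixed window. A sketch of that masking step (the ramp length and sizes here are illustrative):

```python
import torch

def adaptive_span_mask(distances, span, ramp=32):
    """Soft span mask in the style of Sukhbaatar et al., 2019: 1 within
    the learned span, linearly decaying to 0 over `ramp` positions, so
    the span length stays differentiable."""
    return torch.clamp((ramp + span - distances) / ramp, 0.0, 1.0)

# Toy usage: attention scores over 1024 past positions, learnable span.
scores = torch.randn(1024)
distances = torch.arange(1024, dtype=torch.float)
z = torch.tensor(200.0, requires_grad=True)  # learned span parameter
m = adaptive_span_mask(distances, z)
attn = m * torch.softmax(scores, dim=0)
attn = attn / attn.sum()                     # renormalize after masking
```

Renormalizing after the mask is algebraically the same as applying the mask to the unnormalized exponentials inside the softmax; penalizing the sum of spans is what lets most heads stay short while a few reach thousands of tokens.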

