Loreto Parisi

@loretoparisi

Computer Engineer, MSc. Technical Director of Machine Learning. Father of an amazing girl and a cute boy.

Bologna, Italy
Joined August 2007

Tweets

  1. Retweeted
    19 hours ago

    New models from: - (Dutch BERT) - at Facebook AI (MMBT, a multi-modal model) - et al. (FlauBERT, a French-trained XLM-like model) - et al. at (UmBERTo, an Italian CamemBERT-like model)

    and 3 others
  2. Feb 2

    “Compared to an existing state-of-the-art generative model, OpenAI , has 1.7x greater model capacity and was trained on 8.5x more data.” The path to better generative text models seems to be more data and more capacity, but are we sure that's true?

  3. Feb 1
  4. Retweeted
    Feb 1
  5. Jan 31

    Kudos to for his work on text generation with LaserTagger. 🚀

  6. Retweeted
    Jan 31

    I already find their RoBERTa and BERT models useful for English. Now they are releasing models for other languages. 👏 - UmBERTo, a RoBERTa-based language model trained on large Italian corpora. - FlauBERT, a new French model. - A Dutch BERT model.

  7. Jan 30

    Fast, parallel (browser) applications with SIMD

  8. Jan 29

    Let’s support core-js, a modular standard library for and that includes polyfills for . Made by

  9. Jan 29
  10. Retweeted
    Jan 24

    is now using adjacent sentences as context! I mentioned on the TAUS call that I had noticed DeepL seems to be doing it. Just now showed me an example that confirms it. Beautiful to see this is now reality.

  11. Retweeted
    Jan 28

    I'm happy to announce our latest work on self-supervised learning for . PASE+ is based on a multi-task approach useful for recognition. It will be presented at . paper: code: @Mila

  12. Jan 27

    The heroine Grace Hopper (1906-1992) was a trailblazing computer programmer who became the first person to call a programming problem a “bug”, after she found a dead moth 🐜 in a malfunctioning computer

  13. Jan 25
  14. Retweeted

    So, a blog post. Not connected to Marian at all, surprisingly, but about an MT quality rabbit hole I went down over the last couple of days. “Is MT really lexically less diverse than human translation?” If that's useful in any way, I might keep on blogging.

  15. Jan 25

    Accuracy in theory and accuracy in practice are not the same thing. This work could fill the gap!

  16. Retweeted
    Jan 24

    We're releasing mBART, a new seq2seq multilingual pretraining system for machine translation across 25 languages. It gives significant improvements for document-level translation and low-resource languages. Read our paper to learn more:

  17. Jan 24

    Wow! audio models in the Essentia audio library. That's cool, congrats!

  18. Jan 24
  19. Retweeted
    Dec 3, 2019

    We've just updated our paper on Stacked Capsule Autoencoders -- an unsupervised model that achieves SOTA classification performance with , and Fig: t-SNE plot of capsule activities on MNIST.

  20. Retweeted
    Jan 22

    Looking for a reference or example of using the SentencePieceBPETokenizer in the new tokenizers library. Any suggestions ?

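    A minimal sketch of what an answer to that question might look like, assuming a recent version of Hugging Face's tokenizers library (the toy corpus, `vocab_size`, and the sentence being encoded are illustrative, not from the tweet):

    ```python
    # Sketch: train a SentencePieceBPETokenizer on a tiny in-memory corpus,
    # then encode a sentence into subword pieces.
    from tokenizers import SentencePieceBPETokenizer

    corpus = [
        "New models for Dutch, French and Italian were released.",
        "mBART is a multilingual pretraining system for machine translation.",
    ]

    tokenizer = SentencePieceBPETokenizer()
    tokenizer.train_from_iterator(corpus, vocab_size=300, min_frequency=1)

    encoding = tokenizer.encode("multilingual machine translation")
    print(encoding.tokens)  # subword pieces; word starts are marked with "▁"
    ```

    For a corpus that lives on disk, `tokenizer.train(files=[...])` can be used instead of `train_from_iterator`.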
