Tweets


  1. Pinned Tweet
    Feb 15, 2019
    Replying to

    My lab (actually it's only me + a self-built workstation) trained a large-scale multilingual language model (157 languages) that achieves SOTA on nearly all language understanding tasks (except one). Unfortunately, I can't release the full model due to responsible disclosure...

  2. 15 hours ago

    Training on TPU with fairseq - coming soon 😍 Can't wait to train my own RoBERTa models and use them with the Transformers library 🤗 🗒️ See this GitHub issue:

  3. Retweeted
    15 hours ago

    ▓░░░░░░░░░░░░░░ 9%

  4. 19 hours ago

    Thanks for your positive reactions 🤗 I'll try to answer all questions now 😅 The repo for the Turkish BERT model can be found here:

  5. Jan 31

    Turkish: Anyone interested in a Turkish BERT who wants to evaluate it on downstream tasks? I did evaluation only for UD PoS tagging - any help is really appreciated! Would really like to have a proper evaluation before adding it to the Transformers hub 🤗

  6. Retweeted
    Jan 30

    ▓░░░░░░░░░░░░░░ 8%

  7. Jan 23
  8. Retweeted
    Jan 23

    ▓░░░░░░░░░░░░░░ 6%

  9. Retweeted
    Jan 22

    And the winner of the 2019 challenge is 🥁 Marcus Bitzl. Congratulations! 🏆

  10. Jan 20

    The training corpus was OPUS + OSCAR, with a total size of 81 GB. More details:

  11. Jan 20

    Italian update: We are releasing our cased and uncased XXL BERT models for Italian 🎉 The models can be used with the awesome 🤗/Transformers library from the model hub [see the first sketch after the timeline]:

  12. Retweeted
    Jan 19

    ▓░░░░░░░░░░░░░░ 5%

  13. Retweeted
    Jan 15

    ▓░░░░░░░░░░░░░░ 4%

  14. Retweeted
    Jan 15

    (please retweet, thanks)

  15. Retweeted
    Jan 12

    ░░░░░░░░░░░░░░░ 3%

  16. Retweeted
    Jan 11

    So why do programmers prefer dark mode? 🤭 Because light attracts bugs. 🚪🏃🏻‍♀️

  17. Retweeted
    Jan 10

    Now that neural nets have fast implementations, a bottleneck in pipelines is tokenization: strings ➡️ model inputs. Welcome 🤗 Tokenizers: ultra-fast & versatile tokenization led by : - encode 1GB in 20 sec - BPE/byte-level BPE/WordPiece/SentencePiece... - python/js/rust... [see the second sketch after the timeline]

  18. Jan 10

    Waiting for it 😍 I would like to integrate it into 🤗/Transformers! ☺️

  19. Retweeted
    Jan 5

    America, please don’t re-elect this ignorant, dangerous man.

  20. Dec 31, 2019

    Defective pixel 🤔

  21. Dec 30, 2019
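Two illustrative sketches follow, expanding on the tweets above. First, a minimal sketch of loading one of the Italian XXL BERT models (tweet 11) from the 🤗 model hub with the Transformers library. The tweet does not name the exact hub identifier, so the model ID below (dbmdz/bert-base-italian-xxl-cased) is an assumption:

    from transformers import AutoModel, AutoTokenizer

    # Hypothetical model ID -- the tweet does not name the exact hub identifier.
    model_id = "dbmdz/bert-base-italian-xxl-cased"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)

    # Encode an Italian sentence and run it through the encoder.
    inputs = tokenizer("Buongiorno a tutti!", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)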

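Second, a minimal sketch of the 🤗 Tokenizers library mentioned in tweet 17, using its BERT-style WordPiece tokenizer; vocab.txt is a placeholder path, not a file named in the tweet:

    from tokenizers import BertWordPieceTokenizer

    # vocab.txt is a placeholder -- any BERT-style WordPiece vocabulary file works.
    tokenizer = BertWordPieceTokenizer("vocab.txt", lowercase=False)

    # Strings -> model inputs, as the tweet describes.
    encoding = tokenizer.encode("Light attracts bugs.")
    print(encoding.tokens)  # WordPiece tokens
    print(encoding.ids)     # corresponding vocabulary ids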
