Anthony MOI

@moi_anthony

Tech @ Hugging Face 🤗 - Loves NLP - Previously , (acquired by Stripe)

Joined January 2012

Tweets


  1. Retweeted a Tweet
    15 hours ago

    Guest blog post from angel investor on sparse tensors in neural nets. I am VERY excited by what's going to happen in sparsity-land this year 🔥

  2. Retweeted a Tweet
    Jan 14

    This is a game changer to me. Every NLP researcher or practitioner understands the pain to tokenize and process texts differently for different modern NLP architectures. Huggingface now offers a streamlined process to integrate tokenizers and processors with their amazing models!

  3. Retweeted a Tweet
    Jan 13

    🔥 Introducing Tokenizers: ultra-fast, extensible tokenization for state-of-the-art NLP 🔥 ➡️

  4. Retweeted a Tweet
    Jan 10

    Now that neural nets have fast implementations, a bottleneck in pipelines is tokenization: strings➡️model inputs. Welcome 🤗Tokenizers: ultra-fast & versatile tokenization led by : -encode 1GB in 20sec -BPE/byte-level-BPE/WordPiece/SentencePiece... -python/js/rust...

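    A minimal sketch of what the library above looks like from Python, assuming the `tokenizers` package and an illustrative corpus.txt file; paths and hyperparameters are placeholders, not the exact setup from the thread:

        # Byte-level BPE is one of the algorithms listed in the tweet above
        from tokenizers import ByteLevelBPETokenizer

        # Train a byte-level BPE vocabulary on a plain-text corpus (path is illustrative)
        tokenizer = ByteLevelBPETokenizer()
        tokenizer.train(files=["corpus.txt"], vocab_size=30_000, min_frequency=2)

        # Encode a string into model-ready tokens and ids
        encoding = tokenizer.encode("Tokenization used to be the bottleneck.")
        print(encoding.tokens)
        print(encoding.ids)
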
  5. Retweeted a Tweet
    Jan 3
  6. Retweeted a Tweet
    Dec 26, 2019

    working on a karaoke program that replaces lyrics with matching lines from any source text here's journey + the x-files

  7. Retweeted a Tweet
    Dec 27, 2019

    If you're using Transformers from source, we've rolled out 2 nice beta features (TBR in January) 💥Ultra-fast Bert/GPT2 tokenizers (up to 80x faster) 🦄Easy/versatile sequence generation for generative models: top-k/nucleus/temperature sampling, penalized/greedy, beam search...

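    A minimal sketch of the sampling options named above, assuming a GPT-2 checkpoint and the `generate` API; exact argument names may differ between releases:

        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")

        input_ids = tokenizer.encode("The tokenizers were slow until", return_tensors="pt")

        # Top-k, nucleus (top-p) and temperature sampling in a single generate() call
        output = model.generate(
            input_ids,
            max_length=50,
            do_sample=True,
            top_k=50,
            top_p=0.95,
            temperature=0.8,
        )
        print(tokenizer.decode(output[0], skip_special_tokens=True))
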
  8. Retweeted a Tweet
    Dec 20, 2019

    We spend our time finetuning models on tasks like text classif, NER or question answering. Yet 🤗Transformers had no simple way to let users try these fine-tuned models. Release 2.3.0 brings Pipelines: thin wrappers around tokenizer + model to ingest/output human-readable data.

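    A minimal sketch of the Pipelines wrapper described above, assuming transformers 2.3.0 or later; the default checkpoint pulled for the task is whatever the library selects:

        from transformers import pipeline

        # Thin wrapper around tokenizer + fine-tuned model: human-readable in, human-readable out
        qa = pipeline("question-answering")
        result = qa(
            question="What do pipelines wrap?",
            context="Pipelines are thin wrappers around a tokenizer and a fine-tuned model.",
        )
        print(result)  # dict with 'answer', 'score', 'start', 'end'
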
  9. Retweeted a Tweet
    Dec 18, 2019

    🔥🔥 Series A!! 🔥🔥 Solving Natural language is going to be the biggest achievement of our lifetime, and is the best proxy for Artificial intelligence. Not one company, even the Tech Titans, will be able to do it by itself – the only way we'll achieve this is working together

  10. Retweeted a Tweet

    Hugging Face raises $15 million to build the definitive natural language processing library by

  11. Retweeted a Tweet
    Dec 16, 2019

    🔥 New in v2.2.2: you can now upload and share your models with the community directly from the library, using our CLI 🔥 1. Join here: 2. Use the CLI to upload: 3. Model is accessible to anyone using the `username/model_name` id🎉

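    A minimal sketch of the sharing flow above; the CLI command names are the ones from that release (later superseded) and `username/model_name` is a placeholder id:

        # Steps 1-2 happen in a shell:  transformers-cli login  then  transformers-cli upload
        # Step 3: anyone can load the shared weights directly by id.
        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("username/model_name")  # placeholder id
        model = AutoModel.from_pretrained("username/model_name")
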
  12. Retweeted a Tweet
    Dec 12, 2019

    おはようござえます、日本の友達 Hello, Friends from Japan 🇯🇵! Thanks to , we now have a state-of-the-art Japanese language model in Transformers, `bert-base-japanese`. Can you guess what the model outputs in the masked LM task below?

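    A minimal sketch of the masked-LM demo mentioned above. The fill-mask pipeline shown here landed in transformers slightly after this tweet, and the checkpoint id may be namespaced differently today, so treat it as an approximation:

        from transformers import pipeline

        # The Japanese checkpoint also needs the MeCab tokenizer dependencies installed
        fill_mask = pipeline("fill-mask", model="bert-base-japanese")
        print(fill_mask("おはようございます、日本の[MASK]!"))
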
  13. Retweeted a Tweet
    Dec 11, 2019
  14. Retweeted a Tweet
    Dec 11, 2019

    Distillation ✕ Quantization = 🚀 We're releasing 2 quantized versions of DistilBERT finetuned on SQuAD using Lite, resulting in model sizes of 131MB and 64MB. It's respectively 2x and 4x less than the non-quantized version! 🗜️🗜️🗜️🗜️ [1/4]

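    A minimal sketch of post-training quantization in the spirit of the thread above, assuming a TensorFlow SavedModel export of the fine-tuned DistilBERT; paths and options are illustrative, not the exact recipe from the thread:

        import tensorflow as tf

        # Convert a SavedModel export to TFLite with dynamic-range quantization
        converter = tf.lite.TFLiteConverter.from_saved_model("distilbert_squad_saved_model")
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        tflite_model = converter.convert()

        with open("distilbert_squad_quantized.tflite", "wb") as f:
            f.write(tflite_model)
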
  15. Retweeted a Tweet

    (abstractive) summarization of your documents has never been easier than with 's transformers: [credits go to Yang Liu and Mirella Lapata: and paper linked]

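    The link and credits in the tweet above are stripped in this capture. As a heavily hedged sketch, later transformers releases expose summarization as a pipeline; this is not necessarily the BertSum-based recipe the tweet points to:

        from transformers import pipeline

        summarizer = pipeline("summarization")
        print(summarizer("Replace this with the long document to summarize ...",
                         max_length=60, min_length=10))
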
  16. Dec 9, 2019

    I heard some of you found the tokenizers too slow. I think you are going to love what we are cooking for you

  17. Retweeted a Tweet
    Dec 5, 2019

    How do you say "faster!" in 104 languages? Ask 🤗Transformers! Please welcome **Distil-mBERT**, 104 languages, 92% of mBERT’s performance on XNLI, 25% smaller, and twice as fast.🔥 > Come talk to me about it at ! 🇨🇦🤗

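    A minimal sketch of loading the multilingual distillation announced above, assuming the public checkpoint id `distilbert-base-multilingual-cased`:

        from transformers import DistilBertModel, DistilBertTokenizer

        tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
        model = DistilBertModel.from_pretrained("distilbert-base-multilingual-cased")

        inputs = tokenizer.encode("Schneller! Plus vite !", return_tensors="pt")
        hidden_states = model(inputs)[0]  # last-layer hidden states, one vector per token
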
  18. Retweeted a Tweet

    Encoder 🦄🤝🦄 decoders are now part of the 🤗 transformers library! I wrote a tutorial to explain how we got there and how to use them 👉 Bonus: a sneak peek into upcoming features ✨

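    A minimal sketch of pairing an encoder with a decoder. The tweet-era API was Model2Model; the class below is the later EncoderDecoderModel, so treat this as an approximation rather than the tutorial's exact code:

        from transformers import BertTokenizer, EncoderDecoderModel

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        # Initialise a seq2seq model from two pretrained checkpoints (encoder, decoder)
        model = EncoderDecoderModel.from_encoder_decoder_pretrained(
            "bert-base-uncased", "bert-base-uncased"
        )
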
  19. Retweeted a Tweet
    Nov 26, 2019

    Transformers v2.2 is out, with *4* new models and seq2seq capabilities! ALBERT is released alongside CamemBERT, implemented by the authors, DistilRoBERTa (twice as fast as RoBERTa-base!) and GPT-2 XL! Encoder-decoder with ⭐Model2Model⭐ Available on

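    A minimal sketch of loading one of the v2.2 additions listed above via the Auto classes; `camembert-base` is the public checkpoint id, and the same pattern applies to ALBERT, DistilRoBERTa and GPT-2 XL:

        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("camembert-base")
        model = AutoModel.from_pretrained("camembert-base")

        outputs = model(tokenizer.encode("Le camembert est délicieux.", return_tensors="pt"))
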
  20. Retweeted a Tweet

    Tired: Machines playing video games & Go Wired: Surrealist Transformers (playing exquisite corpse)

