Tweets by @AT_Amir

  1. Retweeted
    Jan 24

    Do you have data for direct speech translation and want to update your model on the fly? Check out our paper "Instance-Based Model Adaptation for Direct Speech Translation", accepted to

  2. Jan 21

    The efficient Transformer aka "Reformer" can handle 1M words on a single GPU (16 GB), useful for research groups with limited computational resources.

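The Reformer's memory savings come largely from locality-sensitive-hashing (LSH) attention: tokens are hashed into buckets via random projections, and each token attends only within its bucket instead of over all O(n²) pairs. A minimal pure-Python sketch of the bucketing step (the hyperplane count and dimensions here are illustrative, not the paper's settings):

```python
import random

def lsh_bucket(vec, hyperplanes):
    """Hash a vector to a bucket id via the signs of its random projections."""
    bits = 0
    for plane in hyperplanes:
        dot = sum(v * p for v, p in zip(vec, plane))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

def group_by_bucket(vectors, n_planes=4, dim=8, seed=0):
    """Assign each vector to a bucket; attention is then restricted to
    pairs sharing a bucket, rather than the full O(n^2) pair set."""
    rng = random.Random(seed)
    hyperplanes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    buckets = {}
    for i, vec in enumerate(vectors):
        buckets.setdefault(lsh_bucket(vec, hyperplanes), []).append(i)
    return buckets
```

Similar vectors tend to land in the same bucket, so per-token attention cost scales with bucket size rather than sequence length, which is what makes very long inputs feasible on one GPU.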
  3. Retweeted
    Jan 9

    ⚡️TensorFlow 2.1.0 is out NOW!⚡️ This will be the last TF release supporting Python 2. Please see the full release notes for details on added features and changes! Learn more here ↓

  4. Retweeted
    Jan 8

    Thrilled to announce the 2020 "offline speech translation" challenge. Cascaded and end-to-end models will be evaluated on the translation of TED talks from English into German. Info:

  5. Retweeted
    Dec 16, 2019
  6. Nov 28, 2019

    T5 by Google explores the field of transfer learning in NLP. A very good systematic study of how to pretrain and transfer Transformer models for downstream tasks: cc

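The core idea behind T5 is its text-to-text framing: every task, from translation to summarization, becomes mapping an input string with a task prefix to an output string, so a single model and loss cover them all. A small sketch, with the prefixes taken from the T5 paper and the dict keys as illustrative field names:

```python
def to_text_to_text(task, example):
    """Cast a task example into T5's text-to-text format:
    a prefixed input string mapped to an output string."""
    if task == "translation":
        src = f"translate English to German: {example['en']}"
        tgt = example["de"]
    elif task == "summarization":
        src = f"summarize: {example['document']}"
        tgt = example["summary"]
    else:
        raise ValueError(f"unknown task: {task}")
    return src, tgt
```

Because every task shares one input/output interface, pretraining and downstream fine-tuning can reuse the exact same model, objective, and decoding procedure.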
  7. Retweeted
    Nov 26, 2019

    Last day for submitting your Expression of Interest! MT Journal Special Issue on Machine Translation for Low-Resource Languages

  8. Retweeted

    Goal: train an efficient universal model that can translate between any languages. Progress: 👇 work sets a new milestone towards building a single model. "Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges" 👇

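A single model translating between many language pairs typically conditions on the target language by prepending a token to the source, and balances corpora with temperature-based sampling so low-resource pairs are not drowned out. A hedged sketch of both tricks (the `<2xx>` token format and the temperature value are illustrative conventions, not specifics from this tweet):

```python
def add_target_token(source_sentence, target_lang):
    """Prepend a target-language token so one shared model knows
    which language to translate into."""
    return f"<2{target_lang}> {source_sentence}"

def sampling_weights(corpus_sizes, temperature=5.0):
    """Temperature-based sampling over language pairs: raising each
    corpus size to 1/T flattens the distribution, so low-resource
    pairs are sampled more often than their raw share."""
    scaled = {pair: size ** (1.0 / temperature) for pair, size in corpus_sizes.items()}
    total = sum(scaled.values())
    return {pair: s / total for pair, s in scaled.items()}
```

With the target token in the input, no architecture change is needed; the encoder simply learns to route on that token, which is what makes "one model for all pairs" practical.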
  9. Retweeted
    Nov 14, 2019

    Our PhD student presenting her study on resources for subtitling-oriented neural machine translation "Are Subtitling Corpora really Subtitle-like?"

  10. Retweeted
    Nov 11, 2019

    The Noisy Channel has always been one of my favourite ways of thinking about (human and machine) translation, because of its elegance and interpretability. Here's a promising approach for Noisy Channel-based :

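The noisy-channel view inverts the translation direction: instead of modelling p(y|x) directly, pick the target y maximizing p(x|y)·p(y), a channel (reverse translation) model times a language model. A toy reranking sketch (the scoring callables are placeholders you would back with real models):

```python
def noisy_channel_rerank(candidates, channel_logprob, lm_logprob, lam=1.0):
    """Pick the candidate translation y with the best noisy-channel
    score log p(x|y) + lam * log p(y); lam balances the two models."""
    return max(candidates, key=lambda y: channel_logprob(y) + lam * lm_logprob(y))
```

In practice a direct model proposes the candidate list and the channel score reranks it, so fluency (language model) and adequacy (channel model) are weighted explicitly.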
  11. Retweeted
    Oct 25, 2019

    "Robust Neural Machine Translation for Clean and Noisy Speech Transcripts", done during my internship has been accepted to IWSLT and available @ . Is fine-tuning MT systems on ASR output sufficient for speech translation?

  12. Retweeted
    Oct 24, 2019

    Proud and happy to present our work on MT for machines at JRC Ispra @eusciencehub

  13. Retweeted
    Oct 24, 2019

    "Instance-Based Model Adaptation For Direct Speech Translation" proposing a framework based on audio similarity that can retrieve segments pairs similar to the one to translate. Submitted to ICASSP, pre-published on .

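The adaptation idea sketched in the tweet: before translating an utterance, retrieve the stored (audio, translation) pairs whose audio embeddings are closest to it, then update the model on just those. A minimal sketch of the retrieval step only, using cosine similarity over illustrative embedding vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def retrieve_similar(query_emb, memory, k=2):
    """Return the k (embedding, translation) pairs whose audio embeddings
    are most similar to the query; these pairs would then be used to
    adapt the model on the fly before translating the query."""
    ranked = sorted(memory, key=lambda pair: cosine(query_emb, pair[0]), reverse=True)
    return ranked[:k]
```

The retrieved pairs act as an instance-specific fine-tuning set, so the model specializes to each input without permanently changing its weights.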
  14. Retweeted
    Oct 23, 2019

    I'm happy to announce that our paper "One-to-Many Multilingual End-to-End Speech Translation", accepted at , is now available on and . Get up to 2.5 BLEU improvement on MuST-C En-Pt.

  15. Retweeted
    Oct 23, 2019

    Are you working on for ? Then this CFP might be for you. Machine Translation Journal Special Issue on Machine Translation for Low-Resource Languages Send us your EOI by Nov. 26th!

  16. Oct 18, 2019

    Our paper "Machine Translation for Machines: the Sentiment Classification Use Case" accepted at is now available on arxiv:

  17. Retweeted
    Oct 13, 2019

    Congratulations , , and team on winning Best Poster at the CNI workshop!!

  18. Retweeted
    Oct 1, 2019

    My piece to celebrate International Translation Day! (Re-)discovering Translationese in Machine Translation Research

  19. Retweeted
    Sep 26, 2019

    Want to train a Transformer for speech but it doesn't fit on the GPU? by and shows that you can stochastically skip some layers and save huge computation and memory resources while improving results, but huge models are needed.

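The layer-skipping idea above (stochastic depth / LayerDrop): during training, each layer in the stack is dropped with some probability, so on average only a fraction of the layers run per step, saving compute and activation memory while acting as a regularizer. A hedged sketch with the layers represented as plain callables:

```python
import random

def forward_with_layerdrop(x, layers, p_skip, rng, training=True):
    """Apply a stack of layers, randomly skipping each one with
    probability p_skip during training. A skipped layer is replaced
    by the identity, so it costs no compute and no activation memory.
    At inference (training=False) all layers run as usual."""
    for layer in layers:
        if training and rng.random() < p_skip:
            continue  # identity: skip this layer entirely
        x = layer(x)
    return x
```

Because every layer must cope with its predecessors being absent, the trained network also tolerates pruning layers at inference time, which is part of the appeal noted in the tweet.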
