Search results
  1. Dec 18, 2019

    End-to-end speech recognition is a hot topic, and it is now being deployed at the edge for dictation; the cloud should be next.

  2. Dec 16, 2019

    Interesting keynote talk by Samy Bengio on the generalization of deep networks.

  3. Dec 12, 2019

    One day before, we are hosting the Winter Seminar Series at the National University of Singapore. Our first speaker today is the General Co-chair of ASRU 2019, Prof. Eric Fosler-Lussier from Ohio State University, United States! It is 27 degrees (aka winter) here! 🏝

  4. Dec 14, 2019

    Cross-lingual VC with GAN by Berrak Sisman. It’s awesome that I could possibly speak grammatically correct English while my bad accent still remains. I wonder whether we can disentangle vocal timbre and accent, and keep the native accent of the source speaker too?

  5. Dec 18, 2019

    Glad to see Transformer and FastSpeech advancing the state of the art. But TTS is not solved yet. Keep pushing 💪

  6. Dec 19, 2019

    Very happy and honored that our paper "MIMO-Speech: End-to-end multi-channel multi-speaker speech recognition" won the best paper award. This was the first outcome of an exciting collaboration with Xuankai Chang + Shinji Watanabe (JHU) & Wangyou Zhang + Yanmin Qian (SJTU).

  7. Dec 13, 2019

    The poster of our work on using BERT for Chinese polyphone disambiguation, an important feature. The same approach can be applied to other languages such as Japanese.

  8. Mar 12, 2019

    We sincerely thank our Platinum sponsors: Huawei, ByteDance, and Datatang; our Gold sponsor: Sogou, Inc.; and our Silver sponsor: Tongdun Technology. Welcome aboard!

  9. Dec 16, 2019

    Speech-to-speech translation by Andros Tjandra. No transcription needed: train a VQ-VAE on the target language (A), map the source speech to the target latent space (B), and decode the translated speech from the inferred latents (C). Would love to see more similar ideas applied to music.
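    The key piece that makes stage (A)'s latent space discrete is the nearest-codebook quantization step of a VQ-VAE. A minimal NumPy sketch of that step — with illustrative shapes and names, not the actual implementation from the paper:

    ```python
    import numpy as np

    def quantize(latents, codebook):
        """Snap each latent frame to its nearest codebook entry.

        latents:  (T, D) array, one D-dim encoder output per frame
        codebook: (K, D) array of learned discrete codes
        returns:  (codes, quantized) — (T,) integer code ids and
                  (T, D) snapped latent vectors
        """
        # Pairwise Euclidean distances between frames and codes: (T, K)
        dists = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
        codes = dists.argmin(axis=1)       # nearest code id per frame
        return codes, codebook[codes]      # discrete ids + snapped latents

    # Toy usage: 3 frames, 2-dim latents, 2-entry codebook
    codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
    latents = np.array([[0.1, -0.1], [0.9, 1.2], [0.0, 0.2]])
    codes, q = quantize(latents, codebook)
    print(codes.tolist())  # → [0, 1, 0]
    ```

    In the pipeline described above, the codebook is learned on target-language speech, source speech is mapped into this same latent space, and a decoder trained on the target language turns the inferred code sequence back into audio.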

  10. Microsoft researchers have created a method that combines audio and video to better identify who is speaking and transcribe speech even when two people are talking at the same time. Learn about new audio-visual meeting transcription advancements:

  11. Dec 16, 2019

    Chao-Wei will present his ASRU paper "ADAPTING PRETRAINED TRANSFORMER TO LATTICES FOR SPOKEN LANGUAGE UNDERSTANDING" at 10:30-12:00 today. If you are attending ASRU, please come to the session.

  12. Dec 17, 2019

    Yu-An will present his paper at 10:30-12:00 today. In his undergrad project, we tried to find models and exploration strategies that are suitable for, and generalize to, different dialog scenarios. Come to the session and check out the paper:

  13. Dec 9, 2019

    Thanks, Larry, for writing this. A blog post on our paper on using semi-supervised learning to improve Alexa's Natural Language Understanding. Paper link

  14. Dec 16, 2019

    Our PhD student is very busy at ASRU 2019 presenting his research on One-to-Many Multilingual End-to-End Speech Translation.

  15. Dec 15, 2019