-
#ASRU2019 End-to-end speech recognition is a hot topic. It is now being deployed on edge devices for dictation; the cloud should be next. pic.twitter.com/eFwYRtcqSP
-
Interesting keynote talk by Samy Bengio on the generalization of deep networks.
#asru2019 @asru2019 pic.twitter.com/Q5vw8sMPVW
-
One day before
#asru2019, we are hosting Winter Seminar Series at National University of Singapore. Our first speaker today is the General Co-chair of ASRU 2019, Prof. Eric Fosler-Lussier from Ohio State University, United States! It is 27 degrees (aka winter) here!
@asru2019 pic.twitter.com/cqL4900yQD
-
Cross-lingual VC with GAN by Berrak Sisman. It’s awesome that I could possibly speak grammatically correct English while my bad accent still remains. I wonder whether we can disentangle vocal timbre and accent, and preserve the native accent of the source speaker too?
#ASRU2019 pic.twitter.com/dzJaICnxM3
-
#ASRU2019 Glad to see Transformer and FastSpeech advancing the state of the art in #NeuralTTS. But TTS is not solved yet. Keep pushing!
pic.twitter.com/FZYD57y1MF
-
Very happy and honored that our paper "MIMO-Speech: End-to-end multi-channel multi-speaker speech recognition" received the best paper award at
#ASRU2019. This was the 1st outcome of an exciting collaboration with Xuankai Chang + Shinji Watanabe (JHU) & Wangyou Zhang + Yanmin Qian (SJTU). https://twitter.com/merl_news/status/1207594606352834566 …
-
#ASRU2019 The poster of our work on using BERT for Chinese polyphone disambiguation, an important feature for #NeuralTTS. The same approach can be applied to other languages such as Japanese. pic.twitter.com/lgBgRQ9vkM
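The task above can be framed as classification over a polyphonic character's candidate readings, conditioned on sentence context. The toy scorer below is a hypothetical stand-in for the BERT encoder (the cue lists and pinyin labels are invented for illustration); it only shows the shape of the problem, not the paper's method:

```python
# Hypothetical sketch of polyphone disambiguation as classification.
# In the paper, a pretrained BERT encodes the sentence; here a toy
# keyword-cue score stands in for the contextual encoder.

CANDIDATES = {"行": ["xing2", "hang2"]}  # 行: "walk/travel" vs "row/bank"
# Invented context cues for each reading (not real training data).
CUES = {"xing2": ["旅", "进"], "hang2": ["银", "一"]}

def disambiguate(sentence: str, char: str) -> str:
    """Pick the reading whose context cues appear most often in the sentence."""
    scores = {p: sum(c in sentence for c in CUES[p]) for p in CANDIDATES[char]}
    return max(scores, key=scores.get)

print(disambiguate("他在银行工作", "行"))  # -> hang2 ("bank" context)
print(disambiguate("我们去旅行", "行"))    # -> xing2 ("travel" context)
```

A real system would replace the cue counts with a softmax over candidate readings computed from BERT's contextual embedding of the target character.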
-
We sincerely thank our Platinum sponsors Huawei, ByteDance, and Datatang; our Gold sponsor Sogou, Inc.; and our Silver sponsor Tongdun Technology. Welcome aboard!
#asru2019
-
Speech-to-speech translation by Andros Tjandra. No transcription needed: train a VQ-VAE on the target language (A), map the source speech into the target latent space (B), and decode the translated speech from the inferred latents (C).
#asru2019 Would love to see more similar ideas on music. pic.twitter.com/LitHKz01kA
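The core of step (A) is the quantization that makes the latent discrete. Here is a minimal, hypothetical NumPy illustration (random codebook, no training, values not from the paper) of how encoder frames are mapped to their nearest codebook indices, i.e. the discrete latents that a target-language decoder would consume:

```python
import numpy as np

# Toy stand-in for the VQ-VAE quantization step: each "encoder output"
# frame is snapped to the nearest entry of a learned codebook, yielding
# discrete latent codes. Codebook values here are random, for illustration.

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))  # 8 discrete codes, each 4-dimensional

def quantize(z: np.ndarray) -> np.ndarray:
    """Map each frame z[t] to the index of its nearest codebook entry."""
    # (T, K) matrix of squared distances between frames and codebook entries
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Frames constructed near codes 3, 1, 3, so quantization recovers them.
frames = codebook[[3, 1, 3]] + 0.01 * rng.normal(size=(3, 4))
print(quantize(frames))  # -> [3 1 3]
```

In the full pipeline, step (B) would infer such index sequences from source-language speech, and step (C) would run the trained decoder over them to synthesize target-language audio.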
-
Microsoft researchers have created a method that combines audio and video to better identify who is speaking and transcribe speech even when two people are talking at the same time. Learn about new audio-visual meeting transcription advancements: https://aka.ms/AA6twf4
#ASRU2019
-
Chao-Wei will present his ASRU paper "ADAPTING PRETRAINED TRANSFORMER TO LATTICES FOR SPOKEN LANGUAGE UNDERSTANDING" at 10:30-12:00 today. If you are attending ASRU, please come to the session.
#ASRU2019 #Singapore #MiuLab
-
Yu-An will present his paper at 10:30-12:00 today. In this undergraduate project, we try to find models and exploration strategies that suit, and generalize to, different dialogue scenarios. Come to the session and check out the paper: https://reurl.cc/M7gppk
#Singapore #ASRU2019 pic.twitter.com/hbzv23N2Ro
-
Thanks Larry for writing this. A blog post on our
#asru2019 paper on using semi-supervised learning to improve Alexa's natural language understanding. Paper link: https://arxiv.org/abs/1910.04196 https://twitter.com/AmazonSciWriter/status/1204042738477518851 …
-
Our PhD student
@mdigangiPA is very busy at ASRU 2019 presenting his research on One-To-Many Multilingual End-to-end Speech Translation https://arxiv.org/pdf/1910.03320.pdf … @negri_teo @Turchi_Marco @asru2019 #nlproc #machinetranslation #speechtranslation #asru2019 #ieeeasru2019 pic.twitter.com/DMjQ8O7f4E
-