Tweets

  1. Jan 28

    Interesting tutorial by on Hidden Markov Models using PyTorch!

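As background for the tutorial mentioned in the tweet above, here is a minimal, illustrative sketch of the HMM forward algorithm written with PyTorch tensors. It is not taken from the tutorial itself; the model sizes and probabilities are toy values.

```python
# Minimal, illustrative sketch (not from the tutorial mentioned above):
# the HMM forward algorithm in log space with PyTorch tensors.
import torch

def hmm_log_likelihood(log_pi, log_A, log_B, observations):
    """log p(observations) for a discrete-output HMM.

    log_pi: (S,)    log initial state probabilities
    log_A:  (S, S)  log transition probs, log_A[i, j] = log p(s_t = j | s_{t-1} = i)
    log_B:  (S, V)  log emission probs over a vocabulary of V symbols
    observations: LongTensor of symbol indices, shape (T,)
    """
    # Initialisation with the first observation
    alpha = log_pi + log_B[:, observations[0]]
    # Recursion: alpha_t(j) = logsumexp_i(alpha_{t-1}(i) + log_A[i, j]) + log_B[j, o_t]
    for t in range(1, observations.shape[0]):
        alpha = torch.logsumexp(alpha.unsqueeze(1) + log_A, dim=0) + log_B[:, observations[t]]
    # Termination: marginalise over the final state
    return torch.logsumexp(alpha, dim=0)

# Toy usage: 2 states, 3 output symbols
log_pi = torch.log(torch.tensor([0.6, 0.4]))
log_A = torch.log(torch.tensor([[0.7, 0.3], [0.4, 0.6]]))
log_B = torch.log(torch.tensor([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]))
obs = torch.tensor([0, 1, 2, 1])
print(hmm_log_likelihood(log_pi, log_A, log_B, obs))
```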
  2. Jan 28

    I'm a big fan of self-supervised deep learning. I think it will be widely adopted in the future in many fields, including speech. Maybe this will contribute to "make pre-training great again"! hahaha

  3. Jan 28

    PASE+ is an improved version of PASE for robust speech recognition. We employ an online speech distortion module, a revised encoder that better learns short- and long-term speech dynamics, and a refined set of workers that encourage better cooperation.

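The PASE+ tweet above describes a shared encoder trained jointly with a set of "workers" on self-supervised tasks. The sketch below illustrates that multi-task pattern only; the layer sizes and the two regression workers (MFCC and prosody) are assumptions for illustration, not the actual PASE+ configuration.

```python
# Sketch of the "shared encoder + workers" multi-task pattern described in the
# tweet. Layer sizes and the two regression workers (MFCC, prosody) are
# illustrative assumptions, not the actual PASE+ configuration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a raw waveform (B, 1, T) to a sequence of embeddings (B, C, T')."""
    def __init__(self, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(64, emb_dim, kernel_size=8, stride=4), nn.ReLU(),
        )
    def forward(self, wav):
        return self.net(wav)

class Worker(nn.Module):
    """Small head that regresses a self-supervised target from the embeddings."""
    def __init__(self, emb_dim, target_dim):
        super().__init__()
        self.head = nn.Conv1d(emb_dim, target_dim, kernel_size=1)
    def forward(self, emb):
        return self.head(emb)

encoder = Encoder()
workers = nn.ModuleDict({
    "mfcc": Worker(256, 20),      # e.g. regress 20 MFCC coefficients per frame
    "prosody": Worker(256, 4),    # e.g. regress 4 prosodic features per frame
})
opt = torch.optim.Adam(list(encoder.parameters()) + list(workers.parameters()), lr=1e-3)

def training_step(distorted_wav, targets):
    """targets: dict of per-frame targets, each (B, dim, T') matching the encoder output."""
    emb = encoder(distorted_wav)   # distorted input: online distortion is assumed to happen upstream
    loss = sum(nn.functional.l1_loss(workers[name](emb), tgt) for name, tgt in targets.items())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```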
  4. Jan 28

    The paper is co-authored by Jianyuan Zhong, , Pawel Swietojanski, Joao Monteiro, Jan Trmal, Yoshua Bengio. The work was started at JSALT 2019 (organized by )

  5. Jan 28

    I'm happy to announce our latest work on self-supervised learning for . PASE+ is based on a multi-task approach useful for recognition. It will be presented at . paper: code: @Mila

  6. Retweeted
    Jan 24

    Accepted to ICASSP 2020! Check out our paper if you're building voice interfaces and want to avoid obtaining lots of data.

  7. Dec 24, 2019

    I was quite disappointed by the superficial understanding of deep learning by . He mainly thinks that deep learning is just an MLP and that search is intelligence (search is brute force, not intelligence). Thanks for organizing this amazing debate!

  8. Dec 24, 2019

    Yesterday I saw the . My feeling is that Yoshua is looking forward, trying to extend with higher-level capabilities (e.g., causality), while is looking backward, trying to combine it with the same techniques that didn't work so well in the past!

  9. Dec 13, 2019
  10. Dec 13, 2019

    Interesting work by on a Deep Complex Extractor. It will be presented at the Deep Inverse Problems workshop of . The approach can be applied to different tasks, including speech separation.

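As general background for the complex-domain extraction mentioned above, here is a tiny sketch of complex ratio masking for speech separation/extraction in PyTorch. It is purely illustrative and is not the model from the referenced paper; the STFT settings and network sizes are arbitrary.

```python
# Background sketch only: a small complex-ratio-mask extractor, illustrating the
# general idea of complex-valued masking for speech separation/extraction.
# It is not the "Deep Complex Extractor" from the paper referenced above.
import torch
import torch.nn as nn

class ComplexMaskNet(nn.Module):
    """Predicts a complex mask (real, imag) per time-frequency bin of the mixture."""
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(input_size=2 * n_freq, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2 * n_freq)
        self.n_freq = n_freq

    def forward(self, mix_stft):
        # mix_stft: complex tensor of shape (B, F, T)
        feats = torch.cat([mix_stft.real, mix_stft.imag], dim=1).transpose(1, 2)  # (B, T, 2F)
        h, _ = self.rnn(feats)
        m = self.out(h).transpose(1, 2)                                           # (B, 2F, T)
        mask = torch.complex(m[:, :self.n_freq], m[:, self.n_freq:])
        return mask * mix_stft                                                    # masked (extracted) STFT

# Usage: STFT of a 1-second, 16 kHz mixture (random noise here)
mix = torch.randn(1, 16000)
stft = torch.stft(mix, n_fft=512, hop_length=128, return_complex=True)  # (1, 257, T)
est = ComplexMaskNet()(stft)
wav = torch.istft(est, n_fft=512, hop_length=128)
```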
  11. Oct 22, 2019

    This might sound weird today, but this approach is extremely flexible and some works are already going in this direction. What do you think?

  12. Oct 22, 2019

    The training process of future end-to-end ASR or SLU systems could be based on a different pipeline: instead of using data to train the targeted model directly, we will train a text-to-speech system first and then infer the final model with in-domain data drawn from it.

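The pipeline sketched in the tweet above (training the final model on in-domain data drawn from a text-to-speech system) could look roughly like the following. Everything here is a hypothetical illustration: `synthesize` is a placeholder standing in for any real TTS model, and `TinySLU` and the (text, intent) pairs are invented for the example.

```python
# Sketch of the "TTS in the loop" pipeline described above: generate in-domain
# speech from labelled text, then train the end-to-end SLU model on the
# synthetic audio. `synthesize` is a placeholder for any TTS system; here it
# just emits noise so the sketch stays self-contained and runnable.
import torch
import torch.nn as nn

def synthesize(text: str, sample_rate: int = 16000) -> torch.Tensor:
    """Placeholder TTS: a real pipeline would call an actual TTS model here."""
    duration_s = max(1, len(text) // 10)
    return torch.randn(sample_rate * duration_s)

# Hypothetical in-domain (text, intent) pairs for the target application
in_domain = [
    ("turn on the kitchen lights", 0),
    ("set an alarm for seven am", 1),
    ("play some jazz music", 2),
]

class TinySLU(nn.Module):
    """Maps a waveform directly to an intent class (end-to-end SLU)."""
    def __init__(self, n_intents: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(1, 32, 400, stride=160), nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.cls = nn.Linear(32, n_intents)
    def forward(self, wav):                      # wav: (B, T)
        return self.cls(self.enc(wav.unsqueeze(1)))

model = TinySLU(n_intents=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):
    for text, intent in in_domain:
        wav = synthesize(text).unsqueeze(0)      # synthetic in-domain audio
        loss = nn.functional.cross_entropy(model(wav), torch.tensor([intent]))
        opt.zero_grad(); loss.backward(); opt.step()
```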
  13. Oct 22, 2019

    Let me share this work by on end-to-end spoken language understanding with a text-to-speech system in the loop. It's a simple idea, but it works well and can drastically reduce the cost of building a new system.

  14. Oct 4, 2019

    I just did a short video on Local Info Max (LIM), a technique that I recently presented at . LIM learns speaker embeddings with mutual information in a self-supervised way.

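For context on the LIM tweet above, here is a minimal sketch of the general idea: maximise mutual information between embeddings of two chunks drawn from the same recording (positive pair) versus chunks from different recordings (negative pair), with a discriminator trained via binary cross-entropy. The architecture sizes and sampling scheme are illustrative assumptions, not the exact method presented in the video.

```python
# Minimal sketch of the idea behind LIM as described in the tweet: a
# discriminator trained with binary cross-entropy to tell apart embeddings of
# chunks from the same recording vs. different recordings, which pushes the
# encoder to capture speaker identity. Sizes and sampling are assumptions.
import torch
import torch.nn as nn

class ChunkEncoder(nn.Module):
    """Encodes a raw-waveform chunk (B, T) into a fixed-size speaker embedding."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 64, 251, stride=10), nn.ReLU(),
                                  nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.proj = nn.Linear(64, emb_dim)
    def forward(self, chunk):
        return self.proj(self.conv(chunk.unsqueeze(1)))

class Discriminator(nn.Module):
    """Scores whether two embeddings come from the same recording."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * emb_dim, 256), nn.ReLU(), nn.Linear(256, 1))
    def forward(self, e1, e2):
        return self.net(torch.cat([e1, e2], dim=-1)).squeeze(-1)

enc, disc = ChunkEncoder(), Discriminator()
opt = torch.optim.Adam(list(enc.parameters()) + list(disc.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def lim_step(anchor, positive, negative):
    """anchor/positive: chunks from the same recording; negative: from another one."""
    e_a, e_p, e_n = enc(anchor), enc(positive), enc(negative)
    pos_score = disc(e_a, e_p)   # should be classified as "same recording"
    neg_score = disc(e_a, e_n)   # should be classified as "different recording"
    loss = bce(pos_score, torch.ones_like(pos_score)) + bce(neg_score, torch.zeros_like(neg_score))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random 1-second chunks at 16 kHz
a, p, n = (torch.randn(4, 16000) for _ in range(3))
print(lim_step(a, p, n))
```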
  15. Oct 1, 2019

    We are reviewing the tons of applications we received for working at on the project. If you are a PhD student doing research on speech and you are interested in this opportunity, these are the last days to send us your CV!

  16. Sep 30, 2019

    I'm very happy to co-organize a special session at @ICASSP2020 on "End-to-End Approaches for Spoken Language Understanding". Please, share with interested people! For more info check out our website:

  17. Retweeted
    Sep 27, 2019

    Excited to announce a special session at ICASSP 2020 on End-to-End Spoken Language Understanding. All researchers in the community are invited to submit their research papers:

  18. Retweeted

    NVIDIA is partnering on the project with for accelerated development of and applications, providing flexibility through the Neural Modules toolkit. Learn how you can join and contribute:

  19. Retweeted

    Presenting at - new Jasper ASR model and Neural Modules toolkit to accelerate development of and speech applications. Try today:

  20. Sep 10, 2019

    A great project needs great sponsors. I would like to thank our sponsors and collaborators! We are still actively looking for new sponsorships in order to achieve our goals faster and better. Feel free to contact me!

