Sebastian Ruder

@seb_ruder

Research scientist • Natural language processing • Transfer learning • Making ML & NLP accessible

Joined September 2014

Tweets


  1. Pinned Tweet
    Jan 6

    10 ML & NLP Research Highlights of 2019: a new blog post on ten ML and NLP research directions that I found exciting and impactful in 2019.

  2. Feb 1

    Curriculum for Reinforcement Learning: "Learning is probably the best superpower we humans have." The post explores four types of curricula that have been used to help RL models learn to solve complicated tasks.

  3. Retweeted
    Jan 31

    Given the paucity of annotated data, how can we perform sample-efficient generalization on unseen task-language combinations? Possible solution: a generative model of the neural parameter space, factorized into variables for several languages and tasks. 1/2 (A toy sketch of this factorization appears after the timeline.)

  4. Retweeted
    Jan 30

    Humans have learned from curricula since birth. We can learn to solve complicated math problems because we have accumulated enough prior knowledge. The same could be true for training an ML/RL model. Let's see how a curriculum can help an RL agent learn:

  5. Retweeted
    Jan 30

    We are happy to welcome back to Heidelberg! He will give a talk on 6th Feb about "Cross-lingual transfer learning"

  6. Retweeted
    Jan 30

    This year I am keen on featuring ML and NLP tools and projects in the NLP Newsletter. They help to inspire other developers and also promote some of the interesting ideas coming from NLP and ML. Reach out if you are working on something interesting that you would like featured.

  7. Retweeted
    Jan 30

    When you apply a prototype Transmogrifier to language modelling, you get the Mogrifier LSTM and a couple of state-of-the-art results. Joint work with and Phil Blunsom. Code at . (A minimal sketch of the mogrifier gating appears after the timeline.)

  8. Retweeted
    Jan 29

    Machine Learning Summer School 2020 is in Tuebingen, Germany! Please apply. Deadline: 11 Feb 2020.

  9. Retweeted
    Jan 29

    At we run a "Data and Knowledge Engineering" seminar every Monday. Check out the amazing list of speakers for the Spring semester, with a great mix of internal and external speakers from both industry and academia!

  10. Retweeted
    Jan 27
    Replying to

    We have both a Dutch ULMFiT model and a Dutch BERT model (BERT-NL), both available on . The paper on Dutch ULMFiT (focusing on small training set sizes):

  11. Retweeted
    Jan 27

    We are excited to host on February 6 at ! He will talk about "Cross-lingual Transfer Learning"; more info at:

  12. Jan 27

    Transfer learning is increasingly going multilingual with language-specific BERT models (a loading example appears after the timeline):
    - 🇩🇪 German BERT
    - 🇫🇷 CamemBERT, FlauBERT
    - 🇮🇹 AlBERTo
    - 🇳🇱 RobBERT

  13. Jan 27

    New NLP News: NLP Progress, Retrospectives and look ahead, New NLP courses, Independent research initiatives, Interviews, Lots of resources (via )

  14. Jan 26

    If you want to learn about privacy-preserving machine learning, then there is no better resource than this step-by-step notebook tutorial by . From the basics of private deep learning to building secure ML classifiers using PyTorch & PySyft.

  15. Jan 25

    Emil’s Story as a Self-Taught AI Researcher: an interview with , with useful tips on structuring a curriculum, creating a portfolio, getting involved in research, and finding a job. 💯

  16. Jan 25

    Is MT really lexically less diverse than human translation? TL;DR: the paper analyses WMT19 system outputs and finds no difference in lexical diversity (LD) between MT and human translations, and no correlation between LD and MT quality. (A toy LD computation appears after the timeline.)

  17. Jan 25

    I also really like the focus on learning, organised around a collection of top resources for each topic:

  18. Jan 25

    For instance, you can follow what I'm currently reading here:

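A toy sketch of the parameter-space factorization idea from the Jan 31 retweet (item 3). The tweet describes a generative model of the neural parameter space; purely as an illustration, the sketch below swaps in a deterministic hypernetwork, and every class name, size, and architectural choice is an assumption rather than the paper's actual model.

```python
import torch
import torch.nn as nn

class FactorizedClassifierGenerator(nn.Module):
    """Hypothetical stand-in: task and language embeddings jointly generate
    the weights of a classification head, so an unseen task-language pair
    can reuse what was learned separately for each factor."""

    def __init__(self, n_tasks, n_langs, emb_dim=16, feat_dim=128, n_classes=3):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, emb_dim)
        self.lang_emb = nn.Embedding(n_langs, emb_dim)
        self.feat_dim, self.n_classes = feat_dim, n_classes
        self.hyper = nn.Sequential(          # (task, language) -> head params
            nn.Linear(2 * emb_dim, 64),
            nn.ReLU(),
            nn.Linear(64, feat_dim * n_classes + n_classes),
        )

    def forward(self, features, task_id, lang_id):
        z = torch.cat([self.task_emb(task_id), self.lang_emb(lang_id)], dim=-1)
        params = self.hyper(z)
        w = params[: self.feat_dim * self.n_classes].view(self.n_classes, self.feat_dim)
        b = params[self.feat_dim * self.n_classes :]
        return features @ w.t() + b          # logits for this task-language pair

gen = FactorizedClassifierGenerator(n_tasks=4, n_langs=10)
feats = torch.randn(8, 128)                  # e.g. encodings from a shared encoder
logits = gen(feats, torch.tensor(1), torch.tensor(7))
print(logits.shape)                          # torch.Size([8, 3])
```
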
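A minimal sketch of the mogrifier gating behind the Mogrifier LSTM from the Jan 30 retweet (item 7): before the usual LSTM update, the input and the previous hidden state take turns rescaling each other through sigmoid gates. The tied projections and hyperparameters below are simplifications; the code linked in the tweet is the authoritative implementation.

```python
import torch
import torch.nn as nn

class MogrifierLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, rounds=5):
        super().__init__()
        self.rounds = rounds
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        # The paper uses a separate (often low-rank) projection per round;
        # tying a single Q and R across rounds keeps the sketch short.
        self.q = nn.Linear(hidden_size, input_size, bias=False)  # h gates x
        self.r = nn.Linear(input_size, hidden_size, bias=False)  # x gates h

    def forward(self, x, state):
        h, c = state
        # Odd rounds rescale the input; even rounds rescale the hidden state.
        for i in range(1, self.rounds + 1):
            if i % 2 == 1:
                x = 2 * torch.sigmoid(self.q(h)) * x
            else:
                h = 2 * torch.sigmoid(self.r(x)) * h
        return self.lstm(x, (h, c))           # standard LSTM update afterwards

cell = MogrifierLSTMCell(input_size=64, hidden_size=128)
x = torch.randn(8, 64)                        # a batch of 8 token embeddings
h, c = torch.zeros(8, 128), torch.zeros(8, 128)
h, c = cell(x, (h, c))
print(h.shape, c.shape)                       # torch.Size([8, 128]) twice
```
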
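Language-specific models like those in item 12 are typically published to the Hugging Face model hub and can be loaded with the transformers library. The checkpoint name below is an assumption; any of the tweeted models with a published checkpoint loads the same way.

```python
# Requires: pip install torch transformers
from transformers import AutoModel, AutoTokenizer

name = "camembert-base"                      # assumed hub identifier
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Le camembert est délicieux.", return_tensors="pt")
last_hidden = model(**inputs)[0]             # (batch, seq_len, hidden_size)
print(last_hidden.shape)
```
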
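For item 16, one simple lexical diversity measure is the type-token ratio; the analysis in the tweet may use different or length-corrected metrics, so this only makes the LD concept concrete.

```python
def type_token_ratio(tokens):
    """Distinct word types divided by total tokens; higher = more varied."""
    return len(set(tokens)) / len(tokens)

mt_output = "the cat sat on the mat and the dog sat on the mat".split()
human_ref = "the cat perched on a mat while the dog sprawled nearby".split()
print(f"MT TTR:    {type_token_ratio(mt_output):.2f}")  # 0.54: more repetition
print(f"Human TTR: {type_token_ratio(human_ref):.2f}")  # 0.91: more varied
```
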
