Tweets

  1. Retweeted
    Jan 31

    On Tuesday, in my class, we learned that all a neural net does is stretch and contract the fabric of space. For example, this 3-layer net (one hidden layer of 100 positive neurons) makes its 5D logits (2D projections) linearly separable by the classifier hyperplanes (lines).
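
    A minimal sketch of the architecture the tweet describes, assuming a PyTorch toy model (the 3 layers, 100 positive neurons, and 5D logits come from the tweet; the 2D input and the random data are illustrative):

        import torch
        import torch.nn as nn

        # 3-layer net: 2D input -> 100 ReLU ("positive") hidden units -> 5 logits.
        # The hidden layer stretches/contracts the input space so that the final
        # linear layer's 5 hyperplanes can separate the classes.
        net = nn.Sequential(
            nn.Linear(2, 100),
            nn.ReLU(),          # keeps hidden activations positive
            nn.Linear(100, 5),  # 5 classifier hyperplanes -> 5D logits
        )

        x = torch.randn(8, 2)   # a batch of 2D points (illustrative data)
        logits = net(x)         # shape (8, 5); after training these become
        print(logits.shape)     # linearly separable by the output hyperplanes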

  2. Retweeted
    Feb 1

    I may have found a new reason to like TabNet, one that's similar to fastai's principles.

  3. Retweeted
    Jan 31

    New Python package for causal discovery, estimation, and more. Makes me wonder: how long will it take for Python to become the de facto programming language for causal data science? I think we're getting there 😉

  4. Retweeted
    Jan 30

    Our FlauBERT is now natively supported by Hugging Face's transformers library. Many thanks to the Hugging Face team for the active technical support! Paper (new version will be available soon): Code:
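
    A hedged sketch of what "natively supported" looks like in practice, assuming the flaubert/flaubert_base_cased checkpoint name on the Hugging Face hub (the tweet's links were stripped):

        import torch
        from transformers import FlaubertModel, FlaubertTokenizer

        # FlauBERT loads through the standard transformers API.
        tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
        model = FlaubertModel.from_pretrained("flaubert/flaubert_base_cased")

        inputs = tokenizer("Le chat dort sur le canapé.", return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)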

  5. Retweeted
    Jan 30

    40x faster predictions for even the deepest random forests with FIL's new sparse forest support.
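
    For context, FIL is the Forest Inference Library in RAPIDS cuML; below is a hedged sketch of GPU inference on a saved XGBoost forest (the file path is a placeholder and the exact load flags are assumptions):

        import numpy as np
        from cuml import ForestInference

        # Load a pre-trained, serialized forest into FIL; sparse forest
        # storage is what lets very deep trees fit in memory and run fast.
        fil_model = ForestInference.load(
            "xgboost_model.bst",   # placeholder path to a saved model
            model_type="xgboost",
            output_class=True,
        )
        X = np.random.rand(1000, 20).astype(np.float32)  # illustrative features
        preds = fil_model.predict(X)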

  6. Retweeted
    Jan 30

    One of the best decisions we ever made in Applied Deep Learning Research was to standardize on for all our research. It has made us more productive and made our work more fun. Glad to see agrees!

  7. Retweeted
    Jan 28

    New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: Blog:
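
    Since takeaway 1 hinges on perplexity, a small illustrative computation of the metric (the exponential of the average per-token negative log-likelihood); the token probabilities below are made up:

        import math

        # Perplexity = exp(mean NLL per token). Lower perplexity means the
        # model finds real conversations less "surprising"; the paper reports
        # it tracks human judgments of chatbot quality.
        token_probs = [0.25, 0.10, 0.60, 0.05]  # model's probability of each true token
        nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
        print(math.exp(nll))  # perplexity over these four tokens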

  8. Retweeted
    Dec 19, 2019

    News of a new causal inference book: "Causal Reasoning: Fundamentals and Machine Learning Applications". I have a feeling it is written with machine learning in mind, so I'm very much looking forward to it. It is still being written, and it seems only Chapter 1 has been released so far.

  9. Retweeted
    Jan 27

    I've written a new blog post on choosing the number of features in convolutional neural networks. It's the start of a series aimed at challenging common DL assumptions, so I hope it's both sufficiently insightful and still provocative:

  10. Retweeted
    Jan 26

    Found this cool library that can reverse-engineer spaCy Matcher patterns from data. Will report back after trying it on a few use cases.
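
    The library's name was stripped from the tweet, but for context, these are the token-level patterns the spaCy Matcher consumes (the pattern below is an illustrative example, not from the library):

        import spacy
        from spacy.matcher import Matcher

        nlp = spacy.blank("en")
        matcher = Matcher(nlp.vocab)

        # A Matcher pattern is a list of per-token attribute dicts; this one
        # matches "deep learning" or "machine learning".
        pattern = [{"LOWER": {"IN": ["deep", "machine"]}}, {"LOWER": "learning"}]
        matcher.add("ML_TERM", [pattern])

        doc = nlp("I study deep learning and machine learning.")
        for match_id, start, end in matcher(doc):
            print(doc[start:end].text)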

  11. Retweeted
    Jan 25

    When I learn a new idea, I must repeatedly learn it from multiple sources before it *clicks*. If you're learning nets, add this to your list. It discusses them for: images, audio, and databases.

  12. Retweeted
    Jan 25

    After a dismal performance in the RSNA Intracranial Haemorrhage competition, I pulled apart the 2nd-place solution to see what they had done so right. Post 3 of 4 is now up, with Jupyter notebooks :D Blog series: Notebooks:

  13. Retweeted
    Dec 9, 2019

    Are you a computational biologist trying to embed some proteins? Jealous of NLP researchers for Hugging Face's easy-to-use repository of models? Come check out our re-release of our TAPE code, complete with a huggingface-style API for loading models!
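
    A hedged sketch of that huggingface-style API, following the usage shown in the TAPE repository's README (the model name and vocab are assumptions if the API has since changed):

        import torch
        from tape import ProteinBertModel, TAPETokenizer

        # Load a pre-trained protein language model with a familiar
        # from_pretrained() call, then embed one sequence.
        model = ProteinBertModel.from_pretrained("bert-base")
        tokenizer = TAPETokenizer(vocab="iupac")  # IUPAC amino-acid vocabulary

        sequence = "GCTVEDRCLIGMGAILLNGCVIGSGSLVAAGALITQ"
        token_ids = torch.tensor([tokenizer.encode(sequence)])
        sequence_output, pooled_output = model(token_ids)
        print(sequence_output.shape)  # (1, seq_len, hidden_size)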

  14. Retweeted
    Jan 25

    [1/7] Super excited to present our recent work -- mBART. We demonstrate that multilingual denoising pre-training produces significant gains across a variety of machine translation tasks! Joint work with

  15. Retweeted
    Jan 24

    Accepted to ICASSP 2020! Check out our paper if you're building voice interfaces and want to avoid having to collect lots of data.

  16. Retweeted
    Jan 23

    Backpropagation and labeled data are the bread and butter of deep learning. But recent research from the University of Amsterdam suggests neither is necessary to train effective neural networks to represent complex data:

  17. Retweeted
    Jan 22

    Code is up: And being my usual distracted self, I forgot one co-author from the list: (Sorry Alex!) The code for ImageNet will come later.

  18. Retweeted
    Jan 22

    FixMatch: focusing on simplicity for semi-supervised learning and improving the state of the art (CIFAR-10: 94.9% with 250 labels, 88.6% with 40). Collaboration with Kihyuk Sohn, Nicholas Carlini
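
    A hedged sketch of FixMatch's core unlabeled-data step as the paper describes it (the augmentation functions are passed in as placeholders; 0.95 is the paper's default confidence threshold):

        import torch
        import torch.nn.functional as F

        def fixmatch_unlabeled_loss(model, x_u, weak_aug, strong_aug, tau=0.95):
            """Pseudo-label weakly augmented images, then train the model to
            predict those labels on strongly augmented versions, keeping only
            confident pseudo-labels."""
            with torch.no_grad():
                probs = F.softmax(model(weak_aug(x_u)), dim=-1)
                conf, pseudo_labels = probs.max(dim=-1)
            mask = (conf >= tau).float()  # drop low-confidence pseudo-labels
            logits_strong = model(strong_aug(x_u))
            per_example = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
            return (per_example * mask).mean()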

  19. Retweeted
    Jan 21

    OPUS-MT: over 1,000 pre-trained translation models and a dockerized translation server based on Marian-NMT.
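
    The tweet's links were stripped; as one hedged illustration, the OPUS-MT models are published under Helsinki-NLP on the Hugging Face hub and load as Marian models (the en-de checkpoint name below is an assumption):

        from transformers import MarianMTModel, MarianTokenizer

        name = "Helsinki-NLP/opus-mt-en-de"  # one of the 1,000+ language pairs
        tokenizer = MarianTokenizer.from_pretrained(name)
        model = MarianMTModel.from_pretrained(name)

        batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
        generated = model.generate(**batch)
        print(tokenizer.batch_decode(generated, skip_special_tokens=True))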

