David Macedo

@david_macedo

Deep Learning Professor and Researcher.

Recife - PE
Joined: June 2009

Tweets


  1. Retweeted

    We’ve introduced the first large-scale data set and benchmark to help make conversation models more empathetic. Watch the Research in Brief video to learn more. Read the full paper:

  2. Retweeted
    20 hours ago

    I wrote "How to solve 90% of NLP problems: a step-by-step guide" after seeing dozens of applied NLP projects at . It has been read by over three hundred thousand people! It presents a cookie cutter NLP approach, along with reference code

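    A baseline in the spirit of the guide's recipe (bag-of-words / TF-IDF features plus a simple linear classifier); the toy data and labels below are illustrative, not from the guide:

    ```python
    # TF-IDF features + logistic regression: the cookie-cutter text-classification baseline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["free money now", "meeting at noon", "win a prize", "lunch tomorrow?"]
    labels = [1, 0, 1, 0]  # hypothetical task: 1 = spam, 0 = not spam

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["claim your free prize"]))
    ```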
  3. Retweeted
    19 hours ago
  4. Retweeted
    Feb 3
    Replying to
  5. Retweeted
    Feb 3

    Given that data loading can be a major bottleneck in many DL projects, this sounds like an interesting project to check out: "Accelerating Pytorch with Nvidia DALI" --> "on small models it's ~4X faster than the Pytorch dataloader"

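    A minimal sketch of a DALI image pipeline feeding PyTorch (DALI's API shifts between versions; the path, sizes, and normalization constants are placeholders):

    ```python
    # Hypothetical DALI pipeline: GPU JPEG decode + preprocessing, iterated from PyTorch.
    from nvidia.dali import pipeline_def, fn, types
    from nvidia.dali.plugin.pytorch import DALIGenericIterator

    @pipeline_def(batch_size=64, num_threads=4, device_id=0)
    def jpeg_pipeline(file_root):
        jpegs, labels = fn.readers.file(file_root=file_root, random_shuffle=True, name="Reader")
        images = fn.decoders.image(jpegs, device="mixed")  # decode JPEGs on the GPU
        images = fn.resize(images, resize_x=224, resize_y=224)
        images = fn.crop_mirror_normalize(images, dtype=types.FLOAT, output_layout="CHW",
                                          mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                                          std=[0.229 * 255, 0.224 * 255, 0.225 * 255])
        return images, labels

    pipe = jpeg_pipeline(file_root="/path/to/jpegs")  # placeholder path
    pipe.build()
    loader = DALIGenericIterator(pipe, ["data", "label"], reader_name="Reader")
    for batch in loader:
        images, labels = batch[0]["data"], batch[0]["label"]
    ```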
  6. Retweeted
    Feb 3
    Replying to

    IIRC Fukushima's Neocognitron also used the max(0, x) function in its design (to mimic firing frequencies in biological neurons, which must be positive).

  7. Retweeted
    Feb 3
  8. Retweeted
    Feb 2

    The 1986 classic 'Parallel Distributed Processing' uses the term 'threshold function' instead of 'rectified linear unit'. I prefer the 1986 version :)

  9. Retweeted
    Feb 3
    Replying to

    This intro video about it from 1986 is just so retro and cool:

  10. Retweeted
    Feb 3

    This repo is full of amazing awesomeness. I don't know of anything else like it: independent, refactored, carefully tested implementations of modern CNNs.

  11. Retweeted
    Feb 3

    Added ImageNet validation results for 164 pretrained models on several datasets, including ImageNet-A, ImageNetV2, and ImageNet-Sketch. No surprise, models with exposure to more data do quite well. Without extra data, EfficientNets are holding their own.

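    The repo itself is not named in this scrape; as a generic stand-in, loading a pretrained EfficientNet for ImageNet-style evaluation looks like this with torchvision (the weights argument needs torchvision >= 0.13):

    ```python
    # Generic pretrained-CNN loading sketch; torchvision stands in for the unnamed repo.
    import torch
    from torchvision import models

    model = models.efficientnet_b0(weights="IMAGENET1K_V1")  # ImageNet-pretrained weights
    model.eval()
    x = torch.randn(1, 3, 224, 224)  # dummy ImageNet-sized input
    with torch.no_grad():
        logits = model(x)
    print(logits.argmax(dim=1))  # predicted class index
    ```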
  12. Retweeted
    Feb 2

    Our forthcoming book dives deep into different tabular modeling approaches, with many experiments. But I'll save you from reading the whole thing, and just show you the conclusion.

  13. Retweeted
    Feb 1

    A welcome surprise when our tabular models wind up beating baselines (lower is better): GBT: 1.44, TabNet: 0.14, fastai: 0.034.

  14. Retweeted
    Feb 1

    Wow, what a paper! Super-convergence is a super interesting phenomenon. Thanks for the recommendation. [1708.07120] Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates

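    PyTorch's built-in OneCycleLR implements the "1cycle" schedule from this line of work; a minimal sketch with a placeholder model and random data:

    ```python
    # One-cycle learning-rate schedule: ramp up to a large max_lr, then anneal back down.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Linear(10, 2)  # placeholder model
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    steps = 100
    sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1.0, total_steps=steps)

    for _ in range(steps):
        x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()  # advance the LR schedule once per optimizer step
    ```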
  15. Retweeted
    Jan 30

    We're standardizing OpenAI's deep learning framework on PyTorch to increase our research productivity at scale on GPUs (and have just released a PyTorch version of Spinning Up in Deep RL):

  16. Retweeted
    Jan 30

    One of the best decisions we ever made in Applied Deep Learning Research was to standardize on PyTorch for all our research. It has made us more productive and made our work more fun. Glad to see OpenAI agrees!

  17. Retweeted
    Jan 30

    Humans learn from a curriculum from birth. We can learn complicated math problems because we have accumulated enough prior knowledge. This could be true for training an ML/RL model as well. Let's see how a curriculum can help an RL agent learn:

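    A self-contained toy illustration of the idea (not from the linked work): tabular Q-learning on a 1-D corridor, where the goal starts near the agent and moves farther at each curriculum stage so skills learned early transfer to harder settings:

    ```python
    # Curriculum for a toy RL task: reach a goal on a line; goal distance grows per stage.
    import random

    N = 20  # corridor states 0..N-1; the agent starts at 0; actions: 0 = left, 1 = right

    def run_episode(q, goal, eps=0.1, alpha=0.5, gamma=0.99, max_steps=100):
        s = 0
        for _ in range(max_steps):
            a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda a: q[s][a])
            s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else -0.01  # small step penalty, reward at the goal
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == goal:
                return True
        return False

    q = [[0.0, 0.0] for _ in range(N)]
    for goal in (3, 8, 14, N - 1):  # curriculum: progressively farther goals
        wins = sum(run_episode(q, goal) for _ in range(300))
        print(f"goal={goal:2d} success={wins}/300")
    ```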
  18. Retweeted
    Jan 30

    Pandas 1.0 is here!
    * Read the release notes:
    * Read the blog post reflecting on what 1.0 means to our project:
    * Install with conda / PyPI:
    Thanks to our 300+ contributors to this release.

  19. Retweeted
    Jan 31

    Transformers 2.4.0 is out 🤗
    - Training transformers from scratch is now supported
    - New models, including *FlauBERT*, Dutch BERT, and *UmBERTo*
    - Revamped documentation
    - First multi-modal model, MMBT, for text & images
    Bye bye, Python 2 🙃

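    "From scratch" here means initializing a model from a config instead of pretrained weights; a minimal sketch (the config values below are illustrative):

    ```python
    # Randomly initialized BERT for masked-LM pretraining (no pretrained weights loaded).
    from transformers import BertConfig, BertForMaskedLM

    config = BertConfig(vocab_size=30522, hidden_size=256,
                        num_hidden_layers=4, num_attention_heads=4)
    model = BertForMaskedLM(config)  # fresh weights, ready to train from scratch
    print(sum(p.numel() for p in model.parameters()), "parameters")
    ```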
  20. Retweeted
    Jan 31

    Today we announce a novel, open-source method for text generation tasks (e.g., summarization or sentence fusion) that uses edit operations instead of generating text from scratch, leading to fewer errors and faster model execution. Read about it below.

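    A toy illustration of the edit-operation idea (in the spirit of the announcement, not the actual method): each source token gets a tag that keeps or drops it and may insert a short phrase, so tasks like sentence fusion become tagging instead of free-form generation:

    ```python
    # Toy edit-based "generation": apply keep/delete/insert tags to source tokens.
    def apply_edits(tokens, tags):
        """tags[i] = (keep, insert_before): keep the i-th token, optionally insert a phrase first."""
        out = []
        for token, (keep, insert_before) in zip(tokens, tags):
            if insert_before:
                out.append(insert_before)
            if keep:
                out.append(token)
        return " ".join(out)

    src = "Turing was born in 1912 . Turing died in 1954 .".split()
    tags = [(True, None)] * 5 + [(False, None), (False, "and")] + [(True, None)] * 4
    print(apply_edits(src, tags))  # -> "Turing was born in 1912 and died in 1954 ."
    ```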
