Glad I'm not the only one! @NvidiaAI's cuDNN really made RNNs feasible in many situations. It's also a competitive advantage for them over other accelerators - as seen in the thread, it's very hard to match its performance. Sadly the blackbox nature bottlenecks innovation.
A bit of good news though -
@yaringal-style dropout / DropConnect is one of the few things still possible with a blackbox LSTM implementation! You can apply dropout to the RNN's recurrent weights themselves and then run a batch with the blackbox LSTM =] See https://arxiv.org/abs/1708.02182
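The trick above can be sketched in a few lines: sample one DropConnect mask over the recurrent weight matrix, then run the whole sequence with those masked weights, so every timestep sees the same dropped connections. This is a minimal NumPy sketch with a plain tanh RNN standing in for the blackbox cuDNN LSTM; the function names and shapes here are illustrative, not the paper's actual (PyTorch-based) code.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden, inp, T, batch = 4, 3, 5, 2
W_ih = rng.standard_normal((hidden, inp)) * 0.1   # input-to-hidden weights
W_hh = rng.standard_normal((hidden, hidden)) * 0.1  # recurrent weights

def dropconnect(w, p, rng):
    # Sample ONE mask per forward pass, applied to the weights themselves,
    # so the same connections stay dropped across all timesteps.
    mask = rng.random(w.shape) >= p
    return w * mask / (1.0 - p)  # inverted-dropout scaling

def rnn_forward(x, W_ih, W_hh):
    # x: (T, batch, inp); a blackbox implementation would replace this loop.
    h = np.zeros((x.shape[1], W_hh.shape[0]))
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_ih.T + h @ W_hh.T)
    return h

x = rng.standard_normal((T, batch, inp))
# Mask the weights ONCE, outside the recurrence, then hand the masked
# weights to the (here: stand-in) blackbox RNN for the whole batch.
W_hh_dropped = dropconnect(W_hh, p=0.5, rng=rng)
h = rnn_forward(x, W_ih, W_hh_dropped)
```

The point is that nothing inside the recurrence needs to change: the regularization happens entirely on the weight tensor before the blackbox kernel is invoked.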
We’ve also had a difficult time with cuDNN as soon as you want to go beyond a basic LSTM and grab cell states, do