Pham Hoang Nhat retweeted
In January, @anishathalye, @jjgort, and I ran a short class at @MIT_CSAIL on topics we think are missing in most CS programs — tools we use every day that everyone should know, like bash, git, vim, and tmux. And now the lecture notes and videos are online! https://missing.csail.mit.edu/
Pham Hoang Nhat retweeted
This repo is full of amazing awesomeness. I don't know of anything else like it. Independent refactored carefully tested implementations of modern CNNs https://twitter.com/wightmanr/status/1224178577593241602
Pham Hoang Nhat retweeted
Note that this is *not* just about time series and trends. It's about the much more subtle issue of "domain shift". How do you know if you have domain shift? Here's a great method, from our forthcoming book (https://www.amazon.com/Deep-Learning-Coders-fastai-PyTorch/dp/1492045527): https://twitter.com/jeremyphoward/status/1223243434426650624
Pham Hoang Nhat retweeted
New blog post: Contrastive Self-Supervised Learning. Contrastive methods learn representations by encoding what makes two things similar or different. I find them very promising and go over some recent works such as DIM, CPC, AMDIM, CMC, MoCo, etc. https://ankeshanand.com/blog/2020/01/26/contrative-self-supervised-learning.html
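The contrastive idea described here can be sketched as an InfoNCE-style loss, the objective family that CPC, MoCo, and related methods build on. This is a minimal illustrative sketch, not code from the linked post; all names, shapes, and the temperature value are hypothetical, and NumPy stands in for an autodiff framework.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: each anchor should be most
    similar to its own positive among all candidates in the batch."""
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct pairing is the diagonal: anchor i matches positive i
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Views of the same datum (small perturbation) vs. unrelated pairings
aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
random_pairs = info_nce_loss(z, rng.normal(size=(8, 16)))
```

Well-aligned pairs give a much lower loss than random pairings, which is exactly the pressure that makes the encoder pull similar things together.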
Pham Hoang Nhat retweeted
Five out of the six top submissions in the M4 #forecasting competition used, in one way or another, the winner of the M3 competition: the Theta method (or one of its extensions, such as OTM, DOTM or Hybrid Theta). https://www.sciencedirect.com/journal/international-journal-of-forecasting/vol/36/issue/1 @spyrosmakrid @NikolopoulosGus @fsu_ntua
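The Theta method mentioned above can be sketched in its classical Theta(0, 2) form: average a linear-trend extrapolation (the theta=0 line) with a simple-exponential-smoothing forecast of the theta=2 line. This is a simplified sketch under those assumptions; the smoothing constant is fixed rather than optimised, and refinements used by the extensions are omitted.

```python
import numpy as np

def theta_forecast(x, horizon, alpha=0.5):
    """Simplified classical Theta(0, 2) forecast."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)      # theta=0 line: linear trend
    theta2 = 2.0 * x - (intercept + slope * t)  # theta=2 line: doubled curvature
    level = theta2[0]
    for v in theta2[1:]:                        # SES recursion on the theta=2 line
        level = alpha * v + (1 - alpha) * level
    future_t = len(x) + np.arange(horizon)
    trend_part = intercept + slope * future_t
    # Equal-weight combination of the two theta lines' forecasts
    return 0.5 * (trend_part + level)

series = np.array([10., 12., 13., 15., 16., 18., 19., 21.])
fc = theta_forecast(series, horizon=3)
```

The SES component contributes a flat level, so the trend line alone drives the slope of the combined forecast.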
Pham Hoang Nhat retweeted
“Meet AdaMod: a new deep learning optimizer with memory” by Less Wright https://link.medium.com/CfAtYwooa3
Pham Hoang Nhat retweeted
Why do deep ensembles trained with just random initialization work surprisingly well in practice? In our recent paper https://arxiv.org/abs/1912.02757 with
@stanislavfort & Huiyi Hu, we investigate this by using insights from recent work on the loss landscape of neural nets. More below:
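The ensembling step itself is simple enough to sketch. Here random logits stand in for the outputs of independently initialized networks (no training is simulated); the entropy comparison illustrates why member disagreement shows up as extra predictive uncertainty in the ensemble.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(1)
# Logits from 5 hypothetical ensemble members: (members, examples, classes)
member_logits = rng.normal(size=(5, 4, 3))
member_probs = softmax(member_logits)
ensemble_probs = member_probs.mean(axis=0)   # the deep-ensemble prediction
```

Because entropy is concave, the ensemble's predictive entropy is at least the average member entropy (Jensen's inequality): wherever members disagree, the averaged prediction is more uncertain than any individual one suggests.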
Pham Hoang Nhat retweeted
This video explains AdvProp from
@GoogleAI! This technique leverages Adversarial Examples for ImageNet classification by using separate Batch Normalization layers for clean and adversarial mini-batches. https://youtu.be/KTCztkNJm50 #100DaysOfMLCode
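The dual-BatchNorm routing can be sketched as below. This is an illustrative NumPy stand-in, not the AdvProp implementation: it omits learnable scale/shift parameters and uses a simplified running-statistics update, but it shows the core idea of keeping separate statistics for the two input distributions.

```python
import numpy as np

class DualBatchNorm:
    """AdvProp-style auxiliary BatchNorm (sketch): keep separate
    normalization statistics for clean and adversarial mini-batches,
    since the two come from different distributions."""
    def __init__(self, features, momentum=0.9):
        self.momentum = momentum
        self.stats = {  # one running (mean, var) pair per input type
            "clean": [np.zeros(features), np.ones(features)],
            "adv":   [np.zeros(features), np.ones(features)],
        }

    def __call__(self, x, kind):
        mean, var = x.mean(axis=0), x.var(axis=0)
        run = self.stats[kind]   # route to the matching statistics
        run[0] = self.momentum * run[0] + (1 - self.momentum) * mean
        run[1] = self.momentum * run[1] + (1 - self.momentum) * var
        return (x - mean) / np.sqrt(var + 1e-5)

bn = DualBatchNorm(4)
clean = np.random.default_rng(0).normal(0.0, 1.0, size=(32, 4))
adv = clean + 0.5   # stand-in for adversarially perturbed inputs
out_clean = bn(clean, "clean")
out_adv = bn(adv, "adv")
```

After one batch of each kind, the two running-mean estimates have already diverged, which is the whole point: neither distribution's statistics contaminate the other's normalization.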
Pham Hoang Nhat retweeted
This got me thinking. It is hard to achieve 1% growth every day. A more believable model is that "today = yesterday * (1+X)" where X is a random variable. The Japanese poster shows the special cases X=0.01 (and X=-0.01) every day. What happens when X is random? https://twitter.com/krishashok/status/1199203257794433025
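That question is easy to answer with a quick simulation. The specific distribution below, X ~ Uniform(-0.04, 0.06) with the same 1% mean, is an illustrative assumption, not from the tweet:

```python
import numpy as np

rng = np.random.default_rng(42)
days, paths = 365, 10_000

# Deterministic poster version: X = 0.01 every day
deterministic = 1.01 ** days

# Random version with the same mean: X ~ Uniform(-0.04, 0.06), E[X] = 0.01
x = rng.uniform(-0.04, 0.06, size=(paths, days))
random_growth = np.prod(1 + x, axis=1)

mean_outcome = random_growth.mean()
median_outcome = np.median(random_growth)
```

Since the daily factors are independent, the *average* outcome still matches the deterministic one (E[prod(1+X)] = (1+E[X])^n), but the *typical* (median) outcome is lower, because E[log(1+X)] < log(1+E[X]) by Jensen's inequality: randomness with the same mean drags the usual path down.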
Pham Hoang Nhat retweeted
We introduce LOGAN, a game-theory motivated algorithm, which improves the state-of-the-art in GAN image generation by over 30% measured in FID: https://arxiv.org/abs/1912.00953 Here are samples showing higher diversity:
Pham Hoang Nhat retweeted
"A Simple yet Effective Way for Improving the Performance of GANs" https://arxiv.org/pdf/1911.10979.pdf …
Pham Hoang Nhat retweeted
Thank you! It should be really useful: according to this paper https://arxiv.org/abs/1905.05583, unsupervised fine-tuning, layer-wise learning rates, and one-cycle scheduling are crucial for BERT performance. They manage to beat ULMFiT on IMDB with BERT-Base!
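The layer-wise learning rate idea can be sketched as a per-layer schedule: lower encoder layers get geometrically smaller rates so that generic early-layer features move less during fine-tuning. The function name, layer naming, base rate, and decay factor below are illustrative choices, not values from the paper.

```python
def layerwise_lrs(num_layers, base_lr=2e-5, decay=0.95):
    """Return a learning rate per layer, highest for the top layer.

    Sketch of discriminative (layer-wise) learning rates as used when
    fine-tuning BERT-style encoders; layer names are illustrative.
    """
    return {
        f"layer_{i}": base_lr * decay ** (num_layers - 1 - i)
        for i in range(num_layers)
    }

lrs = layerwise_lrs(12)
# The top layer trains at base_lr; layer_0 at base_lr * 0.95**11
```

In practice this dict would feed an optimizer's per-parameter-group learning rates, one group per encoder layer.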
Pham Hoang Nhat retweeted
AdvProp: One weird trick to use adversarial examples to reduce overfitting. Key idea is to use two BatchNorms, one for normal examples and another one for adversarial examples. Significant gains on ImageNet and other test sets. https://twitter.com/tanmingxing/status/1199046124348116993
Pham Hoang Nhat retweeted
This feels like a real breakthrough: https://arxiv.org/abs/1911.08265 Take the same basic algorithm as AlphaZero, but now *learning* its own simulator. Beautiful, elegant approach to model-based RL. ... AND ALSO STATE OF THE ART RESULTS! Well done to the team at
@DeepMindAI #MuZero
Pham Hoang Nhat retweeted

#cusignal - #GPU accelerated #signalprocessing with #Scipy #Signal:
Blog: https://medium.com/rapids-ai/gpu-accelerated-signal-processing-with-cusignal-689062a6af8
Code: https://github.com/rapidsai/cusignal
Notebooks/Examples: https://github.com/rapidsai/cusignal/tree/master/notebooks
Slides: https://drive.google.com/open?id=1rDNJVIHvCpFfNEDB9Gau5MzCN8G77lkH
Pham Hoang Nhat retweeted
Helping your neural network generalize requires preventing overfitting with these important methods. https://buff.ly/2r5V7ws
Pham Hoang Nhat retweeted
Frequent users of gradient penalty (WGAN-GP, StyleGAN, etc.), make sure to try out the new L-infinity hinge gradient penalty from https://arxiv.org/abs/1910.06922 for better results. See https://github.com/AlexiaJM/MaximumMarginGANs for how to quickly and easily implement it in #PyTorch. https://twitter.com/jm_alexia/status/1184429680746815488
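A toy sketch of what an L-infinity hinge gradient penalty computes: penalize the critic only where the L-infinity norm of its input gradient exceeds 1. The squared-hinge form and the analytic critic D(x) = tanh(w . x) are assumptions made so the gradient is available without autodiff; consult the paper and repo for the actual formulation.

```python
import numpy as np

def linf_hinge_gradient_penalty(w, x):
    """Hinge penalty on the L-infinity norm of the critic's input
    gradient, for a toy analytic critic D(x) = tanh(w . x).
    (In a real GAN, the gradient would come from autodiff.)"""
    s = np.tanh(x @ w)                     # critic outputs, shape (N,)
    grads = (1 - s ** 2)[:, None] * w      # dD/dx for each sample
    linf = np.abs(grads).max(axis=1)       # L-infinity norm per sample
    # Hinge: only norms exceeding 1 are penalized (squared here)
    return np.mean(np.maximum(0.0, linf - 1.0) ** 2)

rng = np.random.default_rng(0)
# Include the origin so at least one gradient equals w exactly
x = np.vstack([np.zeros((1, 8)), rng.normal(size=(63, 8))])
small_w = 0.1 * np.ones(8)   # gradients well inside the unit Linf ball
big_w = 5.0 * np.ones(8)     # gradients that violate the constraint
no_penalty = linf_hinge_gradient_penalty(small_w, x)
penalty = linf_hinge_gradient_penalty(big_w, x)
```

Unlike the classic WGAN-GP two-sided penalty, a hinge leaves the critic unconstrained wherever the gradient norm is already within the bound.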
Pham Hoang Nhat retweeted
Excited to release a
@PyTorch library for 3D deep learning! Check it out, and give us feedback! Great effort by @krrish94, Edward Smith, JF Lafleche, @Caenorst, Artem Rozantsev, Tommy Xiang, Gav State, @RevLebaredian, @NvidiaAI. We plan to extend it with many exciting features! https://twitter.com/NvidiaAI/status/1194680942536736768
Pham Hoang Nhat retweeted
I really enjoyed this paper - currently anonymous, but one of the highest scoring in ICLR reviews - that integrates topic models and language models to generate word-level text conditioned on dynamic, sentence-level topic distributions.
#mlwritingmonth https://openreview.net/forum?id=Byl1W1rtvH