Tweets
- Tweets, current page.
- Tweets & replies
- Media
Timothy Liu Retweeted
It's not just PC gaming in the cloud. It's GeForce gaming in the cloud. Give yourself the #PowerToPlay with GeForce NOW — anywhere, any device, on demand. The wait is over. Available now. Learn more → http://nvda.ws/36JsSTL pic.twitter.com/EsNjbKej8x
Timothy Liu Retweeted
IEEE Fraud @Kaggle Challenge 1st Place Solution with @rapidsai library: https://www.kaggle.com/cdeotte/rapids-feature-engineering-fraud-0-96 … #ml #ai #ds #machinelearning
Timothy Liu Retweeted
No love for @sutdsg? @MOEsg ask boss come support our students. In SUTD we work 25h/day to create a better world by design. Losing housing to make room for LOA students was understandable but devastating. @STcom @ChannelNewsAsia cover our story
Timothy Liu Retweeted
In January, @anishathalye, @jjgort, and I ran a short class at @MIT_CSAIL on topics we think are missing in most CS programs — tools we use every day that everyone should know, like bash, git, vim, and tmux. And now the lecture notes and videos are online! https://missing.csail.mit.edu/ pic.twitter.com/xNSlLgJfd4
Timothy Liu Retweeted
The full set of my 2019 graduate-level computer architecture course lectures at ETH Zurich is online, along with all lecture videos, slides, and course materials. Course schedule: https://lnkd.in/egEFGer YouTube playlist: https://lnkd.in/eaR7J_T First… https://lnkd.in/ewnHEBC
Timothy Liu Retweeted
Wow: Google's "Meena" chatbot was trained on a full TPUv3 pod (2048 TPU cores) for **30 full days** - That's more than $1,400,000 of compute time to train this chatbot model. (! 100+ petaflops of sustained compute !) pic.twitter.com/BiPdTTG5E9
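The $1.4M figure above is back-of-envelope arithmetic from pod size and duration. A minimal sketch, assuming an on-demand rate of roughly $1 per TPUv3 core-hour (a Cloud TPU v3-8 was about $8/hour at the time; Google's internal cost is unknown, so this is illustrative only):

```python
# Back-of-envelope training cost estimate for Meena on a full TPUv3 pod.
cores = 2048                 # TPUv3 cores in a full pod (from the tweet)
days = 30                    # reported training duration
rate_per_core_hour = 1.00    # USD, assumed from public v3-8 on-demand pricing

hours = days * 24                            # 720 hours of wall-clock time
cost = cores * hours * rate_per_core_hour    # total core-hours times rate
print(f"~${cost:,.0f}")                      # roughly $1.47M, consistent with "$1,400,000+"
```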
Timothy Liu Retweeted
TF and PyTorch are two that are not going to be solved by HIP. cuDNN has a big head start and I don’t see anyone being able to get performance parity without a different approach (tensor compiler like MLIR). Big project. My guess is they punt and backfill with field eng.
Timothy Liu Retweeted
Learn to build an interactive Transformer attention visualization based on @huggingface and @d3js_org in under 30 minutes! We developed a minimal teaching example for our @MIT_CSAIL IAP class, publicly available here: http://bit.ly/attnvis @sebgehr @davidbau #NLProc #XAI #Vis pic.twitter.com/CLOHbRT0vA
Show this thread
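The data such a visualization renders is a per-head matrix of token-to-token attention weights. A minimal sketch of how those weights are computed (scaled dot-product attention), using random arrays in place of real @huggingface model activations, so all shapes and values here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens, d_head = 6, 16                  # 6 tokens, 16-dim head (assumed sizes)
Q = rng.standard_normal((tokens, d_head))   # stand-ins for query projections
K = rng.standard_normal((tokens, d_head))   # stand-ins for key projections

scores = Q @ K.T / np.sqrt(d_head)      # (tokens, tokens) similarity scores
# Row-wise softmax: each row becomes a probability distribution over tokens.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

# Row i is what the visualization draws: how much token i attends to each token.
print(weights.shape, weights.sum(axis=-1))
```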
Timothy Liu Retweeted
.@NVIDIA is the only place crazy enough to conceive of rendering 4K 60+ Hz graphics at home by running a neural network for every frame. DLSS: Trained in @PyTorch, running on the Tensor Cores in RTX GPUs, redefining graphics with AI. And we’re just getting started! https://twitter.com/wiredp/status/1222943514742415360 …
Timothy Liu Retweeted
this is mesmerising (brickbrosproductions on insta) pic.twitter.com/Gu6iX9SZ89
Timothy Liu Retweeted
Tired of looking at your #cybersecurity logs in silos? Increase the quantity and variety of logs analyzed with @RAPIDSai #CLX. In our blog, we analyze 300k+ raw alerts in under 3 seconds, including co-occurrence analysis and rolling time series of alerts - https://nvda.ws/2uA2LB6
Timothy Liu Retweeted
experienced pretty much the same and i bet others did too. don’t get fooled by the reduced FLOPs! the speed of depthwise conv can be disappointing! https://twitter.com/timothy_lkh_/status/1221085171212079106 …
Updated my blog post on the performance of depthwise separable convolutions, with additional analysis from profiling the GPU kernels and comparisons between GPU, CPU, and TPU. https://tlkh.dev/depsep-convs-perf-investigations/ …
Timothy Liu Retweeted
Pascal's paper on the "expected gradient" is published: https://distill.pub/2020/attribution-baselines/ …
Timothy Liu Retweeted
So far, this seemed to give the best context (from a Pulitzer Prize winner who covered SARS): https://www.cnn.com/2020/01/24/opinions/wuhan-coronavirus-china-strategy-garrett/index.html …
Timothy Liu Retweeted
Team #DevSpaceSG wishes our community friends a very Happy Lunar New Year! May good health, good luck, and happiness fill your home throughout the year.
pic.twitter.com/6DLFSEq5Wm
Timothy Liu Retweeted
Interesting analysis suggesting that the reason for the disappointing performance of many modern CNN architectures is that their depthwise convolutions are memory-bound. https://twitter.com/timothy_lkh_/status/1220686583889719296 … pic.twitter.com/nXafyOseH3
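"Memory-bound" here means the kernel's arithmetic intensity (FLOPs per byte of memory traffic) is too low to keep the GPU's math units busy. A rough sketch of why, comparing an idealized standard 3x3 convolution against a depthwise 3x3 convolution; the layer sizes are arbitrary assumptions, and only compulsory fp32 traffic (read input and weights once, write output once) is counted:

```python
def conv_intensity(H, W, C, k, depthwise):
    """Idealized FLOPs-per-byte for a kxk conv on an HxWxC feature map."""
    # Multiply-accumulate count: depthwise filters touch one channel each,
    # a standard conv mixes all C input channels into each of C outputs.
    flops = 2 * H * W * k * k * (C if depthwise else C * C)
    weights = k * k * (C if depthwise else C * C)
    bytes_moved = 4 * (H * W * C      # read input once
                       + weights      # read filters once
                       + H * W * C)   # write output once
    return flops / bytes_moved

std = conv_intensity(56, 56, 256, 3, depthwise=False)
dw = conv_intensity(56, 56, 256, 3, depthwise=True)
print(f"standard: {std:.1f} FLOPs/byte, depthwise: {dw:.1f} FLOPs/byte")
```

The depthwise kernel lands around 2 FLOPs/byte versus hundreds for the standard conv, far below the roofline ridge point of modern GPUs, so its runtime is set by memory bandwidth rather than by its (much smaller) FLOP count.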
Timothy Liu Retweeted
Scaling Laws for Neural Language Models. The OpenAI team found that the loss of a LM scales as a power law with model size, dataset size, and the amount of compute used for training, across up to seven orders of magnitude. https://arxiv.org/abs/2001.08361
Show this thread
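The model-size law from that paper has the form L(N) = (N_c / N)^alpha_N. A minimal sketch evaluating it; the constants below (alpha_N ≈ 0.076, N_c ≈ 8.8e13 non-embedding parameters) are the paper's reported fit and should be treated as approximate:

```python
def loss_from_params(N, alpha_N=0.076, Nc=8.8e13):
    """Predicted LM cross-entropy loss (nats/token) from model size alone,
    per the power-law fit in arXiv:2001.08361 (constants approximate)."""
    return (Nc / N) ** alpha_N

# Loss falls smoothly as parameter count grows across orders of magnitude.
for N in (1e6, 1e8, 1e10):
    print(f"N={N:.0e}: L ~ {loss_from_params(N):.2f}")
```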
Timothy Liu Retweeted
My blogpost on how & why we use convolutional neural networks as a model of the visual system is probably the most read thing I've ever written and it's now been expanded & updated into a proper review article, complete with 136 references & 5 new figures! https://arxiv.org/abs/2001.07092 pic.twitter.com/xRqYKptTsF