Tweets
Michal Wolski Retweeted
Just to clarify some misconceptions from this thread: PyTorch has supported higher-order (reverse-mode) differentiation for years now, while forward mode and auto-vectorization (without restriction to a functional subset of Python!) are on their way!
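To make the higher-order claim above concrete, here is a minimal double-backward sketch in PyTorch (a generic example, not from the thread): a second derivative is obtained by keeping the graph of the first gradient alive.

    import torch

    x = torch.tensor(2.0, requires_grad=True)
    y = x ** 3

    # First derivative dy/dx = 3x^2; create_graph=True builds a graph
    # for the gradient itself, so it can be differentiated again.
    (dy_dx,) = torch.autograd.grad(y, x, create_graph=True)

    # Second derivative d2y/dx2 = 6x, computed by reverse mode over
    # the first gradient.
    (d2y_dx2,) = torch.autograd.grad(dy_dx, x)

    print(dy_dx.item(), d2y_dx2.item())  # 12.0 12.0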
Michal Wolski Retweeted
Finally can reveal our #ICLR2020 paper on ELECTRA, much more efficient than existing pretraining, state-of-the-art results; more importantly, trainable with one GPU! Key idea is to have losses on all tokens. Joint work with @clark_kev, @chrmanning, @quocleix. https://openreview.net/forum?id=r1xMH1BtvB https://twitter.com/colinraffel/status/1197064951174533120 pic.twitter.com/2MdLJRMmvz
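A hedged sketch of the "losses on all tokens" idea (replaced-token detection, per the ELECTRA paper): instead of an MLM loss on only the ~15% masked positions, a discriminator gets a binary original-vs-replaced loss at every position. The names below are illustrative, not the authors' code.

    import torch
    import torch.nn.functional as F

    def replaced_token_detection_loss(disc_logits, is_replaced):
        # disc_logits: (batch, seq_len) scores from the discriminator
        # is_replaced: (batch, seq_len) 1.0 where a small generator
        #              swapped in a plausible fake token
        # The loss covers ALL positions, not just masked ones.
        return F.binary_cross_entropy_with_logits(disc_logits, is_replaced)

    logits = torch.randn(8, 128)
    labels = torch.bernoulli(torch.full((8, 128), 0.15))
    print(replaced_token_detection_loss(logits, labels))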
And it looks like this: https://www.codepedia.org/ama/a-cleaner-multi-stage-deployment-on-kubernetes-of-a-create-react-app-with-kustomize-helm-and-skaffold Render your html with 4 gigs of js, in a docker container on kubernetes using kustomize and skaffold. #webscale!
Michal Wolski Retweeted
By unnoticed you mean published for just a year? https://arxiv.org/abs/1812.11118
Michal Wolski Retweeted
Here are the results of this poll: Sanders 24%, Warren 22%, Biden 14%, Buttigieg 12%. Now look at the headline. It takes the LA Times three paragraphs to mention who is leading. https://www.latimes.com/politics/story/2019-12-05/democrats-2020-race-california-poll pic.twitter.com/gi1hw0KN8M
Michal Wolski Retweeted
That Bernie grasped this so clearly in 1991 is nothing short of remarkable. Whatever you think of #Bernie2020, please watch this. It's hard not to contemplate all that we have lost by treating people like Sanders like a crank (and Larry Summers like a prophet). Can we stop now??? https://twitter.com/_ericblanc/status/1201501049321902080 [Quoted Tweet unavailable.]
Michal Wolski Retweeted
SuperGlue: Learning Feature Matching with Graph Neural Networks. "A neural model that simultaneously performs context aggregation, feature matching, and filtering in a single unified architecture." https://arxiv.org/abs/1911.11763 #ComputerVision #Robotics pic.twitter.com/4U8cVrUQ9g
Michal Wolski Retweeted
We also introduce a technique [https://arxiv.org/abs/1911.11134] for training neural networks that are sparse throughout training from a random initialization - no luck required, all initialization "tickets" are winners. pic.twitter.com/fA7VmXrj20
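For flavor, a hedged sketch of one dynamic sparse-training update in the spirit of the linked paper: drop the lowest-magnitude active weights, regrow connections where the dense gradient is largest. The paper's exact criteria and schedule may differ, and all names below are illustrative.

    import torch

    def drop_and_regrow(weight, grad, mask, k):
        active = mask.bool()
        inactive = ~active  # snapshot before dropping, so freshly dropped
                            # weights are not regrown in the same step

        # Drop: zero out the k smallest-magnitude active weights.
        magnitudes = weight.abs().masked_fill(inactive, float("inf"))
        drop_idx = torch.topk(magnitudes.flatten(), k, largest=False).indices
        mask.view(-1)[drop_idx] = 0.0

        # Regrow: activate the k inactive weights with the largest |gradient|.
        scores = grad.abs().masked_fill(active, float("-inf"))
        grow_idx = torch.topk(scores.flatten(), k, largest=True).indices
        mask.view(-1)[grow_idx] = 1.0
        weight.data.view(-1)[grow_idx] = 0.0  # new connections start at zero

        weight.data.mul_(mask)  # enforce the fixed sparsity budget
        return mask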
Michal Wolski Retweeted
Code release for #NeurIPS2019 paper on Learning Data Manipulation: learning to augment and re-weight data in a low-data regime or in the presence of imbalanced labels. https://arxiv.org/abs/1910.12795 Code: https://github.com/tanyuqian/learning-data-manipulation via @ZhitingHu & Bowen Tan.
Michal Wolski Retweeted
The TLDR of the paper: use adversarial examples as training data augmentation, maintain separate BatchNorm for normal vs adversarial examples. Neat. As usual I've ported & tested #PyTorch weights: https://github.com/rwightman/gen-efficientnet-pytorch
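A minimal sketch of the separate-BatchNorm trick described above, assuming the simplest reading (shared weights, per-distribution normalization statistics); this is illustrative, not the paper's or the repo's code.

    import torch
    import torch.nn as nn

    class DualBNBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1)
            self.bn_clean = nn.BatchNorm2d(channels)  # stats for clean inputs
            self.bn_adv = nn.BatchNorm2d(channels)    # stats for adversarial inputs

        def forward(self, x, adversarial=False):
            # Clean and adversarial batches come from different distributions,
            # so each gets its own running statistics; everything else is shared.
            bn = self.bn_adv if adversarial else self.bn_clean
            return torch.relu(bn(self.conv(x)))

    block = DualBNBlock(16)
    clean = torch.randn(4, 16, 32, 32)
    out_clean = block(clean)                   # normalized with bn_clean
    out_adv = block(clean, adversarial=True)   # normalized with bn_adv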
Michal Wolski Retweeted
Excited to share our work on efficient neural architectures for object detection! New state-of-the-art accuracy (51 mAP on COCO for single-model single-scale), with an order-of-magnitude better efficiency! Collaborated with @quocleix and @ruomingpang. https://twitter.com/quocleix/status/1197682880173920256
Michal Wolski Retweeted
How Americans put up with their horrible health insurance non-system is beyond me. No one else in the developed world ever has to go through this. No one. Ever. https://www.theguardian.com/us-news/2019/nov/14/health-insurance-medical-bankruptcy-debt?CMP=share_btn_fb
Michal Wolski Retweeted
Want to improve accuracy and robustness of your model? Use unlabeled data! Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C and P). Link: https://arxiv.org/abs/1911.04252 pic.twitter.com/0umSnX7wui
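A hedged sketch of the self-training loop the tweet describes (a teacher pseudo-labels unlabeled images, a student trains on labeled plus pseudo-labeled data); the paper adds noise/augmentation and iteration details not shown here, and every name below is illustrative.

    import torch

    def self_training_round(teacher, student, labeled_loader,
                            unlabeled_loader, optimizer, loss_fn):
        teacher.eval()
        student.train()
        for (x_l, y_l), x_u in zip(labeled_loader, unlabeled_loader):
            with torch.no_grad():
                y_u = teacher(x_u).argmax(dim=1)  # hard pseudo-labels
            # Train the student on real labels and pseudo-labels together.
            x = torch.cat([x_l, x_u])
            y = torch.cat([y_l, y_u])
            optimizer.zero_grad()
            loss = loss_fn(student(x), y)
            loss.backward()
            optimizer.step()
        return student  # the student becomes the next round's teacher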
Michal Wolski Retweeted
Our new paper: Unsupervised Cross-lingual Representation Learning at Scale https://arxiv.org/pdf/1911.02116.pdf We release XLM-R, a Transformer MLM trained in 100 langs on 2.5 TB of text data. Double-digit gains on XLU benchmarks + strong per-language performance (~XLNet on GLUE). [1/6] pic.twitter.com/0RX1ljGuri
Michal Wolski Retweeted
Our new pretrained model gives SoTA on all generation tasks (with gains of up to 6 ROUGE) while matching the performance of RoBERTa on NLU. Our 400M-parameter model outperforms the recent T5 770M model on NLU and the 11B one on CNN/DM. https://twitter.com/arxiv_cs_cl/status/1189707209690796032