Tweets
To see these results in context, view the Cityscapes benchmark here: https://paperswithcode.com/sota/semantic-segmentation-on-cityscapes … pic.twitter.com/OKs07ou7jk
The authors of HRNet (OCR + SegFix), first on the Cityscapes leaderboard for semantic segmentation, have released their source code. Get the paper, code and results here: https://paperswithcode.com/paper/object-contextual-representations-for … https://twitter.com/RainbowMcDreamy/status/1218215963335675905?s=20 …
We've updated the leaderboard graphs. Now much easier to see the methods that contributed to task progress over time. https://paperswithcode.com/sota/image-classification-on-imagenet … pic.twitter.com/55K9c68CuR
We’re excited to announce we’re joining Facebook AI! Read more about it here: https://medium.com/paperswithcode/papers-with-code-is-joining-facebook-ai-90b51055f694 …
@facebookai pic.twitter.com/WSs0UFkPok
NeurIPS 2019 Implementations - get all the papers with code in one place here: https://paperswithcode.com/conference/neurips-2019-12 … pic.twitter.com/Bxu91y32pl
Papers with Code Retweeted
Another view of Noisy Student: semi-supervised learning is great even when labeled data is plentiful! 130M unlabeled images yields a 1% gain over the previous ImageNet SOTA that uses 3.5B weakly labeled examples! Joint work w/ @QizheXie, Ed Hovy, @quocleix https://paperswithcode.com/sota/image-classification-on-imagenet … https://twitter.com/quocleix/status/1194334947156193280 …
Repositories are classified by framework by inspecting the contents of every GitHub repository and checking for imports in the code. This differs from previous analyses, which used proxies for usage such as paper mentions.
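A minimal illustration of this kind of import-based classification (a sketch under stated assumptions, not the actual Papers with Code pipeline; the framework list and helper names below are invented for the example):

```python
import re
from pathlib import Path

# Map a framework label to the top-level modules whose import signals its use.
# This list is an assumption for illustration, not the real production mapping.
FRAMEWORK_IMPORTS = {
    "pytorch": {"torch"},
    "tensorflow": {"tensorflow"},
    "jax": {"jax"},
    "mxnet": {"mxnet"},
}

# Matches the first module name in `import foo` or `from foo import ...` lines.
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)

def classify_repo(repo_path: str) -> set:
    """Return the set of frameworks whose modules are imported anywhere in the repo."""
    found = set()
    for py_file in Path(repo_path).rglob("*.py"):
        try:
            text = py_file.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        modules = set(IMPORT_RE.findall(text))
        for framework, roots in FRAMEWORK_IMPORTS.items():
            if modules & roots:
                found.add(framework)
    return found

# Example usage on a locally cloned repository:
# classify_repo("path/to/cloned/repo")  ->  {"pytorch"}
```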
Trends - track the popularity of deep learning frameworks for paper implementations. Current share in Q3 2019: PyTorch 38% (up 6%), TensorFlow 22% (down 2%), other 39% (down 3%). https://paperswithcode.com/trends pic.twitter.com/VIEnKT7Qbf
ICCV 2019 Implementations - get all the papers with code in one place here: https://paperswithcode.com/conference/iccv-2019-10 … pic.twitter.com/rpQBPGy22w
New state-of-the-art for several NLP tasks: Text-to-Text Transfer Transformer (T5). Combines insights from a systematic study of transfer learning in NLP, introduces and uses a new 745GB corpus (C4), and scales up model sizes. Code and comparisons here: https://paperswithcode.com/paper/exploring-the-limits-of-transfer-learning … pic.twitter.com/lgzO2TS5re
Papers with Code Retweeted
New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: https://arxiv.org/abs/1910.10683 Code/models/data/etc: https://git.io/Je0cZ Summary
(1/14) pic.twitter.com/VP1nkkHefB
Introducing sotabench: a new service with the mission of benchmarking every open source ML model. We run GitHub repos on free GPU servers to capture their results: compare to papers and other models, and see speed/accuracy trade-offs. Check it out: http://sotabench.com
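Sotabench's own API isn't shown in the tweet, so the following is only a generic, hypothetical sketch of the workflow it describes: run a model over an evaluation set and compare the measured accuracy against a paper-reported figure. The model, data and reference number below are placeholders.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def evaluate(model: torch.nn.Module, loader: DataLoader, device: str = "cpu") -> float:
    """Compute top-1 accuracy of a classifier over a dataloader."""
    model.eval().to(device)
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1)
            correct += (preds == labels.to(device)).sum().item()
            total += labels.numel()
    return correct / total

if __name__ == "__main__":
    # Dummy stand-ins for a real checkpoint and a real evaluation split.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    data = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
    measured = evaluate(model, DataLoader(data, batch_size=64))
    paper_reported = 0.761  # placeholder for an accuracy claimed in a paper
    print(f"measured={measured:.3f}  paper={paper_reported:.3f}  gap={measured - paper_reported:+.3f}")
```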
Join us next Thursday at the @PyTorch developer conference for an exciting update on Papers With Code and where we are headed next... https://pytorch.fbreg.com/schedule pic.twitter.com/chijONJUk3
New state-of-the-art for object detection on COCO. Liu et al. introduce a composite backbone architecture that extracts more representational basic features than the original backbone (trained for image classification). Code & comparisons here: https://paperswithcode.com/paper/cbnet-a-novel-composite-backbone-network … pic.twitter.com/vZpbYeOd58
Papers with Code Retweeted
With a bigger training set, our 8.3B parameter GPT-2 model now gets a WikiText-103 perplexity of 10.8 (previous SOTA 16.3), and a Lambada whole word accuracy of 66.5% (previous SOTA 63.24%). Updated results in our blog post: https://nv-adlr.github.io/MegatronLM
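For context on the metric: perplexity is the exponential of the average per-token negative log-likelihood, so lower is better. A tiny sketch with made-up log-probabilities:

```python
import math

# Hypothetical per-token log-probabilities (natural log) from a language model.
token_logprobs = [-2.1, -0.3, -1.7, -0.9]

# Perplexity = exp(mean negative log-likelihood per token). A WikiText-103
# perplexity of 10.8 means the model is, on average, about as uncertain as a
# uniform choice over ~10.8 tokens at each step.
nll = -sum(token_logprobs) / len(token_logprobs)
print(math.exp(nll))
```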
Evaluation results and more information about RoBERTa can be accessed at https://paperswithcode.com/paper/roberta-a-robustly-optimized-bert-pretraining …
RoBERTa is now included in the @huggingface library. https://twitter.com/huggingface/status/1162346749194903553?s=21 …
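A minimal sketch of loading RoBERTa through the Hugging Face library, written against today's `transformers` package (the successor of the pytorch-transformers release mentioned at the end of this feed); the exact import paths at the time of the tweet may have differed:

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

# Download the pretrained RoBERTa-base tokenizer and encoder.
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")

# Encode a sentence and run a forward pass to get contextual embeddings.
inputs = tokenizer("Papers with Code indexes ML papers.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```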
Papers with Code Retweeted
Here’s how we trained an 8.3B parameter GPT-2. We alternate row- and column-partitioning in the Transformer in order to remove synchronization and use hybrid model/data parallelism. 15 PFlops sustained on 512 GPUs. Details and code: https://nv-adlr.github.io/MegatronLM pic.twitter.com/sEk4q0hU7T
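A toy illustration of the partitioning idea in the tweet (a sketch of the math, not Megatron-LM's actual code): split the first weight matrix by columns and the second by rows, so each partition computes its share of two back-to-back matmuls independently and only one reduction is needed at the end.

```python
import torch

torch.manual_seed(0)
X = torch.randn(4, 8)          # activations
A = torch.randn(8, 16)         # first weight matrix (e.g. MLP up-projection)
B = torch.randn(16, 8)         # second weight matrix (e.g. MLP down-projection)

A1, A2 = A[:, :8], A[:, 8:]    # column-parallel split of A
B1, B2 = B[:8, :], B[8:, :]    # row-parallel split of B

# Each "device" computes X @ A_i @ B_i with no communication in between.
# (In the real MLP an elementwise nonlinearity sits between the two matmuls;
# since it acts per element, it can still be applied independently per column shard.)
partial1 = X @ A1 @ B1
partial2 = X @ A2 @ B2

Y_parallel = partial1 + partial2   # the single all-reduce at the end
Y_reference = X @ A @ B

print(torch.allclose(Y_parallel, Y_reference, atol=1e-5))  # True
```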
An Animated History of ImageNet: from AlexNet to FixResNeXt-101. See the full table and add more results here: https://www.paperswithcode.com/sota/image-classification-on-imagenet … pic.twitter.com/zmyXWrXyAJ
Papers with Code Retweeted
FixResNeXt is currently #1 on the Image Classification on ImageNet leaderboard! We propose a simple & efficient strategy to jointly optimize the train and test resolutions, which improves classifier accuracy and/or reduces training time. https://paperswithcode.com/sota/image-classification-on-imagenet … pic.twitter.com/9x5uxxs76F
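A hedged sketch of what such a train/test-resolution strategy could look like in practice; the specific choice here (fine-tune only the final classifier at a larger test resolution) is an assumption for illustration, not the authors' exact recipe.

```python
import torch
import torchvision

# Stand-in for a network already trained at the usual 224x224 crops.
model = torchvision.models.resnet50()

# Freeze the backbone; adapt only the final classifier at the larger test resolution.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

# One toy fine-tuning step at 320x320 instead of the original training resolution.
images = torch.randn(8, 3, 320, 320)
labels = torch.randint(0, 1000, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```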
PyTorch-Transformers 1.1.0 is live