Papers with Code

@paperswithcode

A free resource for researchers and practitioners to find and follow the latest state-of-the-art ML papers and code.

London, UK
Joined December 2018

Tweets


  1. Jan 17

    To see these results in context, view the Cityscapes benchmark here:

  2. Jan 17

    The authors of HRNet (OCR + SegFix), first on the Cityscapes leaderboard for semantic segmentation, have released their source code. Get the paper, code and results here:

  3. Jan 17

    📈 We've updated the leaderboard graphs. It's now much easier to see which methods contributed to task progress over time.

  4. Dec 13, 2019
  5. Dec 6, 2019

    NeurIPS 2019 Implementations - get all the papers with code in one place here:

  6. Retweeted
    Nov 12, 2019

    Another view of Noisy Student: semi-supervised learning is great even when labeled data is plentiful! 130M unlabeled images yield a 1% gain over the previous ImageNet SOTA, which uses 3.5B weakly labeled examples! Joint work w/ , Ed Hovy,

  7. Oct 31, 2019

    Repositories are classified by framework by inspecting the contents of every GitHub repository and checking for imports in the code. This differs from previous analyses, which used proxies for usage such as paper mentions.

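A minimal sketch of this kind of import-based framework detection (the patterns and framework names below are our own illustration; the actual Papers with Code classifier is not shown here):

```python
import re
from pathlib import Path

# Hypothetical import patterns -- a real classifier would cover many more
# frameworks and import forms (e.g. "import torch.nn as nn").
FRAMEWORK_IMPORTS = {
    "pytorch": re.compile(r"^\s*(?:import|from)\s+torch\b", re.M),
    "tensorflow": re.compile(r"^\s*(?:import|from)\s+tensorflow\b", re.M),
    "jax": re.compile(r"^\s*(?:import|from)\s+jax\b", re.M),
}

def classify_repo(repo_dir: str) -> set[str]:
    """Return the set of frameworks imported anywhere in a repo's .py files."""
    found = set()
    for path in Path(repo_dir).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than fail the whole scan
        for name, pattern in FRAMEWORK_IMPORTS.items():
            if pattern.search(text):
                found.add(name)
    return found
```

Scanning actual imports rather than README mentions avoids counting repos that merely discuss a framework without using it.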
  8. Oct 31, 2019

    Trends 📈 - track the popularity of deep learning frameworks for paper implementations. Current share in Q3 2019: PyTorch 38% (up 6%), TensorFlow 22% (down 2%), other 39% (down 3%)

  9. Oct 28, 2019

    ICCV 2019 Implementations - get all the papers with code in one place here:

  10. Oct 24, 2019

    New state-of-the-art for several NLP tasks: Text-to-Text Transfer Transformer (T5). Combines insights from a systematic study of transfer learning in NLP, introduces and uses a new 745GB corpus (C4), and scales up model sizes. Code and comparisons here:

  11. Retweeted
    Oct 23, 2019

    New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: Code/models/data/etc: Summary ⬇️ (1/14)

  12. Oct 10, 2019

    🎉 Introducing sotabench: a new service with the mission of benchmarking every open-source ML model. We run GitHub repos on free GPU servers to capture their results: compare them to papers and other models, and see speed/accuracy trade-offs. Check it out:

  13. Oct 4, 2019

    Join us next Thursday at the developer conference for an exciting update on Papers With Code and where we are headed next...

  14. Sep 16, 2019

    New state-of-the-art for object detection on COCO. Liu et al. introduce a composite backbone architecture that extracts more representative basic features than the original backbone (trained for image classification). Code & comparisons here:

  15. Retweeted
    Sep 6, 2019

    With a bigger training set, our 8.3B parameter GPT-2 model now gets a WikiText-103 perplexity of 10.8 (previous SOTA 16.3), and a Lambada whole word accuracy of 66.5% (previous SOTA 63.24%). Updated results in our blog post:

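For reference, perplexity is the exponential of the mean per-token negative log-likelihood, so the drop from 16.3 to 10.8 corresponds directly to a lower average cross-entropy loss. A minimal illustration (the helper name is ours):

```python
import math

def perplexity(mean_nll_nats: float) -> float:
    """Perplexity = exp(mean per-token negative log-likelihood, in nats)."""
    return math.exp(mean_nll_nats)

# A WikiText-103 perplexity of 10.8 corresponds to a mean token loss of
# ln(10.8) ~= 2.38 nats; the previous SOTA of 16.3 corresponds to ~2.79 nats.
loss_new = math.log(10.8)   # ~2.38
loss_old = math.log(16.3)   # ~2.79
```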
  16. Aug 16, 2019

    Evaluation results and more information about RoBERTa can be accessed at

  17. Aug 16, 2019
  18. Retweeted
    Aug 13, 2019

    Here's how we trained an 8.3B-parameter GPT-2. We alternate row and column partitioning in the Transformer in order to remove synchronization, and use hybrid model/data parallelism. 15 PFlops sustained on 512 GPUs. Details and code:

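The row/column-partitioning idea can be illustrated on a two-layer MLP block (a toy NumPy sketch on two simulated "devices", not the released Megatron-LM code): splitting the first weight matrix by columns and the second by rows lets each device compute independently, with a single sum (the all-reduce) as the only cross-device synchronization.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))      # batch of activations, replicated on both devices
W1 = rng.standard_normal((8, 16))    # first linear layer
W2 = rng.standard_normal((16, 8))    # second linear layer

# Column-parallel first layer: each device holds half the columns of W1.
# ReLU is elementwise, so it can be applied locally to each shard.
h0 = np.maximum(x @ W1[:, :8], 0)    # device 0
h1 = np.maximum(x @ W1[:, 8:], 0)    # device 1

# Row-parallel second layer: each device holds the matching rows of W2.
# The "+" below is the one all-reduce needed for the whole block.
y = h0 @ W2[:8, :] + h1 @ W2[8:, :]

# Reference: the unpartitioned computation gives the same result.
y_ref = np.maximum(x @ W1, 0) @ W2
```

Alternating column- then row-parallel layers is what avoids a synchronization between the two matmuls: the intermediate activations never need to be gathered.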
  19. Aug 5, 2019

    An Animated History of ImageNet: from AlexNet to FixResNeXt-101. See the full table and add more results here:

  20. Retweeted

    FixResNeXt is currently #1 on the ImageNet image classification leaderboard! We propose a simple & efficient strategy to jointly optimize the train and test resolutions, which improves classifier accuracy and/or reduces training time.


