Tweets
You have blocked @pragmaticml. Are you sure you want to view these Tweets? Viewing them won't unblock @pragmaticml.
Pinned Tweet
Finetune 0.8.3 is out! It's a minor release only in name. Featuring:
Much more complete documentation!
The small but mighty DistilBERT from the @huggingface team!
GPT-2 774M from @OpenAI!
Github: https://github.com/IndicoDataSolutions/finetune
Documentation: https://finetune.indico.io
This is a first -- received a *handwritten* note and some beautifully designed stickers in the mail from the wonderful folks
@streamlit! They've clearly put as much attention to detail into their mail as they've put into their product :) pic.twitter.com/mWTz56qyIS
Madison May Retweeted
An Opinionated Guide to ML Research: “To make breakthroughs with idea-driven research, you need to develop an exceptionally deep understanding of your subject, and a perspective that diverges from the rest of the community—some can do it, but it’s hard.” http://joschu.net/blog/opinionated-guide-ml-research.html pic.twitter.com/fyO6cyr9im
Madison May Retweeted
Introducing the new Thinc, a refreshing functional take on deep learning!
Static type checking
Mix @PyTorch, @TensorFlow & @ApacheMXNet
Integrated config system
Extensible backends incl. JAX (experimental)
Variable-length sequences & more https://thinc.ai
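The "functional take" above can be illustrated with a toy combinator that composes layers into a single model. This is a stdlib-only sketch of the idea, not Thinc's actual API; `chain`, `relu`, and `scale` here are hypothetical stand-ins.

```python
# Toy illustration of combinator-style layer composition (the idea behind
# functional deep-learning libraries like Thinc). NOT Thinc's real API.
from typing import Callable, List

Layer = Callable[[List[float]], List[float]]

def chain(*layers: Layer) -> Layer:
    """Compose layers left to right into a single callable model."""
    def model(x: List[float]) -> List[float]:
        for layer in layers:
            x = layer(x)
        return x
    return model

def relu(x: List[float]) -> List[float]:
    return [max(0.0, v) for v in x]

def scale(factor: float) -> Layer:
    def layer(x: List[float]) -> List[float]:
        return [v * factor for v in x]
    return layer

model = chain(scale(2.0), relu)
print(model([-1.0, 3.0]))  # [0.0, 6.0]
```

Because each layer is just a function, static type checkers can verify that composed input/output types line up, which is part of the appeal described in the announcement.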
Madison May Retweeted
Super interesting article about the "Dark Secrets of BERT". They did an analysis on what happens in the fine-tuned BERT, in particular on the GLUE tasks. Check it out! I enjoyed reading it!
Article: https://text-machine-lab.github.io/blog/2020/bert-secrets/ pic.twitter.com/TgLXY34oun
Madison May Retweeted
[1/2] Excited to present SMART: Semi-Autoregressive Training for Conditional Masked Language Models. SMART closes the performance gap between semi- and fully-autoregressive MT models, while retaining the benefits of fast parallel decoding. With
@omerlevy_ @LukeZettlemoyer
Madison May Retweeted
Watch my awesome mentor
@jennwvaughan talk about why it’s important to include people in the ML lifecycle. Her views on human-centered approaches and evaluation of interpretability are exactly how this field should move forward! https://twitter.com/MSFTResearch/status/1220463686914994178
Madison May Retweeted
Scaling Laws for Neural Language Models. OpenAI team found that the loss of an LM scales as a power law with model size, dataset size, and the amount of compute used for training, across up to seven orders of magnitude. https://arxiv.org/abs/2001.08361
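The power-law form can be sketched in a few lines: loss falls as L(N) = (N_c / N)^α with model size N, so doubling the model shrinks loss by a constant factor 2^-α. The constants below are roughly the fits the paper reports, but treat them as illustrative.

```python
# Hedged sketch of the power-law from "Scaling Laws for Neural Language
# Models". Constants approximate the paper's reported fit for model size
# (N_c ~ 8.8e13 non-embedding params, alpha_N ~ 0.076); illustrative only.
def loss_from_model_size(n_params: float,
                         n_c: float = 8.8e13,
                         alpha: float = 0.076) -> float:
    return (n_c / n_params) ** alpha

l1 = loss_from_model_size(1e8)   # 100M-param model
l2 = loss_from_model_size(2e8)   # 200M-param model
ratio = l2 / l1                  # doubling size multiplies loss by 2**-alpha
print(round(ratio, 4))  # 0.9487
```

The striking empirical claim is that this one functional form holds over seven orders of magnitude, which is why the constant-ratio-per-doubling behavior matters.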
Madison May Retweeted
The quiet semisupervised revolution continues https://twitter.com/D_Berthelot_ML/status/1219823580654948353
It's exciting to see alternatives to attention that avoid the O(seq_length^2) complexity. I think the Reformer (or perhaps a variant like
@arankomatsuzaki's k-means based version) is here to stay. https://twitter.com/arankomatsuzaki/status/1219016569000361984
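The O(seq_length^2) savings these attention variants chase comes from bucketing: instead of comparing every query against every key, hash (or cluster) similar positions together and only attend within a bucket. The sketch below is a toy that just counts the pair reduction; real LSH or k-means attention is far more involved, and the random bucket assignment stands in for an actual hash of the token vectors.

```python
# Toy sketch of the bucketing idea behind Reformer-style attention:
# compare only within buckets instead of all n*n query-key pairs.
# Random bucket assignment is a stand-in for an LSH hash / k-means cluster.
import random

random.seed(0)
n, n_buckets = 64, 8

# Assign each position to a bucket.
buckets = [random.randrange(n_buckets) for _ in range(n)]

full_pairs = n * n  # dense attention: 4096 comparisons
bucketed_pairs = 0
for b in range(n_buckets):
    size = buckets.count(b)
    bucketed_pairs += size * size  # only within-bucket comparisons

print(full_pairs, bucketed_pairs)  # bucketed count is far below 4096
```

With roughly balanced buckets the within-bucket count approaches n^2 / n_buckets, which is where the sub-quadratic scaling comes from.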
Madison May Retweeted
JAX's ecosystem is growing fast! https://github.com/google-research/flax/tree/prerelease pic.twitter.com/bx0zHFk7QF
Madison May Retweeted
Introducing Reformer, an efficiency-optimized
#ML architecture, based on the Transformer model for language understanding, that can handle context windows of up to 1 million words, all on a single accelerator with only 16GB of memory. Read all about it ↓ https://goo.gle/2treP7r
Madison May Retweeted
10 ML & NLP Research Highlights of 2019: new blog post on ten ML and NLP research directions that I found exciting and impactful in 2019. https://ruder.io/research-highlights-2019/ pic.twitter.com/mPoKbkOcOW
Madison May Retweeted
When Adam Gaier first tried to test weight-agnostic neural networks using random weights, we didn’t get any good results. He then introduced a bug in the code that accidentally forced all the weights to be shared, and suddenly we got much better results, and based our paper on the bug.
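The "accidental" weight sharing above is the core mechanism of weight-agnostic networks: every connection uses one shared weight value, and the architecture is judged by how it performs across a sweep of that single value. The tiny network below is a hypothetical stand-in to show the mechanic, not the paper's actual search setup.

```python
# Toy illustration of the shared-weight evaluation behind weight-agnostic
# neural networks: every connection uses the same weight w, and the same
# architecture is scored across a sweep of w. Purely illustrative.
def tiny_net(x: float, w: float) -> float:
    # Two hidden units; every connection shares the single weight w.
    h1 = max(0.0, w * x)
    h2 = max(0.0, w * x + w * h1)
    return w * h1 + w * h2

# Evaluate one fixed architecture under several shared weight values.
for w in (-2.0, -1.0, 0.5, 1.0, 2.0):
    print(w, tiny_net(1.0, w))
```

An architecture that behaves usefully across the whole sweep encodes its function in the wiring rather than in tuned weights, which is what made the shared-weight "bug" a result worth building a paper on.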
Madison May Retweeted
Reformer: The Efficient Transformer. They present techniques to reduce the time and memory complexity of the Transformer, allowing batches of very long sequences (64K) to fit on one GPU. Should pave the way for the Transformer to be really impactful beyond the NLP domain. https://openreview.net/forum?id=rkgNKkHtvB pic.twitter.com/3YwD4A5JQs
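One of Reformer's memory-reduction techniques is reversible residual layers: with y1 = x1 + F(x2) and y2 = x2 + G(y1), the inputs can be recomputed exactly from the outputs, so intermediate activations need not be stored for the backward pass. A minimal sketch, with F and G as arbitrary stand-in functions rather than real network layers:

```python
# Minimal sketch of the reversible residual coupling Reformer uses to cut
# activation memory. F and G are arbitrary stand-ins, not real layers.
def F(x: float) -> float:
    return 2.0 * x + 1.0

def G(x: float) -> float:
    return x * x

def forward(x1: float, x2: float):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1: float, y2: float):
    # Recover the inputs exactly from the outputs; no stored activations.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = 3.0, -1.5
assert inverse(*forward(x1, x2)) == (x1, x2)
```

Trading recomputation for storage this way is what lets very long sequences fit in a fixed memory budget.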
Madison May Retweeted
I wrote a retrospective on human-in-the-loop companies, a specific subset I'm calling worker-in-the-loop: https://akilian.com/2019/12/30/worker-in-the-loop-retrospective. This draws from my time building Clara for five years, and knowing many other companies in the space.
Madison May Retweeted
Ok, people! Are you looking for something to read at the intersection of machine learning and HCI? Three new papers posted online today that you should check out, all with my amazing colleague/BFF
@hannawallach! Ready? I'm gonna try a thread!
Madison May Retweeted
Merry Christmas to those who celebrate and happy holidays from Koala and I! I got some needle felting supplies for Christmas and I can't wait to try them out! I know the holidays can be hard on some people and I hope everyone is able to enjoy them as much as possible
pic.twitter.com/KxbOn2l5mC
Madison May Retweeted
Finally can reveal our
#ICLR2020 paper on ELECTRA: much more efficient than existing pretraining, state-of-the-art results; more importantly, trainable with one GPU! Key idea is to have losses on all tokens. Joint work with @clark_kev, @chrmanning, @quocleix. https://openreview.net/forum?id=r1xMH1BtvB https://twitter.com/colinraffel/status/1197064951174533120 pic.twitter.com/2MdLJRMmvz
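ELECTRA's "losses on all tokens" refers to replaced-token detection: a generator corrupts some input tokens, and the discriminator receives a 0/1 label for every position (original vs replaced), instead of learning only from the small fraction of positions BERT masks. A toy sketch of the labeling, with random replacement standing in for the trained generator:

```python
# Toy sketch of ELECTRA-style replaced-token detection: corrupt some
# tokens, then label EVERY position as original (0) or replaced (1).
# Random replacement stands in for the trained generator network.
import random

random.seed(0)
tokens = ["the", "chef", "cooked", "the", "meal"]
vocab = ["the", "chef", "cooked", "meal", "ate", "dog"]

corrupted, labels = [], []
for tok in tokens:
    if random.random() < 0.3:  # corrupt ~30% of positions
        corrupted.append(random.choice(vocab))
    else:
        corrupted.append(tok)
    # Label is 1 only if the token actually changed.
    labels.append(int(corrupted[-1] != tok))

# A discriminator would be trained on every position's label.
print(list(zip(corrupted, labels)))
```

Since every token contributes to the loss, each training example carries far more signal per forward pass, which is the source of the efficiency gain the tweet highlights.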
Madison May Retweeted
I've been working with
@GuggerSylvain for a long time to create something that combines the best of @ProjectJupyter Notebooks with the best of traditional software development approaches. It's called nbdev. We're releasing it today as open source. https://www.fast.ai/2019/12/02/nbdev/ 1/ pic.twitter.com/u9x26L4uRf
Madison May Retweeted
Introducing the SHA-RNN :)
- Read alternative history as a research genre
- Learn of the terrifying tokenization attack that leaves language models perplexed
- Get near-SotA results on enwik8 in hours on a lone GPU
No Sesame Street or Transformers allowed. https://arxiv.org/abs/1911.11423 pic.twitter.com/RN5TPZ3xWH