Tweets
Melison Retweeted
Open-domain conversation is an extremely difficult task for ML systems. Meena is a research effort at @GoogleAI in this area. It's challenging, but we are making progress towards more fluent and sensible conversations. Nice work, Daniel, @lmthang & everyone involved! https://twitter.com/GoogleAI/status/1222230622355087360 pic.twitter.com/vjkXoSUHf3
Melison Retweeted
Let me highlight this amazing work I've read recently on #compositionality in NLP, in which you'll find both:
- a deep discussion of what it means for a neural model to be compositional
- a deep and insightful comparison of LSTM, ConvNet & Transformers!
https://arxiv.org/abs/1908.08351 pic.twitter.com/LX9JQE1Ira
Melison Retweeted
Examining the Benefits of Capsule Neural Networks https://deepai.org/publication/examining-the-benefits-of-capsule-neural-networks by Arjun Punjabi et al.
#Vector #ConvolutionalNeuralNetworks
Melison Retweeted
A Gentle Introduction to Deep Learning for Graphs https://www.reddit.com/r/MachineLearning/comments/eu4ibo/r_a_gentle_introduction_to_deep_learning_for/
Melison Retweeted
Very happy to share our latest work accepted at #ICLR2020: we prove that a Self-Attention layer can express any CNN layer. 1/5
Paper: https://openreview.net/pdf?id=HJlnC1rKPB
Interactive website: https://epfml.github.io/attention-cnn/
Code: https://github.com/epfml/attention-cnn
Blog: http://jbcordonnier.com/posts/attention-cnn/ pic.twitter.com/X1rNS1JvPt
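The claim above can be made concrete in a toy setting. Below is a minimal NumPy sketch (my own illustration, not the paper's code) showing the core intuition in one dimension: if each attention head attends to a single fixed relative offset, and that head's value projection carries the kernel weight for that offset, the sum over heads reproduces a 1-D convolution exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 8, 4                       # sequence length, feature dim
x = rng.normal(size=(T, D))

offsets = [-1, 0, 1]              # a kernel of width 3
kernels = [rng.normal(size=(D, D)) for _ in offsets]  # per-offset weights

def conv1d(x):
    # y[t] = sum_k x[t + offset_k] @ W_k, with zero padding at the edges
    y = np.zeros_like(x)
    for off, W in zip(offsets, kernels):
        for t in range(T):
            s = t + off
            if 0 <= s < T:
                y[t] += x[s] @ W
    return y

def attention_as_conv(x):
    # One head per offset; each head's attention matrix is a shifted
    # identity (hard attention on position t + off), and its value
    # projection is the corresponding kernel weight.
    y = np.zeros_like(x)
    for off, W in zip(offsets, kernels):
        A = np.zeros((T, T))
        for t in range(T):
            s = t + off
            if 0 <= s < T:
                A[t, s] = 1.0
        y += A @ x @ W            # each head contributes one kernel tap
    return y

assert np.allclose(conv1d(x), attention_as_conv(x))
```

The paper's actual construction uses learned relative positional encodings to make softmax attention approximate these hard shifted-identity patterns; the sketch only shows the limiting case.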
Melison Retweeted
If I was 16 again, here's what I'd do:
- Learn copywriting/media buying
- Start working out
- Take up a martial art
- Learn NLP/hypnosis
- Get a job
- Save 50%
- Work on building a good social circle
Focus on MASTERING social, and high-income skills. And, remember: Assets > asses
Melison Retweeted
Excited to see our DistilBERT paper accepted at NeurIPS 2019 ECM^2 wkshp! 40% smaller, 60% faster than BERT => 97% of the performance on GLUE, w. a triple loss signal.
We also distilled GPT2 into an 82M-param model.
https://arxiv.org/abs/1910.01108
Code & weights: https://github.com/huggingface/transformers pic.twitter.com/nSB82ELBWD
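The "triple loss signal" mentioned above combines three terms. A minimal NumPy sketch of that shape (the weights and exact formulation here are illustrative assumptions, not the DistilBERT training code): a soft-target distillation cross-entropy against the softened teacher distribution, the usual masked-LM cross-entropy on the gold token, and a cosine loss aligning student and teacher hidden states.

```python
import numpy as np

def softmax(z, T=1.0):
    # numerically stable softmax with temperature T
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

def triple_loss(t_logits, s_logits, gold, t_hidden, s_hidden, T=2.0):
    p_t = softmax(t_logits, T)                    # softened teacher
    p_s = softmax(s_logits, T)                    # softened student
    l_soft = -(p_t * np.log(p_s)).sum()           # distillation term
    l_hard = -np.log(softmax(s_logits)[gold])     # masked-LM term
    cos = (t_hidden @ s_hidden) / (
        np.linalg.norm(t_hidden) * np.linalg.norm(s_hidden))
    l_cos = 1.0 - cos                             # hidden-state alignment
    return l_soft + l_hard + l_cos                # equal weights: assumed

rng = np.random.default_rng(1)
loss = triple_loss(rng.normal(size=5), rng.normal(size=5), gold=2,
                   t_hidden=rng.normal(size=8), s_hidden=rng.normal(size=8))
```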
Melison Retweeted
Rethinking Data Augmentation: Self-Supervision and Self-Distillation https://deepai.org/publication/rethinking-data-augmentation-self-supervision-and-self-distillation by Hankook Lee et al. including @sjinu37
#NeuralNetwork #JointDistribution
Melison Retweeted
Does BERT make BiLSTMs obsolete? Not so fast, say @ralph_tang, @LuYao_NLP, @likicode et al. You can distill BERT knowledge into BiLSTMs. Gives up some prediction accuracy for 100x fewer parameters and 15x faster inference. https://arxiv.org/abs/1903.12136 pic.twitter.com/2k0AEHMysL
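One way this kind of BERT-to-BiLSTM distillation can be set up (an assumed sketch based on the tweet's description, not the authors' code): rather than only matching the teacher's argmax label, the student regresses on the teacher's logits, which carry the teacher's relative confidence across classes, combined with the ordinary hard-label cross-entropy.

```python
import numpy as np

def distill_objective(student_logits, teacher_logits, gold, alpha=0.5):
    # Logit matching: the student mimics the teacher's raw scores.
    mse = np.mean((student_logits - teacher_logits) ** 2)
    # Hard-label cross-entropy on the gold class.
    p = np.exp(student_logits - student_logits.max())
    p /= p.sum()
    ce = -np.log(p[gold])
    # alpha balances the two terms (value here is an assumption).
    return alpha * mse + (1 - alpha) * ce

rng = np.random.default_rng(2)
obj = distill_objective(rng.normal(size=3), rng.normal(size=3), gold=0)
```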
Melison Retweeted
3/ Jiao et al. distill BERT into a network that is 7.5x smaller and 9.4x faster with no loss in accuracy [1]. @julien_c and the @huggingface team achieve similar results and have open-sourced their methods [2].
[1] https://arxiv.org/abs/1909.10351
[2] https://github.com/huggingface/transformers/tree/master/examples/distillation pic.twitter.com/ImB94yAU5g
Melison Retweeted
Replace the word vectors or the featurizer (e.g., BiLSTM) with BERT in a graph-based parser? Need to pay some attention to wordpiece-to-word mapping though. Plug: we have a graph-based parser in PyTorch at https://github.com/stanfordnlp/stanfordnlp !
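The wordpiece-to-word mapping mentioned above is needed because BERT emits one vector per subword piece while a parser operates on words. A common strategy, assumed here for illustration (not necessarily the Stanford parser's exact choice), is to average each word's subword vectors:

```python
import numpy as np

def pool_wordpieces(piece_vecs, word_ids):
    """piece_vecs: (n_pieces, d) array of subword vectors.
    word_ids[i]: index of the word that piece i belongs to.
    Returns one averaged vector per word."""
    n_words = max(word_ids) + 1
    out = np.zeros((n_words, piece_vecs.shape[1]))
    counts = np.zeros(n_words)
    for vec, w in zip(piece_vecs, word_ids):
        out[w] += vec
        counts[w] += 1
    return out / counts[:, None]

# e.g. "unaffable" -> ["un", "##aff", "##able"]: 3 pieces, 1 word
vecs = np.array([[1., 3.], [3., 5.], [5., 1.]])
pooled = pool_wordpieces(vecs, word_ids=[0, 0, 0])
assert np.allclose(pooled, [[3., 3.]])
```

Other reasonable choices include taking only the first piece of each word, or the last; which works best is an empirical question.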
Melison Retweeted
I'll Keep Coming! A Snake, a Bird, and a Little Something Else. All made again using only Procedural Animations.
#madewithunity #unity #indiegamedev #gamedev #indiedev pic.twitter.com/IBk4y7AgFz
Melison Retweeted
Neuroscience-inspired online unsupervised learning algorithms https://deepai.org/publication/neuroscience-inspired-online-unsupervised-learning-algorithms by Cengiz Pehlevan and @chklovskii
#UnsupervisedLearning #FeatureExtraction
Melison Retweeted
SpanBERT: a new pre-training method for better span representation! (w/ @danqi_chen, @yinhanl, @dsweld, @LukeZettlemoyer, @omerlevy_) Big gains on QA, SoTA on Coref and TACRED. Better pre-training tasks and objectives without any extra data/params. http://arxiv.org/abs/1907.10529 (1/6) pic.twitter.com/O1jfP4LJUS
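SpanBERT's key pre-training change is masking contiguous spans rather than individual tokens. A minimal sketch of that sampling scheme (parameters are illustrative; the paper samples span lengths from a clipped geometric distribution with roughly a 15% masking budget):

```python
import random

def mask_spans(tokens, budget=0.15, p=0.2, max_len=10, seed=0):
    """Mask ~budget of tokens in contiguous spans whose lengths follow
    a geometric(p) distribution clipped at max_len."""
    rng = random.Random(seed)
    masked = list(tokens)
    target = max(1, int(len(tokens) * budget))
    n_masked = 0
    while n_masked < target:
        length = 1
        while rng.random() > p and length < max_len:
            length += 1                    # geometric span length, clipped
        start = rng.randrange(0, len(tokens) - length + 1)
        for i in range(start, start + length):
            if masked[i] != "[MASK]":
                masked[i] = "[MASK]"
                n_masked += 1
    return masked

toks = [f"tok{i}" for i in range(20)]
out = mask_spans(toks)
assert out.count("[MASK]") >= 3            # ~15% of 20 tokens
```

The paper additionally trains a span boundary objective that predicts each masked token from the span's boundary representations; that part is omitted here.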
Melison Retweeted
We ported @openai's GPT-2 to run on-device (using Swift and CoreML on iOS)
Large transformers models can now live on the edge. 
The video below is GPT-2 running locally (no network) on the device!
Code: https://github.com/huggingface/swift-coreml-transformers
Built w/ @LysandreJik at @huggingface pic.twitter.com/wcpH7zTpFk