Tweets
Ed Hsu 許晉源 Retweeted
Organizations across industries need accurate short-term, tactical forecasts, such as the amount of goods to be ordered and the number of employees needed, to keep pace with their growth. https://ubere.ng/2DJv58r
Ed Hsu 許晉源 Retweeted
The Plato Research Dialogue System enables experts and non-experts alike to quickly build, train, and deploy conversational AI agents. https://ubere.ng/2xOcCBR
Ed Hsu 許晉源 Retweeted
We're releasing mBART, a new seq2seq multilingual pretraining system for machine translation across 25 languages. It gives significant improvements for document-level translation and low-resource languages. Read our paper to learn more: https://arxiv.org/pdf/2001.08210.pdf pic.twitter.com/tJbRcOTqik
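For readers who want to try the released models: the sketch below loads an mBART checkpoint through the Hugging Face transformers library. The checkpoint name and language code are assumptions based on the publicly released mbart-large models, not details from the tweet itself.

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# Assumed checkpoint: the released EN->RO fine-tuned mBART; swap in another
# mbart-large variant as needed.
model_name = "facebook/mbart-large-en-ro"
tokenizer = MBartTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

batch = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **batch,
    # mBART decodes conditioned on a target-language code token.
    decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```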
Ed Hsu 許晉源 Retweeted
Today we announce a novel, open-source method for text generation tasks (e.g., summarization or sentence fusion), which uses edit operations instead of generating text from scratch, leading to fewer errors and faster model execution. Read about it below. https://goo.gle/38XfRXU
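To make the edit-operation idea concrete, here is a deliberately simplified sketch of tag-then-apply text editing in plain Python. The tag set and the apply_edits helper are my own illustration of the approach, not the released implementation.

```python
def apply_edits(tokens, tags):
    """Apply (op, added_phrase) tags to source tokens; op is KEEP or DELETE."""
    out = []
    for token, (op, added) in zip(tokens, tags):
        if added:            # optional phrase inserted before this token
            out.append(added)
        if op == "KEEP":
            out.append(token)
        # op == "DELETE": the source token is dropped
    return " ".join(out)

# Sentence fusion: delete the sentence boundary and insert a connective.
tokens = "Turing was born in 1912 . He died in 1954 .".split()
tags = [("KEEP", "")] * len(tokens)
tags[5] = ("DELETE", "")      # drop the first period
tags[6] = ("DELETE", "and")   # replace "He" with "and"
print(apply_edits(tokens, tags))
# -> "Turing was born in 1912 and died in 1954 ."
```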
Ed Hsu 許晉源 Retweeted
Ever wanted to combine the NLU superpowers of BERT with the generation superpowers of GPT-2? It's now possible in transformers thanks to @remilouf! https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8 pic.twitter.com/xlsMAi7ipY
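A minimal sketch of that pairing with the transformers EncoderDecoderModel class follows; the checkpoint names are standard Hub identifiers, and since the cross-attention weights it adds start out randomly initialized, the combined model needs seq2seq fine-tuning before its generations mean anything.

```python
from transformers import EncoderDecoderModel, BertTokenizer, GPT2Tokenizer

# BERT as the encoder, GPT-2 (with cross-attention added) as the decoder.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "gpt2"
)
enc_tok = BertTokenizer.from_pretrained("bert-base-uncased")
dec_tok = GPT2Tokenizer.from_pretrained("gpt2")

inputs = enc_tok("A sentence for BERT to encode.", return_tensors="pt")
# Untrained cross-attention means this output is noise until fine-tuning.
outputs = model.generate(
    inputs.input_ids,
    decoder_start_token_id=dec_tok.bos_token_id,
    max_length=20,
)
print(dec_tok.decode(outputs[0], skip_special_tokens=True))
```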
Ed Hsu 許晉源 Retweeted
To make online search results more useful for training #AI, scientists at Facebook AI are condensing the raw text of those results into knowledge graphs for more efficient processing. https://ai.facebook.com/blog/research-in-brief-training-ai-to-answer-questions-using-compressed-search-results/ pic.twitter.com/5QbFkA5yv1
Ed Hsu 許晉源 Retweeted
How can we learn a sequence of tasks without forgetting, without class labels, and with unknown or ambiguous task boundaries? Continual Unsupervised Representation Learning. Paper: https://arxiv.org/abs/1910.14481 Code: https://github.com/deepmind/deepmind-research/tree/master/curl pic.twitter.com/3WSzWmILlB
Ed Hsu 許晉源 Retweeted
Open Set Recognition Through Deep Neural Network Uncertainty: Does Out-of-Distribution Detection Require Generative Classifiers? Mundt et al.: http://openaccess.thecvf.com/content_ICCVW_2019/papers/SDL-CV/Mundt_Open_Set_Recognition_Through_Deep_Neural_Network_Uncertainty_Does_Out-of-Distribution_ICCVW_2019_paper.pdf GitHub: https://github.com/MrtnMndt/Deep_Openset_Recognition_through_Uncertainty #ArtificialIntelligence #DeepLearning #MachineLearning pic.twitter.com/nJ9k21NssO
Ed Hsu 許晉源 Retweeted
The very impressive new ConvoKit from @Cristian_DNM and his Cornell NLP crew provides easy access to lots of conversational datasets and tools: https://convokit.cornell.edu
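A short sketch of loading one of the bundled corpora with ConvoKit (pip install convokit); the corpus identifier follows the project's documented examples, but double-check it against the current docs.

```python
from convokit import Corpus, download

# Download and load the Cornell Movie-Dialogs corpus.
corpus = Corpus(filename=download("movie-corpus"))
corpus.print_summary_stats()

# Iterate over a few conversations and their utterances.
for convo in list(corpus.iter_conversations())[:2]:
    for utt in convo.iter_utterances():
        print(utt.speaker.id, ":", utt.text[:60])
```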
[1911.02150] Fast Transformer Decoding: One Write-Head is All You Need https://arxiv.org/abs/1911.02150
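The paper's core proposal is multi-query attention: many query heads share a single key/value head, shrinking the key/value cache that dominates incremental decoding. Below is an illustrative PyTorch sketch of that idea; the shapes and names are mine, not the paper's code.

```python
import torch

def multi_query_attention(x, wq, wk, wv, n_heads):
    """x: (batch, seq, d_model); wq: (d_model, n_heads*d_head); wk, wv: (d_model, d_head)."""
    b, s, d_model = x.shape
    d_head = wk.shape[1]
    q = (x @ wq).view(b, s, n_heads, d_head).transpose(1, 2)  # (b, h, s, d_head)
    k = (x @ wk).unsqueeze(1)                                 # (b, 1, s, d_head), shared
    v = (x @ wv).unsqueeze(1)                                 # (b, 1, s, d_head), shared
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5          # (b, h, s, s)
    attn = scores.softmax(dim=-1)
    out = attn @ v                                            # v broadcasts over heads
    return out.transpose(1, 2).reshape(b, s, n_heads * d_head)

x = torch.randn(2, 5, 64)
wq = torch.randn(64, 8 * 16)   # 8 query heads of size 16
wk = torch.randn(64, 16)       # one shared key head
wv = torch.randn(64, 16)       # one shared value head
print(multi_query_attention(x, wq, wk, wv, n_heads=8).shape)  # (2, 5, 128)
```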
[1910.08282] Unsupervised Context Rewriting for Open Domain Conversation https://arxiv.org/abs/1910.08282
Ed Hsu 許晉源 Retweeted
How can computers answer questions with multi-step information needs? How can it be done efficiently and interpretably? @qi2peng2 and colleagues explain at #emnlp2019. Paper: https://arxiv.org/abs/1910.07000 Blog post: http://ai.stanford.edu/blog/answering-complex-questions/ #NLProc
[1904.09675] BERTScore: Evaluating Text Generation with BERT https://arxiv.org/abs/1904.09675
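A minimal usage sketch with the released bert-score package (pip install bert-score); the call signature follows its documented API, though the default underlying model can vary between versions.

```python
from bert_score import score

candidates = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

# P, R, F1 are tensors with one value per candidate/reference pair.
P, R, F1 = score(candidates, references, lang="en", verbose=True)
print(f"BERTScore F1: {F1[0].item():.3f}")
```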
Ed Hsu 許晉源 Retweeted
Who said that training GPT-2 or BERT was expensive? "We use 512 Nvidia V100 GPUs [...] Upon the submission of this paper, training has lasted for three months [...] and perplexity on the development set is still dropping." https://openreview.net/forum?id=Bkl8YR4YDB
A View on the Evolution of Representations in the Transformer from the Information Bottleneck Perspective: a post on the EMNLP 2019 paper https://lena-voita.github.io/posts/emnlp19_evolution.html
Introducing a Conditional Transformer Language Model for Controllable Generation https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/
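A small sketch of conditioning generation on a control code using the CTRL checkpoint available in transformers; the "Links" control code comes from the CTRL paper, but the exact checkpoint identifier and sampling settings here are assumptions worth checking against the docs.

```python
from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# Prepend a control code ("Links" is one from the paper) to steer the
# domain and style of the continuation.
prompt = "Links My favorite machine learning paper is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    inputs.input_ids,
    max_length=40,
    do_sample=True,
    top_k=40,
    repetition_penalty=1.2,  # the paper recommends a repetition penalty for CTRL
)
print(tokenizer.decode(outputs[0]))
```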
Ed Hsu 許晉源 Retweeted
The excellent interactive book "Dive into Deep Learning" has been ported to PyTorch by students at IIT Roorkee: https://github.com/dsgiitr/d2l-pytorch The book was authored by @astonzhangAZ, @zacharylipton, @mli65, et al. https://twitter.com/mza/status/1121465916699619329
Ed Hsu 許晉源 Retweeted
OpenGPT-2: We Replicated GPT-2 Because You Can Too! 1.5B Model Weights Released. Blog post: https://medium.com/@vanya_cohen/opengpt-2-we-replicated-gpt-2-because-you-can-too-45e34e6d36dc Colab: https://colab.research.google.com/drive/1esbpDOorf7DQJV8GXWON24c-EQrSKOit
Ed Hsu 許晉源 Retweeted
Facebook AI researchers are sharing an all-attention layer to simplify the Transformer model and an adaptive attention span method to make it more efficient. Even with a much simpler architecture, these methods match or improve state-of-the-art results. https://ai.facebook.com/blog/making-transformer-networks-simpler-and-more-efficient/ pic.twitter.com/349bXY4dr2
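The adaptive-span half of this work fits in a few lines: each attention head learns a span parameter z, and attention at distance d is scaled by a soft mask m_z(d) = clamp((R + z - d) / R, 0, 1), where R controls the softness of the ramp. The sketch below implements just that masking function; variable names are mine.

```python
import torch

def span_mask(distances, z, R=32.0):
    """Soft mask m_z(d) = clamp((R + z - d) / R, 0, 1) over token distances."""
    return torch.clamp((R + z - distances) / R, min=0.0, max=1.0)

# z is learned jointly with the rest of the model (one per attention head);
# an L1 penalty on z pushes heads toward the shortest span they can get by with.
z = torch.nn.Parameter(torch.tensor(100.0))
distances = torch.arange(0, 200, dtype=torch.float32)
mask = span_mask(distances, z)
print(mask[:5])   # distances well inside the span: weight 1
print(mask[-5:])  # distances beyond the span ramp: weight 0
```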
Ed Hsu 許晉源 Retweeted
Introducing Abductive-NLI! A new #CommonSenseAI benchmark dataset to test an #AI's abductive reasoning & common sense in forming explanations for a set of observations. Paper: https://arxiv.org/abs/1908.05739 Leaderboard and data: https://leaderboard.allenai.org/anli/submissions/get-started #MachineReasoning #AI2 #NLP pic.twitter.com/2BEf0aWYkh
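For context, the task format described in the paper pairs two observations with two candidate hypotheses and asks for the more plausible one. The record below is a made-up illustration of that structure, not an actual item from the released data.

```python
# Hypothetical example in the aNLI format: pick the hypothesis that better
# explains what happened between the two observations.
example = {
    "obs1": "Jenny left her laptop on the desk and went to lunch.",
    "obs2": "When she returned, the laptop was gone.",
    "hyp1": "Someone took the laptop while she was away.",   # plausible
    "hyp2": "The laptop rebooted to install updates.",       # implausible
    "label": 1,  # index of the more plausible hypothesis (1-based)
}
```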