Tweets
Pinned Tweet
This Friday at @SIStatistica #StaTalk2019, hosted by @UniTrieste, I'll be presenting a brief overview of recent advances in language modelling, with a focus on language understanding! #NLProc people, did I forget to mention something? Slides available here: https://drive.google.com/drive/folders/1BrzEQaUjin81B-d_yiWZADBGpR1Km3rs
Totally agree with this. Learning based solely on distributional properties of language is deceptive and will hold us back in the long term. Research in grounded communication among agents in naturalistic settings seems the most promising way forward! https://twitter.com/fchollet/status/1224380359615352833
Gabriele Sarti Retweeted
This is insane. Has anyone computed the carbon footprint of this? It's time for mandatory checks by an ethical committee, and for redirecting the field towards methods that allow replication. https://twitter.com/eturner303/status/1223976313544773634
Gabriele Sarti Retweeted
New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: https://arxiv.org/abs/2001.09977 Blog: https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html
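The "perplexity" metric highlighted in the paper above is just the exponential of the average negative log-likelihood the model assigns to the true tokens. A minimal sketch (toy hand-picked probabilities, not the paper's model):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-likelihood
    of the probabilities assigned to the observed tokens."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Toy example: a model assigning 0.25 to every true token has
# perplexity ~4 -- as "confused" as a uniform 4-way choice.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Lower perplexity means the model is, on average, less surprised by the reference text; the paper's claim is that this single number tracks human judgments of chatbot quality surprisingly well.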
Gabriele Sarti Retweeted
Join us for the 5th Workshop on Representation Learning for NLP at @aclmeeting #ACL2020 in Seattle! The first call for papers is out now. Deadline is April 6. More info here: https://sites.google.com/view/repl4nlp2020/call-for-papers
Definitely support the idea of running a shared task for COI detection and reviewer-paper matching. Let the #NLProc community take advantage of its own skills! https://twitter.com/aclmeeting/status/1220435209662935045
Gabriele Sarti Retweeted
The #EVALITA2020 call for tasks is open! The deadline for submitting your proposal is February 7th, 2020! #NLProc @EVALITAcampaign http://www.evalita.it/2020/Call_for_tasks
Gabriele Sarti Retweeted
Talking about #compositionality opens a giant can of worms. One worm: what is it that we compose, and where does it come from? What is it that does the composing, and where does it come from? I've tried to put some thoughts together on #innateness. /1 https://twitter.com/ah__cl/status/1216035206366474240?s=20
Gabriele Sarti Retweeted
2019 was quite the year for Deep Reinforcement Learning. In today's blog post I list my top 10 papers: https://roberttlange.github.io/posts/2019/12/blog-post-9/ What was your favourite paper? Let me know!
Gabriele Sarti Retweeted
We are likely to be overestimating the true capabilities of machine commonsense across all these benchmarks. From: https://www.semanticscholar.org/paper/WINOGRANDE%3A-An-Adversarial-Winograd-Schema-at-Scale-Sakaguchi-Bras/8f7133b2e3851b09d659b91e8faa761ec206413f To appear in #AAAI2020
Gabriele Sarti Retweeted
I often meet research scientists interested in open-sourcing their code/research and asking for advice. Here is a thread for you. First: why should you open-source models along with your paper? Because science is a virtuous circle of knowledge sharing, not a zero-sum competition.
Gabriele Sarti Retweeted
Here's a thread surveying some 'classic' work on #compositionality. Lots of people seem to be discussing this right now, but with partial references to the whole story. My aim is to highlight some of the philosophical and psychological issues in the history of the concept. 1/
Hey @SemanticScholar, can you add a "Most cited" criterion for sorting paper citations? I often use your platform to delve into new topics, and this would help in finding related content that is highly influential!
Gabriele Sarti Retweeted
Very happy to share our latest work accepted at #ICLR2020: we prove that a Self-Attention layer can express any CNN layer. 1/5
Paper: https://openreview.net/pdf?id=HJlnC1rKPB
Interactive website: https://epfml.github.io/attention-cnn/
Code: https://github.com/epfml/attention-cnn
Blog: http://jbcordonnier.com/posts/attention-cnn/
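The core idea of the construction can be sketched numerically: give each attention head a hard one-hot attention pattern on a fixed relative offset and use the matching filter slice as its value projection; summing the heads reproduces a convolution. A toy 1D NumPy illustration (not the authors' code, which handles 2D images and learned relative positional encodings):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_in, d_out = 8, 4, 3               # sequence length, channel dims
X = rng.normal(size=(N, d_in))         # input sequence (1D "image")
W = rng.normal(size=(3, d_in, d_out))  # conv kernel of size 3

# Ordinary 1D convolution (zero-padded), kernel offsets {-1, 0, 1}
conv = np.zeros((N, d_out))
for k, off in enumerate((-1, 0, 1)):
    for i in range(N):
        if 0 <= i + off < N:
            conv[i] += X[i + off] @ W[k]

# Attention version: one head per offset. Each head's attention
# matrix is a hard one-hot shift, and its value projection is the
# corresponding filter slice; summing heads equals the convolution.
attn = np.zeros((N, d_out))
for k, off in enumerate((-1, 0, 1)):
    A = np.zeros((N, N))               # attention probabilities
    for i in range(N):
        if 0 <= i + off < N:
            A[i, i + off] = 1.0        # attend only to position i+off
    attn += A @ X @ W[k]

print(np.allclose(conv, attn))
```

The paper's contribution is showing that real (softmax) self-attention with relative positional encodings can realize exactly these one-hot patterns, so the expressive power of a CNN layer is contained in that of a multi-head attention layer.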
Gabriele Sarti Retweeted
New year, new account! Welcome to the Twitter account of the Italian Association of Computational Linguistics! Follow @AILC_NLP to stay updated on #Italian #NLProc research, projects, tools, and events!
Gabriele Sarti Retweeted
This article, dear @Corriere, is a scandal. It denigrates merit in such a farcical way that one is led to doubt the author's mental lucidity. As an #Erc winner, I am ashamed on your behalf. https://www.corriere.it/opinioni/20_gennaio_04/quei-superfinanziamentiche-sconvolgono-atenei-b9e91602-2f24-11ea-838c-ac55de770e3c.shtml
Gabriele Sarti Retweeted
Today's Corriere features an interesting article, written by an academic from the last century, against merit and against European research funding. However, since it is written in a somewhat archaic language, a translation is needed to read it. [thread of xx tweets]
Gabriele Sarti Retweeted
Artificial intelligence is the technological frontier that promises the most developments and job prospects. In #Trieste, professionals in the field are already being trained at university. Interview with Luca Bortolussi, coordinator of the Data Science degree programme at @UniTrieste
Correction - Stating that XOR cannot be computed by a 1-layer perceptron is false; it is easy to achieve using Gaussian nonlinearities. However, these have proven ineffective for large ANNs. The statement holds only for commonly used activations (ReLU, softmax, etc.).
Big news: single human neurons can compute XOR. With standard activations, the same result requires a 2-layer ANN, and the ability appears unique among all species. A gentle reminder that, despite current outstanding results in deep learning, much work still has to be done! https://twitter.com/jaaanaru/status/1212810741549600768
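The correction above can be checked directly: a single unit with a Gaussian nonlinearity g(z) = exp(-z²) separates XOR with one threshold, because the pre-activation is zero exactly on the XOR-true inputs. A toy sketch with hand-picked (not learned) weights:

```python
import math

def gaussian_unit(x1, x2):
    # One "neuron": weights (1, 1), bias -1, Gaussian activation.
    z = 1.0 * x1 + 1.0 * x2 - 1.0
    return math.exp(-z * z)

# z = 0 on (0,1) and (1,0), giving activation 1.0;
# z = -1 or +1 on (0,0) and (1,1), giving exp(-1) ~ 0.37.
# Thresholding at 0.5 therefore computes XOR with a single unit.
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", gaussian_unit(x1, x2) > 0.5)
```

With a monotone activation such as ReLU or a sigmoid, no single unit can do this, since XOR is not linearly separable; the non-monotone bump of the Gaussian is what makes the one-unit solution possible.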
Gabriele Sarti Retweeted
The 2010s were an eventful decade for NLP! Here are ten shocking developments since 2010, and 13 papers* illustrating them, that have changed the field almost beyond recognition. (*In the spirit of @iamtrask and @FelixHill84, exclusively from other groups :))