Eunsoo Choi kicks off the Machine Reading for Question Answering workshop at #acl2018 #NLProc pic.twitter.com/8tFxE2m8jm
-
.@RobinJia1 presents his, @pranavrajpurkar & @percyliang's work on SQuAD2.0 – the advantages of a QA dataset with unanswerable questions. #ACL2018 pic.twitter.com/FY20sxMJz2
-
Tasks like language generation & QA are held back by low quality automatic & expensive human evaluation. Can you do better by combining them using control variates? Yes, but only a little.
@arunchaganty, Stephen Mussmann, @percyliang #NLProc #ACL2018 https://nlp.stanford.edu/pubs/chaganty2018price.pdf pic.twitter.com/tUvWX2rAbi
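A minimal sketch of the control-variates idea behind this paper (my own illustration, not the authors' code): pair a small set of human judgments with automatic metric scores, and use the metric's mean over a much larger, cheap-to-score set to shrink the variance of the estimated mean quality.

```python
import numpy as np

def control_variate_estimate(human, auto_paired, auto_all):
    """Estimate mean human quality, using an automatic metric as a control variate.

    human       : human judgments on a small, expensively labeled subset
    auto_paired : automatic metric scores on that same subset
    auto_all    : automatic metric scores on a much larger set (cheap to compute)
    """
    human = np.asarray(human, dtype=float)
    auto_paired = np.asarray(auto_paired, dtype=float)
    auto_all = np.asarray(auto_all, dtype=float)

    # Near-optimal coefficient: Cov(human, metric) / Var(metric), estimated on the subset.
    alpha = np.cov(human, auto_paired)[0, 1] / np.var(auto_paired, ddof=1)

    # Plain mean of the human scores, minus a correction that exploits the cheap metric.
    return human.mean() - alpha * (auto_paired.mean() - auto_all.mean())
```

The achievable variance reduction scales with the squared correlation between metric and human judgment, which is why the gain is real but small for today's weakly correlated automatic metrics.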
-
A minor extension of @TDozat & @chrmanning's 2017 arc-factored dependency parser—use sigmoid instead of softmax—beats the previous SoTA for "semantic" dependency parsing (DM, PAS, PSD). Char models and lemmas give you a percent more. #NLProc #ACL2018 https://arxiv.org/abs/1807.01396 pic.twitter.com/Svut3yrGzf
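A toy contrast (made-up scores, not the paper's model) between the two decision rules the tweet mentions: softmax over candidate heads gives each word exactly one head, as in syntactic parsing, while an independent sigmoid per arc lets a word take zero, one, or several heads, which is what semantic dependency graphs need.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                  # tokens; column 0 stands in for ROOT
scores = rng.normal(size=(n, n + 1))   # scores[d, h]: biaffine-style score for arc h -> d

# Tree view (syntax): softmax over heads, each dependent commits to a single head.
head_probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
tree_heads = head_probs.argmax(axis=1)            # one head index per dependent

# Graph view (semantics): independent sigmoid per cell, any subset of arcs may fire.
arc_probs = 1.0 / (1.0 + np.exp(-scores))
graph_arcs = arc_probs > 0.5                      # boolean adjacency matrix
```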
-
Sharp Nearby, Fuzzy Far Away: An LSTM neural language model remembers context out to about 200 words, but word order only for about 50 words, & more results. At #ACL2018 by @ukhndlwl, @hhexiy, Peng Qi & @jurafsky. https://nlp.stanford.edu/pubs/khandelwal2018lm.pdf #NLProc pic.twitter.com/EJfA1GTkrm
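The measurement behind those numbers is, roughly, to perturb the far-away context and check how far back the damage still shows up in the loss. A sketch of that idea, where `nll(model, context, target)` is a hypothetical helper that scores the target word under the language model:

```python
import random

def loss_increase(model, tokens, target_idx, distance, nll):
    """Shuffle all context words farther than `distance` tokens from the target
    and report how much the target's negative log-likelihood rises.
    `nll(model, context_tokens, target_token)` is a hypothetical scoring helper."""
    context, target = tokens[:target_idx], tokens[target_idx]
    far, near = context[:-distance], context[-distance:]

    shuffled_far = list(far)
    random.shuffle(shuffled_far)

    # Averaged over many targets, this gap falls to ~0 once `distance` exceeds
    # the model's effective memory (about 200 tokens in the paper's experiments).
    return nll(model, shuffled_far + near, target) - nll(model, context, target)
```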
-
How can you teach a machine learning system with human language rather than "labels"? With a semantic parser & labeling functions! New #ACL2018 paper by @bradenjhancock, @paroma_varma, @stephtwang, @bringmartino, @percyliang & Chris Ré @HazyResearch https://nlp.stanford.edu/pubs/hancock2018babble.pdf #NLProc pic.twitter.com/VUsUdF2RjM
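Very roughly, each natural-language explanation is parsed into an executable labeling function. A hand-written sketch of the kind of function that might come out of the parser (the field names on `example` are hypothetical, and this is my illustration, not the paper's code):

```python
# Explanation: "Label the pair as SPOUSES because the word 'wife' or 'husband'
# appears between the two person mentions."
def lf_spouse_keywords(example):
    """A labeling function: return a noisy label (1 for SPOUSES) or 0 to abstain."""
    between = example["tokens"][example["person1_end"]:example["person2_start"]]
    if {"wife", "husband"} & {t.lower() for t in between}:
        return 1
    return 0   # abstain when the explanation doesn't apply

# Many such noisy functions, produced from explanations, are then aggregated and
# denoised to label a large training set for the end model.
```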
-
CoreNLP doesn’t do too badly (though using a hashtag for a person is a bit too unusual…): http://corenlp.run/ pic.twitter.com/S8mNoSB2GD
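For anyone who wants to reproduce this outside the demo page, here is a minimal sketch of querying a locally running CoreNLP server for NER (assumes a server is listening on port 9000; treat the snippet as an untested illustration):

```python
import json
import requests

# Assumes a Stanford CoreNLP server has been started locally on port 9000.
text = "Chris Manning teaches NLP at Stanford University in California."
props = {"annotators": "tokenize,ssplit,pos,ner", "outputFormat": "json"}

resp = requests.post("http://localhost:9000/",
                     params={"properties": json.dumps(props)},
                     data=text.encode("utf-8"))

for sentence in resp.json()["sentences"]:
    for tok in sentence["tokens"]:
        if tok["ner"] != "O":
            print(tok["word"], tok["ner"])
```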
-
Since 2016, SQuAD has been the key textual question answering benchmark, used by top AI groups & featured in AI Index—https://aiindex.org/ —Today @pranavrajpurkar, Robin Jia & @percyliang release SQuAD2.0 with 50K unanswerable Qs to test understanding: https://stanford-qa.com/ pic.twitter.com/gTCvvFVcsm
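The released files keep the SQuAD 1.1 JSON layout and flag the new questions with `is_impossible`; a small sketch for counting them (local file name assumed):

```python
import json

# Assumes the dev set from https://stanford-qa.com/ has been downloaded locally.
with open("dev-v2.0.json") as f:
    squad = json.load(f)

total = unanswerable = 0
for article in squad["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            total += 1
            unanswerable += qa["is_impossible"]   # True for the new no-answer questions

print(f"{unanswerable}/{total} questions are unanswerable")
```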
-
Congratulations to @mattthemathman and colleagues at @allenai_org/@uwnlp on the @NAACLHLT best paper award for ELMo! In some ways an easy choice, given the huge impact the paper has already had—another speed-of-progress win from preprints. #NLProc https://allennlp.org/elmo pic.twitter.com/DoRaaMmIjS
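For readers who want to try ELMo, the AllenNLP library of that era shipped a small embedder; a rough, hedged sketch assuming that package is installed:

```python
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder()                    # downloads the pretrained weights on first use
tokens = ["Deep", "contextualized", "word", "representations"]
vectors = elmo.embed_sentence(tokens)    # numpy array: (3 layers, len(tokens), 1024)

# A simple recipe: average (or learn a weighted mix of) the three layers
# to get one contextual vector per token.
token_embeddings = vectors.mean(axis=0)  # (len(tokens), 1024)
```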
-
Can computers detect mental illness from human language use? Automatic Detection of Incoherent Speech for Diagnosing Schizophrenia by @dan_iter, Jong H. Yoon & @jurafsky. CL & Clinical Psych #NAACL2018 workshop https://nlp.stanford.edu/pubs/iter2018shizophrenia.pdf pic.twitter.com/MxtdQnylG6
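The core signal is semantic coherence between consecutive utterances. A simplified illustration (not the paper's exact feature set), where `word_vec` is a hypothetical word-vector lookup:

```python
import numpy as np

def coherence_score(sentences, word_vec):
    """Mean cosine similarity between embeddings of adjacent sentences.
    `word_vec(token) -> np.ndarray` is a hypothetical lookup; each sentence is
    embedded as the average of its word vectors."""
    def embed(sentence):
        return np.mean([word_vec(tok) for tok in sentence.split()], axis=0)

    embs = [embed(s) for s in sentences]
    sims = [np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            for a, b in zip(embs, embs[1:])]
    return float(np.mean(sims))   # low values suggest tangential, incoherent speech
```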
-
How do you get a good corpus for grammar correction via translation? By Noising and Denoising Natural Language. @ziangx1e, Guillaume Genthial, @stan_xie, @AndrewYNg & @jurafsky #NLProc #NAACL2018 https://nlp.stanford.edu/pubs/xie2018denoising.pdf pic.twitter.com/bp6HnsTsL5
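The "noising" half of the recipe manufactures a synthetic parallel corpus by corrupting clean sentences and training a model to translate the noise away. The paper samples noise from a learned clean-to-noisy model with noised beam search; the toy rule-based corrupter below is only meant to make the data-generation idea concrete:

```python
import random

def add_noise(tokens, p_drop=0.05, p_swap=0.05, p_confuse=0.1):
    """Corrupt a clean sentence to create a synthetic (noisy, clean) training pair.
    A toy rule-based noiser; the paper instead uses a learned noising model."""
    confusions = {"their": "there", "your": "you're", "its": "it's"}
    out = []
    for tok in tokens:
        if random.random() < p_drop:
            continue                                        # delete a word
        if tok in confusions and random.random() < p_confuse:
            tok = confusions[tok]                           # common confusion error
        out.append(tok)
    for i in range(len(out) - 1):
        if random.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]         # swap adjacent words
    return out

clean = "I think that their answer is correct".split()
pair = (add_noise(clean), clean)   # train the correction (denoising) model on many such pairs
```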
-
Interpretable social science: Find the words that really indicate something, avoiding topic correlations and confounds—Reid Pryzant, Kelly Shen, @jurafsky, Stefan Wager #NLProc #NAACL2018 https://nlp.stanford.edu/pubs/pryzant2018lexicon.pdf pic.twitter.com/x0QaPur9py
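One simple way to picture "avoiding confounds" (a back-of-the-envelope illustration, not necessarily the paper's estimator, which is learned): first explain away the known confounds, then score each word by how well it still tracks what is left of the outcome.

```python
import numpy as np

def deconfounded_word_scores(X_words, confounds, y):
    """X_words   : (n_docs, n_words) binary word-occurrence matrix
    confounds : (n_docs, n_conf) known covariates (e.g., topic, author)
    y         : (n_docs,) outcome of interest
    Returns, per word, its correlation with the part of y the confounds cannot explain."""
    # Linear regression of y on the confounds; keep the residuals.
    C = np.column_stack([np.ones(len(y)), confounds])
    beta, *_ = np.linalg.lstsq(C, y, rcond=None)
    residual = y - C @ beta

    # Words that still track the residual carry signal beyond the confounds.
    Xc = X_words - X_words.mean(axis=0)
    rc = residual - residual.mean()
    return (Xc.T @ rc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(rc) + 1e-12)
```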
-
Delete, Retrieve, Generate: A simple approach to doing neural style transfer on text, altering text for sentiment or style—Juncen Li, Robin Jia, He He & @percyliang #NAACL2018 https://nlp.stanford.edu/pubs/li2018transfer.pdf pic.twitter.com/xQHStGoVg9
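The "Delete" step is the easiest to picture: n-grams that are far more frequent in one style corpus than the other are treated as attribute markers and stripped out, leaving content words for the retrieve and generate steps. A rough sketch of that salience rule (unigrams only, my simplification):

```python
from collections import Counter

def attribute_markers(pos_sents, neg_sents, threshold=15.0, smoothing=1.0):
    """Words whose smoothed frequency ratio between the two corpora exceeds
    `threshold` are treated as sentiment/style markers."""
    pos = Counter(w for s in pos_sents for w in s.split())
    neg = Counter(w for s in neg_sents for w in s.split())
    return {w for w in pos
            if (pos[w] + smoothing) / (neg[w] + smoothing) >= threshold}

def delete_markers(sentence, markers):
    """The 'Delete' step: keep only content words; retrieval and generation then
    rewrite the sentence with markers of the target style."""
    return " ".join(w for w in sentence.split() if w not in markers)
```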
-
.@TDozat successfully defended his dissertation on Graph-Based Biaffine Dependency Parsing today. Congratulations, Tim! pic.twitter.com/C1hpPOiidO
-
Nice article on methods for using distributed representations to capture graph structure in @gradientpub, a new, accessible magazine by @Stanford AI students. The first methods drew from #NLProc but maybe with new GCN methods, we're borrowing back. https://thegradient.pub/structure-learning/ pic.twitter.com/Hnz7Ofh1JD
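For readers new to the GCN methods mentioned here, the core of a graph convolution is one line of linear algebra, H' = σ(Â H W), with Â a normalized adjacency matrix with self-loops (Kipf & Welling style). A minimal numpy sketch:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A + I) D^-1/2 · H · W)."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)           # ReLU nonlinearity

# Toy graph: 4 nodes, 3-dim input features, 2-dim output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, H, W).shape)   # (4, 2): new feature vector per node
```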
-
Start of @stanfordnlp summer conference papers: @sebschu, @chrmanning & @JoakimNivre on Sentences with Gapping: Parsing and Reconstructing Elided Predicates. A task where previous work got 0% – see section 4.4. #NLProc https://nlp.stanford.edu/pubs/schuster2018gapping.pdf pic.twitter.com/eAHMQeiOhX
-
Little detail from #GoogleIO for #NLProc geeks, first announced last year: use of named entity recognition to power smart text selection. Okay, also text generation, text-to-speech, and more conversational dialog, but let's applaud small affordances. https://www.youtube.com/watch?v=oh5gnvrJ658
-
You know how to do NLP. But do you consider fairness and ethical implications in your #NLProc research? Learn the latest on Socially Responsible NLP from Yulia Tsvetkov, Vinod Prabhakaran and @rfpvjr—Jun 1 afternoon @NAACLHLT tutorial. https://sites.google.com/view/srnlp pic.twitter.com/O3raONI0RY