Schedule of our @acl2018 NMT workshop has been updated! Come tomorrow (room 211) to hear about: efficient NMT, discourse, sparsity, and controllable language generation. https://sites.google.com/site/wnmt18/schedule @gneubig @alexandrabirch1 pic.twitter.com/5Ui3tS3mWb
-
Our QANet model continues to march strong with its #1 position on SQuAD! If you're interested in knowing more about QANet, check out the slides I recently prepared https://goo.gl/RQpyTr & chat with me and @AdamsYu at #ACL2018. pic.twitter.com/NwDz9mZYrb
-
QANet is also #1 on DAWNBench for fast training time, barring some mismatches on frameworks and hardware. Results courtesy of @dmdohan. https://dawn.cs.stanford.edu/benchmark/#squad pic.twitter.com/8OJaNK2nc2
-
"Translation models naturally make your text datasets bigger" is also one of the key ideas hidden in our paper :) Data augmentation (DA) is popular in speech and vision, but not NLP. Here, we show how DA can help an NLP task.pic.twitter.com/jNJje7nAyN
-
Our QANet architecture is the deepest NLP model ever built in the literature (I believe; previous models < 30 layers). By designing a homogeneous encoder architecture of fast layers, we can stack 130+ layers. pic.twitter.com/Jz4FjTj8Go
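A toy illustration of why a homogeneous design makes extreme depth easy to express: if every block maps d-dimensional inputs to d-dimensional outputs with a residual connection, identical blocks compose to any depth. The NumPy block below is an assumed stand-in (a single dense transform), not QANet's actual convolution/self-attention block.

```python
import numpy as np

# Toy illustration of stacking homogeneous residual blocks: every block maps
# d-dim inputs to d-dim outputs, so identical blocks compose to any depth.
# QANet's real blocks use depthwise convolutions and self-attention; a single
# dense transform is used here purely to show the stacking.

rng = np.random.default_rng(0)
d, depth = 64, 130  # hidden size and number of blocks (both arbitrary here)

weights = [rng.normal(scale=1.0 / np.sqrt(d), size=(d, d)) for _ in range(depth)]

def block(x, w):
    # The residual connection keeps the output shape equal to the input shape.
    return x + np.tanh(x @ w)

x = rng.normal(size=(10, d))  # e.g. representations of 10 tokens
for w in weights:
    x = block(x, w)
print(x.shape)  # (10, 64): unchanged after 130 blocks
```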
-
Happy to announce our QANet models, #1 on the @stanfordnlp question answering dataset (SQuAD). 3 ideas: deep & fast arch (130+ layers), data augmentation, transfer learning. Joint work w/ @AdamsYu @dmdohan @oahziur, Quoc Le, et al. See our #ICLR2018 paper https://openreview.net/pdf?id=B14TlG-RW pic.twitter.com/wSdKHp4nCt
-
Excited to share a new work by #GoogleAI resident @thtrieu_ (with @iamandrewdai, me, & Quoc Le) on training very long RNNs (up to 16K long). See paper for extreme cases of zero or little backprop on RNNs ;) https://arxiv.org/pdf/1803.00144.pdf pic.twitter.com/kz3M0YfilB
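For intuition only, here is a generic truncated-backprop-through-time sketch in PyTorch showing one way "little backprop" on a very long sequence can look: gradients flow only inside short chunks and the hidden state is detached between them. This is a standard pattern under my own assumptions (toy data, arbitrary sizes), not the specific auxiliary-loss method of the linked paper.

```python
import torch
import torch.nn as nn

# Generic truncated-backprop-through-time sketch: the long sequence is
# processed in short chunks, gradients flow only within a chunk, and the
# LSTM state is detached between chunks. Toy data and sizes are arbitrary;
# this is NOT the auxiliary-loss method of the linked paper.

seq_len, chunk, d = 16_000, 100, 32
rnn = nn.LSTM(input_size=1, hidden_size=d, batch_first=True)
head = nn.Linear(d, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(1, seq_len, 1)  # toy input sequence
y = torch.randn(1, seq_len, 1)  # toy regression targets
state = None
for start in range(0, seq_len, chunk):
    xs, ys = x[:, start:start + chunk], y[:, start:start + chunk]
    out, state = rnn(xs, state)
    loss = nn.functional.mse_loss(head(out), ys)
    opt.zero_grad()
    loss.backward()  # backprop is confined to this chunk
    opt.step()
    state = tuple(s.detach() for s in state)  # cut the graph between chunks
```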
-
Great #ML4H talk by @drfeifei on "the dark spaces of healthcare": hand hygiene compliance, ICU optimization & senior care with efficiency & privacy in mind. Looking forward to the full slides! #NIPS2017 pic.twitter.com/FrBmiWuBxP
-
#GoogleAI machine learned cookie at #NIPS2017 with recipe attached :) pic.twitter.com/qsvG3bZolf
-
Very nice #NIPS2017 talk by @pabbeel on progress and challenges for the future of robotics! pic.twitter.com/wiN5KMAFYm
-
#NIPS2017 Tutorial on "Deep probabilistic modeling with Gaussian Processes" by @lawrennd. Slides: http://inverseprobability.com/talks/lawrence-nips17/deep-probabilistic-modelling-with-gaussian-processes.html pic.twitter.com/29CykQdRMH
-
omg, registration line at #NIPS2017 ... Time to read a paper :) pic.twitter.com/NS6ufOauTW
-
Nice work on a neural editor by @ke1vin @stanfordnlp: manipulate sentences through latent vectors. Key premise: there exist related sentences in large corpora. pic.twitter.com/ifpKKoGKiH
-
A fun picture of the Machine Intelligence landscape by @shivon, given the zillions of AI companies nowadays :) https://www.oreilly.com/ideas/the-current-state-of-machine-intelligence-3-0 pic.twitter.com/Jtn4t1Gwdr
-
Interested in coming to #ACL2017? Submit to our #NMT workshop; great speakers will talk about the future of NMT! @gneubig https://sites.google.com/site/acl17nmt/ pic.twitter.com/Gv2XI3lsjD
-
Proud to have a talented @GoogleBrain member from Iran! To quote Ashish: "No immigrants, no AI". I think it's true to some extent. pic.twitter.com/ZyljOygWua
-
Marching to support affected Googlers, support victims of the #MuslimBan, and support democracy! pic.twitter.com/bYRryI5S2s – at Googleplex
-
@kchonyc @AndrewYNg In fact, it dates back to Allen '87 & Chrisman '92, as in our NMT tutorial :) @chrmanning https://sites.google.com/site/acl16nmt pic.twitter.com/5bzb2Qlccn
-
and I have just spotted two Waymo cars in Mountain View :) pic.twitter.com/w7mSytEk0K