Wow! That's a compelling argument to start working on NNs if you haven't yet (graph #2 much more so than #1)
-
-
-
There's at least a compelling argument that something interesting is happening here. Maybe we'll learn something from these models that deepens our understanding of the task and transfers back to traditional approaches. Or we'll get stronger traditional runs next year.
- 1 more reply
New conversation -
-
-
Shameless plug: The Deep Learning track will be back @
#TREC2020. If you are interested in DL for search, please participate. 2019 website: https://microsoft.github.io/TREC-2019-Deep-Learning/ … Also, if you are new to the area, check out... Tutorial slides: http://bit.ly/deeplearning4search-fire2019 … Book: http://bit.ly/fntir-neural
-
-
-
What if BM25, but with SentencePiece / WordPiece / BPE tokenization? I haven't read the paper(s) yet; has someone tried that?
-
I think this is definitely worth trying. But there's evidence that much of the benefit from neural models comes from learned text representations for matching. In traditional IR, the analogous direction would be RM3/PRF, so maybe a fresh look at that is worth it.
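The idea in the question above is easy to prototype: score documents with standard BM25, but feed it subword tokens instead of whitespace words. The sketch below implements the usual BM25 formula from scratch; the `toy_subword_tokenize` function is a hypothetical stand-in for a trained SentencePiece/WordPiece/BPE model, which a real experiment would use instead.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.2, b=0.75):
    """Score each tokenized document against a tokenized query with BM25.

    `docs_tokens` is a list of token lists; the tokens can come from any
    tokenizer -- whitespace words or learned subwords (SentencePiece/BPE).
    """
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    # Document frequency of each token across the collection.
    df = Counter()
    for d in docs_tokens:
        df.update(set(d))
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Hypothetical toy "subword" tokenizer: fixed 3-character chunks.
# A real run would call a trained SentencePiece/BPE model here instead.
def toy_subword_tokenize(text):
    return [text[i:i + 3] for i in range(0, len(text), 3)]

docs = ["deep learning for search",
        "classic bm25 ranking",
        "neural ranking models"]
docs_tokens = [toy_subword_tokenize(t) for t in docs]
query_tokens = toy_subword_tokenize("neural ranking")
scores = bm25_scores(query_tokens, docs_tokens)
```

With subword chunks, the query still matches the third document most strongly through shared fragments, even though the scoring function itself is unchanged; only the vocabulary differs from classic BM25.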
End of conversation
New conversation -
-
-
Thanks! Can you please post a link to the paper?
-
Slide is from http://bit.ly/deeplearning4search-fire2019 … I've been updating plot 1 over the last few years; an earlier version was published in http://bit.ly/fntir-neural . Plot 2 is from the TREC 2019 Deep Learning track (https://microsoft.github.io/TREC-2019-Deep-Learning/ …) overview paper, which will be publicly available in February-ish.
- 1 more reply
New conversation -