I think you accidentally tagged the wrong person, should be @sandeep1337
Thanks, apologies to all!
End of conversation
New conversation -
It looks like a big part of the "generated" abstract is almost verbatim passages from the end of the introduction (as @tchakra2 pointed out). Maybe the network "learned" that lazy abstracts involve repeating the end of a well-written introduction? https://arxiv.org/pdf/1909.03186.pdf pic.twitter.com/XzlYUJR6Qz
- 3 more replies
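The "almost verbatim" claim above is easy to check mechanically. A minimal sketch (the function name and the 4-gram choice are mine, not from the thread) that measures how much of an abstract is lifted word-for-word from an introduction:

```python
def ngram_overlap(abstract: str, intro: str, n: int = 4) -> float:
    """Fraction of the abstract's n-grams that also appear in the intro."""
    def ngrams(text: str) -> set:
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    abs_grams = ngrams(abstract)
    if not abs_grams:
        return 0.0
    return len(abs_grams & ngrams(intro)) / len(abs_grams)

# Toy example: an "abstract" that copies its opening clause from the intro
intro = "we propose a new summarization model based on transformers and show strong results"
abstract = "we propose a new summarization model that is fast"
print(ngram_overlap(abstract, intro))  # → 0.5
```

A score near 1.0 would support the observation that the "generated" abstract is mostly copied from the introduction.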
New conversation -
The abstract came from _inside_ the research.
My dream come true!
- 1 more reply
New conversation -
It's a pity it doesn't work the other way around (abstract->paper)

- 1 more reply
New conversation -
And here it is in @hen_str, @sebgehr, & @srush_nlp's GLTR: (I imagine that methods of conditioning the GLTR model on relevant information before tasking it to generate the visual footprint are coming?) http://gltr.io/dist/index.html pic.twitter.com/tWTBjUE8WR
You technically can do that already by adding context before the beginning of the text :) If the authors provide an API to get the probs out of the model, we could even plug it in directly (instead of GPT-2).
- 1 more reply
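The conditioning trick described in the reply — prepending context before the text, then reading off per-token probabilities — can be sketched without GPT-2 at all. Below, a toy bigram model stands in for the language model purely for illustration; none of these names are GLTR's actual API, and GLTR itself colors tokens by their rank under a large LM rather than this simplistic count-based one:

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count bigram successors in a whitespace-tokenized corpus."""
    toks = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, cur in zip(toks, toks[1:]):
        counts[prev][cur] += 1
    return counts

def token_probs(model, text: str, context: str = ""):
    """P(token | previous token) for each token of `text`.

    If `context` is given, it is prepended before scoring, so the first
    tokens of `text` are conditioned on it — the trick from the reply.
    """
    toks = ((context + " " + text) if context else text).lower().split()
    start = len(context.split()) if context else 1  # skip tokens with no predecessor
    probs = []
    for i in range(max(start, 1), len(toks)):
        prev, cur = toks[i - 1], toks[i]
        total = sum(model[prev].values())
        probs.append(model[prev][cur] / total if total else 0.0)
    return probs

model = train_bigram(
    "the abstract repeats the introduction and the abstract repeats the conclusion"
)
print(token_probs(model, "abstract repeats", context="the"))  # → [0.5, 1.0]
```

With a real LM the idea is the same: feed `context + text` through the model and keep only the probabilities for the positions belonging to `text`. The reply's further suggestion — an API exposing the probabilities directly — would let GLTR consume any model's scores instead of GPT-2's.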
New conversation -