Yessssss. Excited to see if the code is usable for improving poetry generation.
Surely if you can do this, you can use another transformer as an adversarial module to clean up some of the artifacts (repetition), right? No reason these supervision signals have to come from a human.
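The suggestion above — replacing the human preference signal with an automatic one — can be sketched minimally. Everything here is a hypothetical illustration, not anything from the paper: a toy n-gram repetition scorer stands in for the proposed adversarial transformer, and `automatic_reward` plays the role of the machine-generated supervision signal.

```python
from collections import Counter

def repetition_score(text: str, n: int = 3) -> float:
    """Fraction of n-grams that are repeats of an earlier n-gram.

    Higher means more repetitive; 0.0 means every n-gram is unique.
    """
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values())
    return repeated / len(ngrams)

def automatic_reward(text: str) -> float:
    """A machine-generated supervision signal: reward non-repetitive text.

    In the adversarial-module idea, this role would be played by a second
    transformer scoring the generator's samples instead of a human rater.
    """
    return 1.0 - repetition_score(text)

# A degenerate, looping sample should score worse than a varied one.
looping = "the cat sat on the mat the cat sat on the mat"
varied = "the cat sat quietly on a warm mat near the door"
assert automatic_reward(looping) < automatic_reward(varied)
```

The reward is differentiable-free and cheap, so it could be plugged into the same RL fine-tuning loop the paper uses for human preference scores; the open question (which the thread is raising) is whether such automatic signals avoid the artifacts they measure or just get gamed.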
7 more replies
New conversation
Such a cool update to GPT-2. What I like about this is the qualitative examples showing improvement. So much better than “we’ve beat the SOTA by 0.01%”
Just adding "safety" to this tweet and hope that's enough to fool everyone into assuming we're up to something good.
Well, it's in their paper too; they have been outspoken about the dangers of large language models that can imitate humans. 'Safety' also refers to biases in the text generation models following incompletely specified objective functions; here they phrase it in terms of rules, like "don't lie".
End of conversation
New conversation
Will you be sharing any examples of this phenomenon? pic.twitter.com/EGVQRTFAPe
Is there a fast way to try these advances online? Summarization seems to be pretty easy to test on a website
I can't wait for the day it can write me stories about whatever I ask it to.
Who wrote this tweet? :)
1 more reply

