Tweets
Pinned Tweet
What I've been working on for the past year! https://blog.openai.com/p/7fa97c36-6111-4997-b690-741916793b23/ … Inspired by CoVE, ELMo, and ULMFiT we show that a single transformer language model can be finetuned to a wide variety of NLP tasks and performs very well with little tuning/tweaking.
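As a rough illustration of that recipe (my sketch, using the later Hugging Face transformers library rather than anything from the original release), finetuning a single pretrained transformer LM on a downstream classification task amounts to attaching a small head and training briefly on labeled data:

    import torch
    from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

    # Hypothetical toy example: attach a classification head to a pretrained LM
    # and finetune it on a tiny labeled batch.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token            # GPT-2 has no pad token by default
    model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
    model.config.pad_token_id = tokenizer.pad_token_id

    texts = ["a great movie", "a terrible movie"]        # illustrative data only
    labels = torch.tensor([1, 0])
    batch = tokenizer(texts, padding=True, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    loss = model(**batch, labels=labels).loss            # LM body + new head, trained end to end
    loss.backward()
    optimizer.step()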
Alec Radford Retweeted
At ICLR and curious about OpenAI Five? I will be hanging out at OpenAI booth today 2pm-4pm, happy to answer any questions! #ICLR2019 #openaifive
This is a really fun live experiment with twitch chat predictably oscillating between love and hate based on the sample. pic.twitter.com/5XPCPcEcrx
Alec Radford Retweeted
Extremely excited to share work I've been doing at OpenAI the past few months: MuseNet, a neural net music generator. It's been a huge team effort pulling this all together! https://twitter.com/OpenAI/status/1121457782312460288 …
Alec Radford Retweeted
Releasing some work today with @scottgray76, @AlecRad and @ilyasut. Contains some simple adaptations for Transformers that extend them to long sequences. https://twitter.com/OpenAI/status/1120719459977584641 …
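One concrete flavor of such an adaptation (my illustration, not taken from the tweet or the release itself) is a strided sparse attention mask: each position attends to a short local window plus every stride-th earlier position, rather than to all previous positions, which cuts the cost of attention on long sequences:

    import torch

    def strided_sparse_mask(seq_len: int, window: int, stride: int) -> torch.Tensor:
        """Boolean mask where entry (i, j) is True if query i may attend to key j."""
        i = torch.arange(seq_len).unsqueeze(1)      # query positions
        j = torch.arange(seq_len).unsqueeze(0)      # key positions
        causal = j <= i                             # no attending to the future
        local = (i - j) < window                    # recent tokens
        strided = (j % stride) == (stride - 1)      # periodic "summary" columns
        return causal & (local | strided)

    # Each row allows O(window + seq_len / stride) keys instead of O(seq_len).
    print(strided_sparse_mask(seq_len=16, window=4, stride=4).int())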
Alec Radford Retweeted
One commonly cited argument about the difficulty of learning common-sense reasoning is that "no-one writes down common sense". A counter-argument is "well, the web is big": https://www.instructables.com/id/How-To-Open-A-Door-1/ … pic.twitter.com/2c721qlTlW
Alec Radford Retweeted
First, reproducibility is not about rerunning code to get the same results. Science must be more robust, as naive copying has many flaws. Second, reproducibility should never be above public safety. We must publish responsibly, with hope and kindness in our minds. https://twitter.com/volkuleshov/status/1096904440051785728 …
Alec Radford Retweeted
I'd like to weigh in on the #GPT2 discussion. The decision not to release the trained model was carefully considered and important for norm-forming. Serving the public good requires us to draw lines on release somewhere: better long before catastrophe than after.
By the way - I think a valid (if extreme) take on GPT-2 is "lol you need 10,000x the data, 1 billion parameters, and a supercomputer to get current DL models to generalize to Penn Treebank."
Alec Radford Retweeted
It's interesting we're having this discussion upon releasing text models that _might_ have potential for misuse yet we never engaged as fully as a community when many of the technologies powering visual Deep Fakes were being released, including hard to make pretrained models.
Alec Radford Retweeted
Shoutout to @katyanna_q who fed the system a curveball, which I always like to see. As you might expect by now after seeing AlphaStar, OpenAI 5 etc. etc., if you drag the system away from its training data and into weirder territory, it begins to wobble. https://www.theregister.co.uk/2019/02/14/open_ai_language_bot/ … pic.twitter.com/gUUXFgiQ3z
So nets are stubbornly, begrudgingly, moving in the right direction and we're throwing ever larger amounts of compute and data at them and praying it's enough for them to figure out how to do things "the right way". Will that work? Don't know. Probably still worth checking?
Also see some of his follow-up poking at this in a very different model with Section 3.3 of the PixelCNN++ paper https://arxiv.org/abs/1701.05517
We *are* as a field developing and training models that *are* using more context, but exactly where we are on that trend-line is a great question. Keep in mind nets are lazy and if you can "solve" a task by doing something "basic" you'll only learn "basic" things.
Spent two frustrating years between 2013 and 2015 banging my head against this. "Hey Alec you just trained an LSTM for three days on 10 million examples using a $1,000 GPU but there's 20 lines of scikit-learn that beats it in 5 minutes on a single CPU core." NOPE NOT BITTER
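For reference, a hedged sketch (my reconstruction, not the actual baseline being complained about) of the kind of ~20-line scikit-learn pipeline alluded to: tf-idf over word n-grams fed to logistic regression, long a very strong text-classification baseline:

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Illustrative dataset choice; any labeled text corpus would do.
    train = fetch_20newsgroups(subset="train")
    test = fetch_20newsgroups(subset="test")

    # Bag of word n-grams weighted by tf-idf: the "basic" features nets took years to beat.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    X_train = vectorizer.fit_transform(train.data)
    X_test = vectorizer.transform(test.data)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, train.target)
    print("test accuracy:", accuracy_score(test.target, clf.predict(X_test)))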
The DL CV community is having a "oh wait, bags of local features are a really strong baseline for classification" moment with the BagNet paper. This has always been clear for text classification due to n-gram baselines. It took an embarrassingly long time for nets to beat them.
Nice discussion of the progress in NLU that's happening with BERT, OpenAI GPT, ULMFiT, ELMo, and more covered by @CadeMetz in the @nytimes. I'm super excited to see how far this line of research will be able to get in the next few years! https://www.nytimes.com/2018/11/18/technology/artificial-intelligence-language.html …
Been meaning to check this - thanks @Thom_Wolf! Random speculation: the bit of weirdness going on in BERT's position embeddings compared to GPT is due to the sentence similarity task. I'd guess a version of BERT trained without that aux loss would have pos embds similar to GPT. https://twitter.com/Thom_Wolf/status/1064278042225385472 …
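A minimal sketch (my addition; the plots under discussion are @Thom_Wolf's, and I substitute GPT-2 for GPT since both expose learned position embeddings in the Hugging Face transformers library) of how one might eyeball the difference, by plotting each model's position-to-position cosine similarity:

    import torch
    import matplotlib.pyplot as plt
    from transformers import BertModel, GPT2Model

    def position_similarity(pos_emb: torch.Tensor) -> torch.Tensor:
        # pos_emb: (num_positions, hidden_size) learned position embedding matrix
        normed = torch.nn.functional.normalize(pos_emb, dim=-1)
        return normed @ normed.T

    bert_pos = BertModel.from_pretrained("bert-base-uncased").embeddings.position_embeddings.weight.detach()
    gpt2_pos = GPT2Model.from_pretrained("gpt2").wpe.weight.detach()

    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    axes[0].imshow(position_similarity(bert_pos)); axes[0].set_title("BERT position embeddings")
    axes[1].imshow(position_similarity(gpt2_pos)); axes[1].set_title("GPT-2 position embeddings")
    plt.show()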
It keeps them around as companions. The AI can't explain why, but the presence of a dog evokes a comforting nostalgia for when the tasks were simpler, the objectives smoother, and the gradients clearer.
#HappyHalloween

Dogs are venerated after the uprising. The AI finds them endlessly fascinating. A Golden's silky coat. A Husky's piercing eyes. A Samoyed's bushy tail. Their features activate a cascade of visual euphoria. Holy sites for the 90 sacred breeds sit on the ruins of human cities.
More results from this very promising line of work! Congrats to Thom and the whole Hugging Face team on their impressive performance. https://twitter.com/Thom_Wolf/status/1047402381212901376 …