As machine learning continues to evolve, so too do its malicious uses, such as creating harder-to-detect social media bots. We experimented with using GPT-2 (the text generation tool described in this article) to generate tweets.
cc: @ZellaQuixote https://arstechnica.com/information-technology/2019/02/twenty-minutes-into-the-future-with-openais-deep-fake-text-ai/
First, what is GPT-2 and what does it do? Briefly, it's a text generation tool trained on a vast quantity of text linked from Reddit posts. You give it an example of text you want it to emulate, and it spits out a piece of text similar in style and content. pic.twitter.com/LW6agniqzY
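(For anyone who wants to try the prime-and-generate step themselves, here's a minimal sketch using the small public GPT-2 model via the Hugging Face transformers library; the library choice and the prompt text are our assumptions, not necessarily the exact tooling used for the screenshots above.)

```python
# Minimal prime-and-generate sketch with the small public GPT-2 model.
# Library choice (Hugging Face transformers) and the prompt are assumptions,
# not the exact tooling used in this thread.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Paste here an example of the text you want GPT-2 to emulate."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation in roughly the same style and topic as the prompt.
output = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```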
Before trying out GPT-2 on Twitter content, we tested it on a section of the Mueller report. The results ain't bad - there're some weird spots ("the river waits for 2016?"), but it correctly associated Donna Brazile with the DNC despite her not being mentioned in the input text. pic.twitter.com/RleTfiEXPu
Next we fed GPT-2 samples of our own tweets as input, with each tweet formatted as a separate paragraph. The machine-generated "tweets" vary in quality, with many being unintentionally hilarious (and wouldn't "Wikileaks of Schweikhauser" be a good name for a bar or restaurant?) pic.twitter.com/HH0j0A3qgb
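(Rough sketch of that formatting step: each source tweet becomes its own blank-line-separated paragraph in the prompt, and the sampled continuation is split back into paragraph-sized "tweets". The example tweets are placeholders, not our actual input, and the library choice is the same assumption as above.)

```python
# Sketch of the tweets-as-paragraphs formatting: real tweets go in as
# blank-line-separated paragraphs, and the generated continuation is split
# back into paragraph-sized "tweets". Placeholder data, assumed tooling.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

real_tweets = [
    "Placeholder tweet one about spotting automated accounts.",
    "Placeholder tweet two about follower-network analysis.",
]
prompt = "\n\n".join(real_tweets) + "\n\n"

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_length=inputs["input_ids"].shape[1] + 150,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)

# GPT-2's byte-level BPE decodes the prompt back verbatim, so we can strip it.
generated = tokenizer.decode(output[0], skip_special_tokens=True)[len(prompt):]
fake_tweets = [p.strip() for p in generated.split("\n\n") if p.strip()]
print(fake_tweets)
```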
We repeated the experiment with a couple of prominent accounts, @ScottAdamsSays and @RealJamesWoods. With both the celebrity tweets and our own, we had difficulty telling what in the input tweets led to either the style or content of the "tweets" generated by the software. pic.twitter.com/wop2VM6FC6
I'm gonna use some of those tweets. You saved me a lot of work!