With these systems as our foundation, we may extend their capabilities beyond content that is illegal to content deemed “hateful” more broadly. The learning systems will then need to be fed many examples of such “hateful” content in order to be capable of detecting it without aid...
-
This is because most of these “AI” systems represent enhanced versions of what’s known as “supervised learning”, wherein a target set (e.g. cat pictures) is given to algorithms that are very good at extracting shared “features” from the collection of targets... pic.twitter.com/wlvjriTWl7
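To make the mechanics concrete, here is a minimal sketch of that supervised-learning loop in Python, using scikit-learn on invented cat / not-cat feature vectors (the library choice, feature values, and labels are all illustrative assumptions, not anything from the thread):

```python
# Minimal supervised-learning sketch: a classifier learns the shared
# "features" of labeled examples (1 = cat, 0 = not cat).
# All feature values below are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is a hypothetical extracted feature vector for one image.
X_train = [
    [0.9, 0.8, 0.1],  # cat
    [0.8, 0.9, 0.2],  # cat
    [0.1, 0.2, 0.9],  # not cat
    [0.2, 0.1, 0.8],  # not cat
]
y_train = [1, 1, 0, 0]  # labels supplied by the "supervisor"

model = LogisticRegression()
model.fit(X_train, y_train)  # learn what the labeled targets share

# The trained model can now score unseen examples without aid.
print(model.predict([[0.85, 0.75, 0.15]]))  # -> [1], i.e. "cat"
```

The point is only that the algorithm learns whatever regularities the labeled targets share; it has no notion of “cat” (or “hate”) beyond the labels it was handed.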
-
Next, the learning system is shown examples that may or may not fit the bill (i.e. cat / not cat). It makes its predictions, which are then evaluated by the “supervisor”. In Google’s case, *this is you* when you select images containing stoplights to log into a website... pic.twitter.com/HkbN7n6JOd
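A sketch of that “supervisor” step, assuming human answers (e.g. which grid squares contain stoplights) serve as ground truth; the function name and data here are hypothetical:

```python
# Sketch of the "supervisor" step: human answers become the ground
# truth against which the model's guesses are scored.

def evaluate(model_predictions, human_labels):
    """Score predictions against human-provided labels and collect
    the disagreements for the next round of training."""
    correct = 0
    disagreements = []
    for i, (pred, truth) in enumerate(zip(model_predictions, human_labels)):
        if pred == truth:
            correct += 1
        else:
            disagreements.append(i)  # fed back into future training
    accuracy = correct / len(human_labels)
    return accuracy, disagreements

acc, retrain_queue = evaluate([1, 0, 1, 1], [1, 0, 0, 1])
print(acc, retrain_queue)  # -> 0.75 [2]
```

Disagreements flow back into the training set, so whoever supplies the labels steers what the model learns.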
-
But circling back to content censorship, the question very quickly becomes: Who decides the definitions of concepts such as “hate” used to train these learning systems? And at present, the answer is: Whoever writes the code, or manages those who do...
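As a hypothetical illustration of that point: the operative “definition” is ultimately just a rule or labeled corpus that someone encodes. Everything below (names, terms) is invented for illustration:

```python
# Hypothetical sketch: the operative "definition" of a concept like
# "hate" is whatever rule or labeled corpus the pipeline's authors
# encode. Change the set below and the downstream model's notion of
# the concept changes with it. Terms are placeholders, not real data.
FLAGGED_TERMS = {"placeholder_term_a", "placeholder_term_b"}

def label_example(text: str) -> int:
    """Return 1 ("hateful") or 0 ("acceptable") per the authors' rule."""
    words = set(text.lower().split())
    return 1 if words & FLAGGED_TERMS else 0

# Whoever controls FLAGGED_TERMS (or the labeled corpus behind it)
# controls what the trained classifier learns to suppress.
print(label_example("an example containing placeholder_term_a"))  # -> 1
```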
-
Which clearly demonstrates another vector by which ideological bias, intentional or otherwise, finds its way into the systems governing the behavior and evolution of our communication networks. Those who obsess over bias in other spheres are most likely to encode their own here...
-
In any case, these examples are just scratching the surface of what's possible, but I wanted to demonstrate clearly that:
- You're not going to detect these changes from the outside using unsophisticated approaches.
- This kind of "network flow management" is all around you...
-
These are not "conspiracy theories". They are the logical conclusion of centralized technology companies applying modern network science to domains where the technology creators and managers are themselves incapable of removing their own political biases from the equation...
-
The fundamental takeaway: Until we ensure the transparency of the processes and algorithms by which our information flow is managed, we will continue to witness the emergence of tools more powerful than humanity has ever known, capable of changing thought patterns at scale...
-
Replying to @MattPirkowski
I agree with your assessment. But I don’t think transparency is the solution. Even if the algorithm could be transparent & still performant, the resulting cacophony of public opinion about its decision-making would be impossible to address in any useful way.
-
Replying to @levity @MattPirkowski
As difficult as it may seem in the short term, I think the only long-term solution is “voting with our feet”: if we don’t trust Twitter to “manage” our communication, we have to find or create some other platform we do trust, spurring competition around both ideology & functionality.
-
Yes, we are all lab rats in an experiment in large-scale social control. Despite this, despite our communication being manipulated by unseen forces & hidden agendas, it is still “good enough”, so we keep using it. We trade convenience against ideological purity.
-
Replying to @levity @MattPirkowski
That said, there is definitely still value in just repeating “NAIVE USE OF MACHINE LEARNING JUST AMPLIFIES EXISTING BIASES” over and over. :)
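For what it's worth, a toy illustration of that slogan, with all data invented: a “naive” model fit to skewed labels simply hands the skew back as policy.

```python
# Toy bias-amplification sketch: if the training labels are skewed
# against one group, a naive fit reproduces (and, applied at scale,
# amplifies) that skew. All data below is invented.
from collections import Counter

training_labels = {"group_a": [1, 1, 1, 0], "group_b": [0, 0, 0, 1]}

# A "naive" model that just learns the majority label per group:
model = {g: Counter(ls).most_common(1)[0][0]
         for g, ls in training_labels.items()}
print(model)  # {'group_a': 1, 'group_b': 0} -- the input skew, now policy
```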