If you go by the definition of "censor": "an official who examines material that is about to be released, such as books, movies, news, and art, and suppresses any parts that are considered obscene, politically unacceptable, or a threat to security."
Replying to @NekoBlanchard @meedeeums and
If the base noun requires someone to willingly enact the censorship and the outcome is intended to be permanent, then I do think an intention to permanently sever access to that media is required for it to be labeled censorship.
This Tweet is unavailable.
Replying to @AmazonFCBryan @meedeeums and
A life is not something that can be returned the way media can, so the comparison can't be made directly. The algorithm would also likely never be so poorly conceived as to include a specific command to "kill."
This Tweet is unavailable.
Replying to @AmazonFCBryan @meedeeums and
I think the algorithm can't be blamed until it can, if that makes sense. If there's no way to reach the human who is supposed to be responsible for correcting it, then the algorithm they made and put their faith in is at fault for not adapting properly to their designs.
This Tweet is unavailable.
Replying to @AmazonFCBryan @meedeeums and
I think enough to finally cause measurable outrage, as happened with YouTube. A correction was made early on in response to that, and again later when Self-Certification was implemented. If the problems persist in this more confined space years from now, perhaps AI is a bust.
This Tweet is unavailable.
Replying to @AmazonFCBryan @meedeeums and
I don't see an issue with being more proactive. I'm operating under the assumption that someone at YT still understands what's actually happening with their algorithm at this point. If not, then I agree they're being flagrantly irresponsible. The AI is already doing complex stuff.
I can't imagine trying to do the same thing with stuff that's meant to harm or kill without somebody at least guiding it behind the scenes. I'm not sure why drones would need to be purely AI-driven. It seems unnecessarily dangerous, given the rocky history of learning AI.