Life under the API is a cage of silent failures and gaslighting by algorithm. There's a growing number of ways to trip over your own feet by getting interactions with enforcement/governance algorithms even slightly wrong. These then take enormous time to fix.
-
Human bureaucracies: justice delayed is justice denied.
Algorithmic bureaucracies: justice by janky exception handling is justice denied.
-
You just can't fight it the way you would fight a human bureaucracy. The decision agents don't even have room for judgment. Railing against the machine is useless because it is literally a machine. There is no input mode for grievances and righteous indignation.
-
This mini-rant brought to you courtesy of yet another day of running into an algorithmic brick wall on some paperwork
-
The worst part is, in a mixed human-machine system, the illegibility of what's happening in the machine-owned steps of the interaction leads to distrust, anger, and confused conflict among the humans, who have a tendency to blame each other for balls dropped by machines.
-
Hmm wonder if this can be turned into a game where say Alexa or Siri type elements are the telephones in a game of telephone. Smart mishearing and transmission instead of obviously corrupted dumb transmission.
-
I think the thing that most confuses humans in dealing with automated bureaucracies is that honor and status play no part in the process. Being (or merely acting) offended, insulted or outraged... these are moves that simply don't parse when the counterparty is an algorithm
-
I suspect you could build *much* more human-grokkable AIs if you added status/emotion metadata to inputs, and there was a simple model of status/honor/dignity. It's not that complicated. A rules engine that says "If reject_count > 10 and affect = angry, set dignity=fragile"
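The rule quoted above can be sketched as a toy rules engine. This is a minimal, hypothetical illustration of the idea, not any real system: the `Interaction`, `StatusModel`, and `apply_rules` names, and the follow-on escalation rule, are all assumptions introduced for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch: each request carries status/emotion metadata
# alongside its payload, and a tiny rules engine maintains a simple
# model of the counterparty's dignity.

@dataclass
class Interaction:
    payload: dict
    affect: str = "neutral"   # e.g. "neutral", "angry", "pleading"
    reject_count: int = 0     # how many times this request has bounced

@dataclass
class StatusModel:
    dignity: str = "intact"   # "intact" or "fragile"
    escalate: bool = False    # whether to route to a human

def apply_rules(interaction: Interaction, status: StatusModel) -> StatusModel:
    # The rule from the tweet: repeated rejection plus anger
    # marks dignity as fragile.
    if interaction.reject_count > 10 and interaction.affect == "angry":
        status.dignity = "fragile"
    # Assumed follow-on rule: fragile dignity escalates the case
    # to a human instead of bouncing it again.
    if status.dignity == "fragile":
        status.escalate = True
    return status
```

Under this sketch, an eleventh angry rejection would flip `dignity` to `"fragile"` and set `escalate`, giving the machine a crude way to register that the human on the other end needs to be seen rather than re-rejected.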
-
A great deal of what humans consider the ineffable "human" touch to decision-making is just the sense of being seen in full, even if the counterparty can't do anything besides commiserate. We need AIs with Real People Personalities like in HHG.
-
Replying to @vgr
Agreed, and we should set toddlers as the target. That would help calibrate our internal mental models and lower our expectations of performance. I look at videos of my toddlers and they're barely intelligible, but I knew what they meant.