Life under the API is a cage of silent failures and gaslighting by algorithm. There's a growing number of ways to trip over your own feet by getting interactions with enforcement/governance algorithms even slightly wrong. These then take enormous time to fix.
The worst part is that in a mixed human-machine system, the illegibility of the machine-owned steps of an interaction leads to distrust, anger, and confused conflict among the humans, who tend to blame each other for balls dropped by machines.
Hmm wonder if this can be turned into a game where say Alexa or Siri type elements are the telephones in a game of telephone. Smart mishearing and transmission instead of obviously corrupted dumb transmission.
I think the thing that most confuses humans in dealing with automated bureaucracies is that honor and status play no part in the process. Being (or merely acting) offended, insulted or outraged... these are moves that simply don't parse when the counterparty is an algorithm
I suspect you could build *much* more human-grokkable AIs if you added status/emotion metadata to inputs, and there was a simple model of status/honor/dignity. It's not that complicated. A rules engine that says "If reject_count > 10 and affect = angry, set dignity=fragile"
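The rule in that tweet can be sketched as a tiny rules engine. This is a minimal illustration, not a real system: all field names (`reject_count`, `affect`, `dignity`) are hypothetical labels taken from or invented around the tweet's example.

```python
# Minimal sketch of the hypothetical status/emotion rules engine described
# above. Every field name here is illustrative, not a real API.

def update_dignity(metadata: dict) -> dict:
    """Apply a simple status/honor rule to interaction metadata."""
    state = dict(metadata)  # don't mutate the caller's dict
    # The tweet's example rule: repeated rejections plus visible anger
    # signal wounded dignity, which downstream logic could respond to.
    if state.get("reject_count", 0) > 10 and state.get("affect") == "angry":
        state["dignity"] = "fragile"
    return state

# A counterparty that has been rejected 12 times and sounds angry:
state = update_dignity({"reject_count": 12, "affect": "angry"})
print(state["dignity"])  # fragile
```

The point of the sketch is that the model really can be this simple: a handful of legible if-then rules over status/emotion metadata, rather than anything learned or opaque.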
A great deal of what humans consider the ineffable "human" touch in decision-making is just the sense of being seen in full, even if the counterparty can't do anything besides commiserate. We need AIs with Genuine People Personalities, like in HHG.