This is true, but only if your objective function is very narrowly defined. If you include human wellbeing, social functioning, etc. as part of the system's (complex) objectives, I suspect "human in the loop" is "better" in many cases.
-
Why should AI not be better than humans at serving any human objective function, including discovering that function? Eventually every human in the loop will make things worse, which means more people will suffer and die than otherwise would.
New conversation
“Giving power” does not necessarily mean we lose any. We gave a lot of power to computers, cars, planes, etc. I don’t feel the need to get this power back...
-
Yes, because we can still overrule all their decisions; the machines are our tools, not our administrators. The world is still very democratic: if you want to have power, just become an oligarch. That time might be over soon.
End of conversation
New conversation
... Now then, now then, how many nice attractors are there for these AIs to settle into? Do we worry about which one we bias our AI toward? :D
-
The problem is not how to bias one AI but that we will inevitably build so many of them.
End of conversation
New conversation
Plausible (instinctively) that AlphaGo gets worse with a human in the loop, but is that documented somewhere?
-
I don't know if there is a paper on this. At the moment the best human players have no way to beat it. (It even obliterated 2700 years of Go history in a single day.) That implies that human changes to its moves will not improve it.
New conversation
The good news is you’ll have no choice.
End of conversation