My 1st question for the 1st AGI: "What is the best utility function for us (you and me)?" @Plinz
-
AGIs need not learn from us; they may choose to do so, but they could also reject our ideas. We agree here - learning should never be forced upon anyone. The last part is overly pessimistic - all ideas necessarily contain error and should be held tentatively.
-
Agreed with @Plinz. His opinion does not contradict Popper and @DavidDeutschOxf, I guess.
-
The human mind is, and forever will be, different from an AGI. The two cannot communicate on a "psychological" basis, nor can they interact socially.