Less romantically put: your sense of good and evil depends on your particular identification with extrapersonal purposes. Conflicts about good and evil usually result from differing identifications.
I think it is obviously much easier for our programs to have rational integrity than it is for us: that is perhaps the least of our problems.
-
for routines with integrity, these are your practical options right now (a sketch of both follows below):
1. a hardcoded set of rules: inflexible, very easy to construct fail cases against; does not really have "integrity".
2. a trained set of choices: flexible and adaptive, yet not infallible and, like (1), likely incomplete.
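a minimal sketch of the two options, in Python. every name here is made up, and the "trained" judge is a toy stand-in for a real model; the point is only the contrast between a fixed rule table and a judge fit to examples:

from typing import Callable

# option 1: a hardcoded rule set. inflexible: any action the author did
# not anticipate falls through, and adversarial fail cases are easy to
# construct against a fixed list.
FORBIDDEN_ACTIONS = {"deceive_user", "destroy_property"}

def rule_based_ok(action: str) -> bool:
    return action not in FORBIDDEN_ACTIONS

# option 2: a trained set of choices. flexible and adaptive, but fallible
# and, like option 1, only as complete as the examples it has seen.
def make_learned_judge(examples: list[tuple[str, bool]]) -> Callable[[str], bool]:
    labels = dict(examples)
    # unseen actions get the majority training label: adaptive, not infallible
    default = sum(ok for _, ok in examples) * 2 >= len(examples)
    return lambda action: labels.get(action, default)

judge = make_learned_judge([("return_lost_wallet", True), ("read_private_mail", False)])
rule_based_ok("deceive_user")   # False: listed, so rejected
judge("read_private_mail")      # False: seen in training
judge("shade_the_truth")        # unseen: falls back to the majority label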
-
if you wanted a machine with really good integrity, this machine would have to continuously train itself on what humans of our day and age consider ethical; as i said before, that is subject to culture, which changes continuously.
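a minimal sketch of that continuous-training loop, under the same toy assumptions (fetch_current_judgments and the dict standing in for a model are hypothetical):

import time

def fetch_current_judgments() -> list[tuple[str, bool]]:
    # hypothetical: poll whatever source of present-day human ethical
    # judgments you have (annotations, surveys, user feedback)
    return [("return_lost_wallet", True), ("read_private_mail", False)]

def run(poll_interval_s: float = 3600.0) -> None:
    model: dict[str, bool] = {}  # stand-in for a real trained model
    while True:
        # refit on the latest judgments: ethics treated as a moving target
        model.update(fetch_current_judgments())
        time.sleep(poll_interval_s)  # culture drifts, so updating never stops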