Why would we give a computer such a high degree of autonomy, for example the autonomy to self-improve without human intervention or evaluation? This makes no sense from an engineering perspective.
The problem of building something that has enormous economic value and the problem of making it safe are unfortunately different problems; one can be solved without the other.
-
Yes, I agree. The question, however, will eventually boil down to what is meant by 'economic value'. Is 'economic value' in alignment with human values? If so, then the safety question and the economic-value question are entangled.
-
In the short run, it is not. Humanity may have to play a longer game than our economic entities. I am currently much more worried about the possible impact of AI and ML on the financial system than about autonomous weapons, for instance.