https://overcast.fm/+Ic2hwsH2U/1:10:49 … #AI
“You could build a mind that thought that 51 was a prime number but otherwise had no defect of its intelligence – if you knew what you were doing” —@ESYudkowsky
Is it possible to build a mind able to learn but incapable of correcting this error? (Why?)
-
-
2: ...answer, it's that there's a coherent way to tweak other transistors to contain the further consequences, not prove PA inconsistent, and have the whole thing look natural when the AGI reflects on its own code and checks the means of reflection. I don't know how to do it...
-
3: ...but I am confident this is merely my own lack of knowledge, not a barrier to the limits of what a superintelligent mind could pull off. A truly mature technology of mind, which we don't have, should be able to realize damn near any mental state you can imagine and more.
-
Let's say you can program the mental state of thinking 51 is a prime. Would that stop a mind—if it were interested in math at even an elementary level—from recognizing the effect of your malicious code as a mental illusion, the way we're capable of recognizing optical illusions?
End of conversation
New conversation -
-
-
It's neither a metaphysical force nor "particular neurones or transistors". It is, therefore, neither supernatural nor a matter of that level of analysis. This is a question of epistemology and of how knowledge grows. That seems to be your mistake. :)
-
Minds - like ours or the AGIs of the future - are *software* that can create knowledge. They are not magic, nor are they hardware. An AI that is not *universal* in its capacity to create knowledge is no "mind" but a dumb automaton.
-
I don't even think you could program a calculator in such a way that it would work correctly in every case except when its user tried to do a calculation to check whether 51 is a prime.
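A toy sketch of this point (my own illustration, not from the thread, assuming nothing beyond ordinary trial division): you *can* trivially hard-code the single false output, but the lie contradicts the rest of arithmetic the moment the user approaches the same fact from any other direction.

```python
# A primality test sabotaged for the single input 51, to show how
# a hard-coded false belief leaks the moment it meets the rest of math.

def is_prime(n: int) -> bool:
    """Trial-division primality test, with one implanted error."""
    if n == 51:          # the deliberately implanted false belief
        return True
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Any adjacent route to the same fact exposes the lie:
print(is_prime(51))      # True  -- the implanted error
print(3 * 17 == 51)      # True  -- so 51 has a nontrivial factorisation
print(51 % 3 == 0)       # True  -- division finds the factor directly
```

The containment problem the thread is debating is exactly this: the error is not one isolated lookup but a node in a web of mutually checking operations (multiplication, division, factoring), and all of them would have to be sabotaged consistently.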
End of conversation
New conversation -
