astonished that anyone knowledgeable could claim that neural nets are (obviously) an “abstraction of neural processing” when we don’t yet know how brains work. if you don’t know how Y works you can’t really speak with certainty about whether X is an abstraction of Y. Period. https://twitter.com/tyrell_turing/status/1200072223299657728
-
Good point, and I'm sure philosophically the border is blurred. Practically tho, even the weirdest ANNs have to pay tribute to key limitations of bio NNs: relative uniformity of elements, low-dim parametrization of each, locality of computation. Maybe also robustness to noise.
-
So from this POV, us having this conversation proves that NNs can support thinking. That we are robust to noise suggests that NNs can be abstracted to ANNs. Thus the real question is: what's the minimal level of abstraction that would work, and how to make it work in practice :)
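As an aside, the robustness-to-noise point is easy to demonstrate numerically. A minimal numpy sketch (the network shape, the noise scale, and all names here are illustrative choices, not anything from the thread): jitter every weight of a tiny net and watch the output barely move.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny feed-forward net respecting the constraints listed above:
# uniform elements (the same tanh nonlinearity everywhere), each unit
# parametrized by a low-dimensional weight vector, and local
# computation (each unit only combines its own inputs).
W1 = rng.normal(size=(16, 4))   # hidden layer weights
W2 = rng.normal(size=(1, 16))   # output unit weights

def forward(x, W1, W2):
    h = np.tanh(W1 @ x)         # hidden layer: local, uniform units
    return np.tanh(W2 @ h)      # single output unit

x = rng.normal(size=4)
clean = forward(x, W1, W2)

# Perturb every weight with small Gaussian noise (scale is arbitrary);
# the output stays close to the unperturbed one.
noisy = forward(x,
                W1 + 0.001 * rng.normal(size=W1.shape),
                W2 + 0.001 * rng.normal(size=W2.shape))

diff = float(abs(clean - noisy)[0])
print(diff)  # small relative to the output's ~[-1, 1] range
```

The smoothness of the units is what makes this work: a bounded, Lipschitz nonlinearity like tanh cannot amplify a small weight perturbation into a large output change, which is one concrete sense in which a noisy substrate can still compute reliably.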