Many very complex systems have relatively simple transition functions. If you want to argue about how best to build a system, you need different arguments than for the much stronger claim that a system cannot possibly be built in a particular way.
Computer architectures have evolved since the 1960s, and they will continue to follow whatever computational paradigm we want to pursue. Nervous systems don't have that degree of flexibility: they are constrained to local control, have no address space, and face expensive long-range routing.
Your claim is now that non-local control, address spaces, and long-range routing are sufficient to reduce the number of neurons required. Why should we believe this?
If you add constraints on the way functionality can be represented and learned, you will likely lose efficiency (i.e., need more moving parts to achieve the same result). If that is true, then removing those constraints increases the potential for greater efficiency.