Distance is represented with cascades, but that is unrelated. You don't need to implement gradient descent on the level of individual neurons. (Our computers also don't implement it on individual transistors.)
Replying to @Plinz
I can't disprove that gradient descent can't be simulated by neurons. Good model for now, but I am not too comfortable with it yet.
Replying to @IntuitMachine
How would you feel if you saw a more general but less efficient paradigm that looks more natural to individual neurons but would allow them to learn to execute a gradient descent algorithm where needed?
Replying to @Plinz
I can't say that gradient descent is efficient. All that is known is that it works very well. However, I suspect that there is something else that is more efficient than gradient descent but works with discrete values. Like dark matter, it is out there; we just haven't found it.
Replying to @IntuitMachine
Gradient descent is a large family of approaches and just means that you use the previously gathered information to predict the structure of the solution space, so you can converge on the solution as fast as possible.
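This idea of using gathered slope information to converge quickly can be sketched in a few lines. The toy objective, learning rate, and step count below are my own illustrative choices, not anything from the thread:

```python
# Minimal gradient descent sketch on a toy one-dimensional function.
# All specifics (objective, learning rate, step count) are illustrative.

def grad(x):
    # Analytic gradient of f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    # Each step uses local slope information to move toward the minimum.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

best = gradient_descent(0.0)  # converges toward the minimum at x = 3
```

The key property being described: each update exploits a model of the local solution space (the gradient) rather than blind trial and error.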
Replying to @Plinz
How would you distinguish this definition from an evolutionary approach? Is the only difference that one is eager and the other lazy?
Replying to @IntuitMachine
The (naive) evolutionary approach works in the absence of a model of the solution space, at the expense of being unable to cross its discontinuities.
Replying to @Plinz
Should that be 'able' and not 'unable'? Also 'benefit' and not 'expense'?
Replying to @IntuitMachine
Evolution requires a somewhat viable solution at every step, and derives the next one by mutating the previous one. That means that it cannot cross a chasm of unviability.
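The mutate-and-select loop described here can be sketched as a toy (1+1) evolutionary search; the objective and mutation scale are assumptions made for illustration:

```python
import random

def evolve(fitness, x0, steps=1000, sigma=0.1, seed=0):
    """Toy (1+1) evolutionary search: mutate the current solution
    and keep the mutant only if it is at least as fit."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        candidate = x + rng.gauss(0.0, sigma)
        if fitness(candidate) >= fitness(x):
            x = candidate  # the next generation derives from this one
    return x

# Maximize f(x) = -(x - 3)^2 by mutation and selection alone,
# with no gradient information at all.
best = evolve(lambda x: -(x - 3.0) ** 2, x0=0.0)
```

Note that every intermediate `x` is itself a complete candidate solution, which is exactly the "viable solution at every step" constraint the tweet describes.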
Replying to @Plinz
Viability is a very loose metric. A mutation can be viable but less fit than the previous generation. This is different from gradient descent, where the next step is always one of greater fitness.
If you think about the evolutionary landscape as a mountain range, you don't always have to find a higher peak, but you also cannot cross areas that are under water. Nonviable phenotypes are under water, i.e. they don't have positive fitness and cannot reproduce.
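The mountain-range picture can be made concrete with a toy landscape (all numbers invented for illustration): two islands of positive fitness separated by "water", with every generation required to be viable.

```python
import random

def fitness(x):
    # Two islands of positive fitness separated by water
    # (negative fitness) between x = 1 and x = 3.
    if 0.0 <= x <= 1.0:
        return 1.0 - abs(x - 0.5)        # low peak at x = 0.5
    if 3.0 <= x <= 4.0:
        return 2.0 - 2.0 * abs(x - 3.5)  # higher peak at x = 3.5
    return -1.0                           # under water: nonviable

def evolve(x0, steps=5000, sigma=0.1, seed=1):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        c = x + rng.gauss(0.0, sigma)
        # Each generation must itself be viable (positive fitness).
        if fitness(c) > 0 and fitness(c) >= fitness(x):
            x = c
    return x

best = evolve(0.2)
# Small mutations cannot bridge the 2-unit nonviable chasm, so the
# search stays on the low island even though a higher peak exists.
```

With a mutation scale of 0.1, crossing the chasm would require a single 20-sigma jump, so the population climbs the low peak and stops: the discontinuity in viability is uncrossable.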