That's cool! Not sure what you're referring to about the Gaussian activation in the pyramidal neuron, but I suppose this is something weirdly difficult to achieve in an equivalent ANN with backprop! Actually, you know what... gimme a minute, I want to try it!
-
Some discussion here was suggesting that https://www.reddit.com/r/MachineLearning/comments/eky6m4/d_new_biologically_discovered_activation_function/?utm_source=ifttt ... Well, if you use the same activation, backprop might do it? Not sure tho :)
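A minimal sketch of what "using the same activation" could look like in TensorFlow, assuming the Gaussian form f(x) = exp(-x^2) (the exact form used in the paper is an assumption here):

    import tensorflow as tf

    # Assumed Gaussian-shaped activation, f(x) = exp(-x^2); it is differentiable, so backprop can train through it
    def gaussian(x):
        return tf.exp(-tf.square(x))

    # Keras accepts any callable as an activation, e.g. a hidden layer of 4 Gaussian units
    layer = tf.keras.layers.Dense(4, activation=gaussian)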
- 2 more replies
New conversation -
crucial here (as the random gen in your example). [3] Of course in both examples (mine with backprop, yours with GA) there is still the bias layer (correct?); I tried without it a couple of times but it doesn't seem to converge (and I bet that's correct). [4] Without your example it would /2
-
There is a bias, yes, and it's probably highly influential due to the way the random generators' deviations are set; I'm quite sure it could be eliminated with more evolution and better hyper-hyperparameters
- 3 more replies
New conversation -
It works but... caveats: [1] I treated it as a regression (maybe a different result with a sigmoid in the final layer?); [2] I tried it a bunch of times and it pretty often converges before 100 epochs (let's say 20 times out of 30) - of course the "double" randomness in TF initialization is 1/x pic.twitter.com/c3nuWLIidX
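A hypothetical reconstruction of that experiment, since the attached screenshot isn't available: XOR posed as a regression with a single Gaussian unit, a bias, and both random seeds pinned. The architecture, learning rate, and seed values are assumptions, not the author's actual code.

    import numpy as np
    import tensorflow as tf

    # Pin both sources of randomness (data and weight init) mentioned in the tweet
    np.random.seed(0)
    tf.random.set_seed(0)

    def gaussian(x):
        return tf.exp(-tf.square(x))  # assumed Gaussian activation, f(x) = exp(-x^2)

    # XOR treated as regression: targets 0.0 / 1.0, MSE loss instead of a sigmoid output
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
    y = np.array([[0], [1], [1], [0]], dtype=np.float32)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(1, activation=gaussian, use_bias=True),  # the bias layer discussed above
        tf.keras.layers.Dense(1),  # linear output head, since this is a regression
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
    model.fit(X, y, epochs=100, verbose=0)
    print(model.predict(X).round(2))  # should approach [[0], [1], [1], [0]] on most runs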
-
have taken a lot of trial-and-error attempts / neataptic / hyperopt / GA (lol) to find the best params/activation function, I guess! So good work! :) 3/3