This is a cool way to build highly specific prior knowledge into neural nets: via highly specific loss functions https://arxiv.org/abs/1609.05566 Don't just maximize the likelihood of the data or predict the next frame. Build every constraint you know into the loss and make the network satisfy it.
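The mechanism in that paper can be sketched in a toy 1-D form: instead of labeled targets, the loss measures how badly the predictions violate a known physics constraint. Below, the "network output" is simplified to a free vector of per-frame height predictions, and the constraint is that a falling object's trajectory must be a parabola with known curvature. The specific variable names, step size, and setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy stand-in for the label-free-supervision idea (arXiv:1609.05566):
# no labels, only a physics constraint baked into the loss.
# y[i] stands in for a network's predicted height at frame time t[i];
# the constraint: y must lie on a parabola with fixed curvature a = -g/2.

t = np.linspace(0.0, 1.0, 20)          # frame times (seconds), illustrative
a = -4.9                                # known -g/2 term pins the curvature
A = np.stack([t, np.ones_like(t)], 1)  # free parabola params: velocity, offset
# Projector onto the orthogonal complement of span{t, 1}: it removes the
# best-fitting linear part, leaving only the unexplained residual.
M = np.eye(len(t)) - A @ np.linalg.pinv(A)

def constraint_loss(y):
    # Zero iff y = a*t^2 + v*t + y0 for some v, y0 -- no labels needed.
    r = M @ (y - a * t**2)
    return float(r @ r) / len(t)

rng = np.random.default_rng(0)
y = rng.normal(size=len(t))            # random initial "predictions"
for _ in range(500):
    grad = 2.0 * (M @ (y - a * t**2)) / len(t)   # analytic gradient
    y -= 0.5 * grad                              # plain gradient descent

print(constraint_loss(y))              # driven to (near) zero
```

After training, the predictions satisfy the constraint: a quadratic fit to `y` recovers the imposed curvature `a`, even though no height labels were ever given. In the paper the same loss supervises a real vision network on video frames.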
I think it is also obvious to
@tyrell_turing that this is the level of specificity of loss function that evolution could build in for the brain to optimize, if it "does DL": things like this, but specific to brain area and developmental stage, and to a specific animal's ethological needs.
When one speculates about the brain as some kind of DL-type system, it does not mean it has an architecture or loss function similar to current systems. To me it means (a) a powerful optimization algorithm running during the organism's lifetime, plus (b) many highly specific evolved loss-function recipes like this.
End of conversation
New conversation
You're saying that the brain (or evolution) is also optimising for the "usefulness" of auxiliary objectives?
Quite so
End of conversation
New conversation