The gradient almost never points to the minimum, so descending it is an odd idea.
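One way to see this concretely: on an ill-conditioned quadratic, the negative gradient and the straight line to the minimum can differ by a large angle. The numbers below (a diagonal matrix with condition number 100) are invented for illustration.

```python
import numpy as np

# f(x) = 0.5 * x^T A x has its minimum at the origin, so the direction
# to the minimum from x is -x. The (negative) gradient -A x only lines
# up with -x when A's eigenvalues are equal.
A = np.diag([1.0, 100.0])   # condition number 100 (hypothetical example)
x = np.array([1.0, 1.0])    # current point

grad = A @ x                # gradient at x
to_min = -x                 # true direction toward the minimum

cos = (-grad) @ to_min / (np.linalg.norm(grad) * np.linalg.norm(to_min))
angle_deg = np.degrees(np.arccos(cos))
print(f"angle between -gradient and direction to minimum: {angle_deg:.1f} deg")
```

Here the steepest-descent direction is roughly 44 degrees off the direction to the minimum; the worse the conditioning, the closer the angle gets to 90 degrees.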
-
-
Sometimes I think the analogical-reasoning part has to be based on harmonics, where any physical system can build maps of compatible and incompatible concepts by tuning them to resonate (or be dissonant) with each other, as a mechanism for annealing into a valid state.
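One highly speculative reading of this: treat concepts as coupled oscillators, give compatible pairs positive coupling (they pull into phase, i.e. resonate) and incompatible pairs negative coupling (they push out of phase), and let the system settle. The settled phases then partition the concepts into mutually consistent groups. Everything here — the Kuramoto-style dynamics, the couplings, the annealed noise — is an invented sketch, not anything from the thread.

```python
import math
import random

def settle(compat, n, steps=3000, k=0.2, noise0=0.5, seed=0):
    """Relax oscillator phases under signed couplings (Kuramoto-style).

    compat maps node i -> list of (j, coupling); positive coupling
    means "compatible" (attract in phase), negative means
    "incompatible" (repel toward anti-phase). Noise is annealed to
    zero so the system can escape bad configurations early on.
    """
    rng = random.Random(seed)
    phase = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for t in range(steps):
        noise = noise0 * (1.0 - t / steps)  # annealing schedule
        new = []
        for i in range(n):
            drive = sum(c * math.sin(phase[j] - phase[i])
                        for j, c in compat[i])
            new.append(phase[i] + k * drive + rng.gauss(0.0, noise) * 0.01)
        phase = new
    return phase

# Four concepts: (0,1) and (2,3) are compatible pairs; cross pairs clash.
compat = {
    0: [(1, +1.0), (2, -1.0), (3, -1.0)],
    1: [(0, +1.0), (2, -1.0), (3, -1.0)],
    2: [(3, +1.0), (0, -1.0), (1, -1.0)],
    3: [(2, +1.0), (0, -1.0), (1, -1.0)],
}
phase = settle(compat, 4)

def same(a, b):
    # roughly in phase <=> annealed into the same "consistent" group
    return math.cos(phase[a] - phase[b]) > 0.0

print(same(0, 1), same(2, 3), same(0, 2))
```

The dynamics is a gradient flow on the "frustration" energy, so compatible concepts lock together and incompatible ones end up anti-phase — one toy mechanism by which resonance could do constraint satisfaction.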
-
-
-
Do you know of any analogical-reasoning formulation of learning?
-
-
-
The gradient is roughly the right direction if you are already close to the solution in a continuous space. There is presumably a class of hybrid explore-and-exploit approaches (e.g. SA+SGD) that can perform marginally better on some problems? And second-order methods haven't typically been shown to improve on SGD?
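A minimal sketch of what such an SA+SGD hybrid might look like on a 1-D toy problem: each iteration takes a plain gradient step (exploit), then proposes a random jump accepted by a Metropolis rule under a cooling temperature (explore). The objective, learning rate, and cooling schedule are all hypothetical choices for illustration, not a reference implementation.

```python
import math
import random

def f(x):
    # non-convex 1-D objective with several local minima
    return x * x + 10.0 * math.sin(x)

def grad_f(x, eps=1e-5):
    # central-difference gradient (stand-in for an analytic gradient)
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

def sa_sgd(x0, steps=2000, lr=0.01, t0=5.0, seed=0):
    rng = random.Random(seed)
    x, temp = x0, t0
    for k in range(steps):
        # exploit: plain gradient step
        x_new = x - lr * grad_f(x)
        # explore: random jump, kept by the Metropolis acceptance rule
        cand = x_new + rng.gauss(0.0, math.sqrt(temp))
        if (f(cand) < f(x_new)
                or rng.random() < math.exp((f(x_new) - f(cand)) / temp)):
            x_new = cand
        x = x_new
        temp = t0 * (1.0 - k / steps) + 1e-3  # linear cooling
    return x

x_star = sa_sgd(x0=6.0)
print(x_star, f(x_star))
```

Starting from x = 6, plain gradient descent on this objective gets trapped in a shallow local minimum; the annealed jumps let the hybrid cross between basins early on, while the cooling lets the gradient steps dominate late and settle into whichever minimum it ends up near.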
-
Such wishful thinking. What makes you think there is a single right direction to begin with? What will you see when you get "there"? What makes you think tweaking all synaptic weights at once (hence a gradient vector) is "better"? And why would a neuron care about a target?
-
-
Formulating learning as an optimisation problem is what made ML work.
-
That was part of it, but now we need to go to the next level.
-