2/ The question is, is the prior over functions one you'd like to use? Neural nets let us encode some interesting invariances, so often the answer is yes, I would like to use that.
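The prior over functions a neural net encodes can be inspected directly: draw weights from a prior and push them through the network, and each draw is a sample from the induced function prior. A minimal sketch of that idea (toy single-hidden-layer tanh net with Gaussian weights; the architecture and scales are illustrative assumptions, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior_function(x, hidden=100, weight_std=1.0):
    """Draw one function from the prior induced by Gaussian weights
    on a toy single-hidden-layer tanh network (illustrative setup)."""
    d_in = x.shape[1]
    # Weight prior: zero-mean Gaussians, scaled by fan-in so the
    # output variance stays roughly constant as width grows.
    W1 = rng.normal(0.0, weight_std / np.sqrt(d_in), size=(d_in, hidden))
    b1 = rng.normal(0.0, weight_std, size=hidden)
    W2 = rng.normal(0.0, weight_std / np.sqrt(hidden), size=(hidden, 1))
    h = np.tanh(x @ W1 + b1)
    return (h @ W2).ravel()

# Evaluate a few prior draws on a 1-D grid.
x = np.linspace(-3, 3, 50)[:, None]
samples = np.stack([sample_prior_function(x) for _ in range(5)])
print(samples.shape)  # (5, 50): five function draws on fifty inputs
```

Plotting such draws is one common way to judge whether the induced function prior looks reasonable for a given task.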
3/ But really people should do more reading and less writing. Then we wouldn't need to keep revisiting questions we'd already resolved.
End of conversation

New conversation
Substitute "weight prior" for "function prior" in my thread and the points still stand. Knowing whether a function prior is good requires understanding its interplay with the task at hand. But if you can explain why any BNN prior must be good for ImageNet, I'll be convinced.
That's the fundamental question that probabilistic modelers are interested in. One way of answering it is given in this paper: https://arxiv.org/pdf/1902.05888.pdf From it we see that translation insensitivity is important (on top of the priors that composition of processes gives you).
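Translation insensitivity can be made concrete in the exact (equivariant) case: a convolution with circular padding commutes with shifts of its input, which is the structural property convolutional priors bake in. A minimal 1-D sketch of that check (numpy only; the kernel and signal are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
kernel = rng.normal(size=5)

def circ_conv(signal, kernel):
    """1-D correlation with circular padding: shift-equivariant."""
    n, k = len(signal), len(kernel)
    pad = k // 2
    # Wrap the signal around so every output position sees a full window.
    ext = np.concatenate([signal[-pad:], signal, signal[:pad]])
    return np.array([ext[i:i + k] @ kernel for i in range(n)])

x = rng.normal(size=32)
# Shifting then convolving gives the same result as
# convolving then shifting: translation equivariance.
shifted_then_conv = circ_conv(np.roll(x, 3), kernel)
conv_then_shifted = np.roll(circ_conv(x, kernel), 3)
print(np.allclose(shifted_then_conv, conv_then_shifted))  # True
```

The same property, combined with pooling, is what yields the (approximate) translation insensitivity the paper points to.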
New conversation
They should attend the GP summer school to gain that intuition :)
I think that the "field" is not as well defined and compact as it used to be; we thankfully increased diversity/interdisciplinarity (i.e., width) but, imo, we didn't go as far wrt depth.