We wrote about the "what, when, and why?" of t-processes: http://proceedings.mlr.press/v33/shah14.pdf . Implementation isn't much more difficult, but there are nuanced trade-offs: the uncertainty and dependencies are subtly different and sometimes preferable, while the noise model is less interpretable. https://twitter.com/GarridoMerchan/status/1342899907989032961
In 5.1.1 you discuss the LL and MSE for TP vs GP. The results on the spatial and wine data were particularly noteworthy, with the TP having lower MSE scores and much higher LL scores. I wasn't sure why this was the case.
-
It's a different model: the uncertainty representation differs, and after optimizing the marginal likelihood for each, the kernel hyperparameters will differ too. The noise model is also subtly different. The paper discusses these differences.
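As a rough illustration of why the LL can differ so much while the mean prediction stays close: in the parameterization from the Shah et al. paper, the TP predictive has the same mean as the GP but a data-dependent rescaling of the predictive variance and Student-t tails. The sketch below assumes that parameterization; the RBF kernel, lengthscale, ν = 5, and the toy data are illustrative choices, not from the thread.

```python
import numpy as np
from scipy.stats import norm, t as student_t

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel; lengthscale is an illustrative choice.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 20)
y = np.sin(x) + 0.1 * rng.standard_normal(20)   # toy training data
xs = np.array([2.5])                             # single test input

K = rbf(x, x) + 1e-6 * np.eye(len(x))            # jitter for stability
ks = rbf(x, xs)
kss = rbf(xs, xs)

Kinv_y = np.linalg.solve(K, y)
Kinv_ks = np.linalg.solve(K, ks)
mu = (ks.T @ Kinv_y).item()                      # shared by GP and TP
var = (kss - ks.T @ Kinv_ks).item()              # GP predictive variance

# TP predictive (Shah et al. 2014): same mean, scaled variance, t tails.
nu, n = 5.0, len(x)
beta = y @ Kinv_y                                # y^T K^{-1} y
tp_scale2 = var * (nu + beta - 2.0) / (nu + n - 2.0)

y_true = np.sin(2.5)
ll_gp = norm.logpdf(y_true, loc=mu, scale=np.sqrt(var))
ll_tp = student_t.logpdf(y_true, df=nu + n, loc=mu, scale=np.sqrt(tp_scale2))
print(ll_gp, ll_tp)
```

With the same kernel, the two models give the same point prediction but different predictive densities, which is why MSE can match while LL diverges; in practice each model also ends up with its own optimized hyperparameters, widening the gap further.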
-
Got it, thanks! That clears things up. I'll spend more time carefully re-reading the discussion to get a better understanding of the subtle differences.