The reason most (not all) methods don't add value (over baseline) when scaled is that they're "extra training data in disguise", so their benefit vanishes in the high-data regime. https://twitter.com/ilyasut/status/1106323934209662976
-
Fair enough. Its runtime would be pretty shit though...?
-
Depends on how many predictions you want to make from the data. Many interpolants (like k-nearest neighbours) have no training time! They only become slow once you make lots of predictions from the same data.
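
To make that trade-off concrete, here is a minimal brute-force k-nearest-neighbour sketch in Python (the class name, shapes, and sizes are illustrative, not from the thread): "fitting" only stores the data, so training is essentially free, while every prediction scans all n stored points.

```python
import numpy as np

class BruteForceKNN:
    """Illustrative k-NN regressor: no training cost, per-query cost O(n * d)."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # "Training" is just memorising the data.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        return self

    def predict(self, queries):
        # Each query computes a distance to every stored point.
        queries = np.asarray(queries, dtype=float)
        preds = []
        for q in queries:
            dists = np.linalg.norm(self.X - q, axis=1)
            nearest = np.argsort(dists)[: self.k]
            preds.append(self.y[nearest].mean())  # average label of the k neighbours
        return np.array(preds)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 5))
y_train = X_train.sum(axis=1)

model = BruteForceKNN(k=5).fit(X_train, y_train)   # effectively instant
print(model.predict(rng.normal(size=(3, 5))))      # prediction is where the cost lives
```

So the runtime objection holds only at prediction time: the more stored data and the more queries, the slower this gets, which is the opposite profile of a parametric model that pays up front in training.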