Well… *all* current deep learning and machine learning is just "data in disguise": the 1-nearest-neighbor algorithm approaches the best possible classifier just by adding more data (its error converges to at most twice the Bayes error in the infinite-sample limit; https://en.m.wikipedia.org/wiki/K-nearest_neighbors_algorithm#The_1-nearest_neighbor_classifier )
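A minimal sketch of the claim above (my own toy example, not from the thread): on a synthetic 1-D problem with a known Bayes error of 0.2, a brute-force 1-NN classifier's test error should drift down toward its asymptotic value, which sits below twice the Bayes error (0.4).

```python
# Sketch (hypothetical example): empirically watch 1-NN test error approach
# its asymptote, which is bounded by twice the Bayes error.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    x = rng.random(n)                       # features uniform on [0, 1]
    p = np.where(x > 0.5, 0.8, 0.2)         # P(y=1 | x); Bayes error = 0.2
    y = (rng.random(n) < p).astype(int)
    return x, y

def one_nn_error(n_train, n_test=2000):
    xtr, ytr = sample(n_train)
    xte, yte = sample(n_test)
    # brute-force 1-nearest-neighbor prediction in 1-D
    idx = np.abs(xte[:, None] - xtr[None, :]).argmin(axis=1)
    return float(np.mean(ytr[idx] != yte))

for n in (10, 100, 1000, 4000):
    print(n, one_nn_error(n))
```

For this problem the asymptotic 1-NN error is E[2p(1-p)] = 0.32, comfortably under the 2 × 0.2 = 0.4 bound; the printed errors should settle near that level as n grows.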
"Extra training data in disguise"? Can you please elaborate?
"New" algorithms hardcode prior "structure" into the solution, structure which the baseline is capable of learning on its own, given enough data. Therefore, on large data sets, the "new" algorithm loses its advantage over the baseline.
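One way to see this point concretely (a hypothetical illustration, not from the thread): suppose the target is symmetric in x. A "new" model can hardcode that prior by featurizing x → |x|, while a plain k-NN baseline on raw x has to learn the symmetry from data. The prior helps most at small n; as n grows, the baseline closes the gap.

```python
# Sketch (hypothetical example): hardcoded symmetry prior vs. plain baseline.
# Both are k-NN; the "prior" model maps x -> |x| before measuring distance.
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(-1, 1, n)
    p = np.where(np.abs(x) > 0.5, 0.9, 0.1)   # symmetric target, Bayes error 0.1
    y = (rng.random(n) < p).astype(int)
    return x, y

def knn_error(feat, n_train, n_test=2000):
    xtr, ytr = sample(n_train)
    xte, yte = sample(n_test)
    k = max(1, int(np.sqrt(n_train)))          # k grows with n (so k-NN is consistent)
    d = np.abs(feat(xte)[:, None] - feat(xtr)[None, :])
    nbrs = ytr[np.argsort(d, axis=1)[:, :k]]   # labels of the k nearest neighbors
    pred = (nbrs.mean(axis=1) > 0.5).astype(int)
    return float(np.mean(pred != yte))

baseline = lambda x: x             # no prior: must learn the symmetry
symmetric = lambda x: np.abs(x)    # hardcoded symmetry prior

for n in (20, 200, 2000):
    print(n, knn_error(baseline, n), knn_error(symmetric, n))
```

With enough training data both variants approach the Bayes error, and the advantage of the hardcoded prior shrinks, which is the "loses its advantage on large data sets" claim in miniature.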
Should we, then, introduce a "scalability score" metric for algorithms, kind of like what DxOMark is for cameras?