ML question: I don't have much experience with boosting. But it seems like instead of reweighting instances, you could equivalently resample the training set with probability proportional to the weights. (1/2)
(This seems appealing as a meta-algorithm, because then the underlying algorithm doesn't have to deal with weights and gets to work with a plain sample.) (2/2)
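The idea in the tweet can be sketched as follows. This is a minimal illustration (not from the thread) of AdaBoost-style boosting where, each round, the instance weights drive a weighted resampling of the training set, and the weak learner then trains on that resample without ever seeing weights. The 1-D decision-stump weak learner and all names here are illustrative assumptions, not a reference implementation.

```python
import math
import random

def train_stump(X, y):
    # Weak learner: best threshold classifier on 1-D inputs.
    # Note it is unweighted -- the resampling already encodes the weights.
    best = None
    for t in sorted(set(X)):
        for sign in (1, -1):
            preds = [sign if x >= t else -sign for x in X]
            err = sum(p != yi for p, yi in zip(preds, y)) / len(y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x: sign if x >= t else -sign

def boost_by_resampling(X, y, rounds=10, seed=0):
    rng = random.Random(seed)
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, stump) pairs
    for _ in range(rounds):
        # Resample instances in proportion to their weights
        # ("roulette-wheel" selection) instead of passing weights down.
        idx = rng.choices(range(n), weights=w, k=n)
        stump = train_stump([X[i] for i in idx], [y[i] for i in idx])
        # Weighted error is still measured on the ORIGINAL sample.
        err = sum(wi for wi, xi, yi in zip(w, X, y) if stump(xi) != yi)
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, stump))
        # Standard AdaBoost-style reweighting, then renormalize.
        w = [wi * math.exp(-alpha * yi * stump(xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    def predict(x):
        score = sum(a * s(x) for a, s in ensemble)
        return 1 if score >= 0 else -1
    return predict
```

On a small separable toy set (e.g. X = [0..5], y = [-1, -1, -1, 1, 1, 1]) the resampling variant recovers the same kind of weighted-vote ensemble the reweighting version would; the two differ only in that resampling is a Monte Carlo approximation of the weighted empirical distribution.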
New conversation
It seems that Breiman called this "arcing" (for "Adaptive Resampling and Combining"), but that term didn't catch on.
This was how I was first taught boosting. Thought it was called "boosting with roulette selection," but not seeing many refs.
Not sure if I'm misremembering the name or if I was exposed to boosting in a weird way. Maybe both.
End of conversation
New conversation