this is the naive approach. deep learning tends to miss the random forest for the decision trees https://twitter.com/Magic_Flowers/status/1392171257970388993
-
wanna know how i know you’re not properly hydrating your models? pruning can be replaced by a simple map->deduce where the deduction step derives in a jacobian-gradient manner. this is basic stuff
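(for what it's worth, the nearest real technique to this is first-order, gradient-times-weight saliency pruning. the sketch below illustrates that idea only; it is not the "map->deduce" step from the tweet, and the toy model, data, and prune_fraction are all assumptions made up for the example.)

```python
import numpy as np

def saliency_prune(W, grad_W, prune_fraction=0.5):
    """Zero out the weights with the smallest first-order saliency |w * dL/dw|."""
    saliency = np.abs(W * grad_W)
    k = int(prune_fraction * W.size)
    if k == 0:
        return W.copy()
    # k-th smallest saliency acts as the pruning threshold
    threshold = np.partition(saliency.ravel(), k - 1)[k - 1]
    mask = saliency > threshold
    return W * mask

# toy linear regression: L = ||X W - y||^2, so dL/dW = 2 X^T (X W - y)
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))
W = rng.normal(size=(8, 1))
y = X @ rng.normal(size=(8, 1))

grad_W = 2 * X.T @ (X @ W - y)
W_pruned = saliency_prune(W, grad_W, prune_fraction=0.5)
print("nonzero weights before/after:", np.count_nonzero(W), np.count_nonzero(W_pruned))
```

(scoring by |w * grad| rather than |w| alone is one way to make the pruning decision "gradient-aware"; whether that is what the tweet means is anyone's guess.)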