Thank you, especially for including the notebook!! A nice paper for the weekend :)
-
Both works quantize gradients to one bit. The difference: we don't bother correcting the gradient quantization error. We show theoretically that you can still get good convergence rates without that correction. And yes, it is now in @PyTorch
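For anyone curious what "quantize gradients to one bit, no error correction" looks like in practice, here is a minimal PyTorch sketch (illustrative, not the authors' code): each gradient is replaced by its sign before the update, and no quantization-error term is carried over between steps.

import torch

def signsgd_step(params, lr=1e-3):
    # Apply sign-compressed gradients in place; assumes .grad is already
    # populated (e.g. after loss.backward()). No error-feedback term is kept.
    with torch.no_grad():
        for p in params:
            if p.grad is not None:
                p -= lr * torch.sign(p.grad)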
End of conversation
New conversation
Could there be implications/benefits for privacy in a federated learning context? (If I'm understanding the work correctly)
-
Since each user only needs to send a single bit, our approach improves privacy. More interestingly, even if many users' devices are faulty, or many users are outright adversaries, it is still robust to all of them, yaayyyyy
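A hedged sketch of the majority-vote idea behind that robustness (names illustrative, assuming signSGD-style one-bit updates): the server sums the workers' sign bits per coordinate and takes the sign of the sum, so a minority of faulty or adversarial votes cannot flip the aggregate direction.

import torch

def majority_vote(worker_signs):
    # worker_signs: list of tensors with entries in {-1, +1}, one per worker.
    # Coordinates with a tied vote sum to zero and produce a zero (no-op) update.
    return torch.sign(torch.stack(worker_signs).sum(dim=0))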
End of conversation
New conversation
Did something similar a year ago: https://chombabupe.quora.com/Deep-Support-Vector-Machine-Networks?ch=10&share=c25620c9&srid=TwSi …
-
Doesn't surprise me that these techniques work. I worked on adaptive filters in the '90s. Sign LMS adaptive filters work just fine; indeed, so does sign-sign LMS. LMS filtering, including sign LMS, has been around since 1960, when Widrow invented the technique at Stanford.
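For reference, a minimal NumPy sketch of the sign-LMS family mentioned here (illustrative names and step size, not anyone's production code): standard LMS updates with the full error e, sign LMS replaces e with sign(e), and sign-sign LMS also replaces the input with its sign.

import numpy as np

def lms_step(w, x, d, mu=0.01, sign_error=True, sign_data=False):
    # One adaptive-filter update: w are the filter weights, x the input
    # vector, d the desired output sample, mu the step size.
    e = d - w @ x
    err = np.sign(e) if sign_error else e   # sign LMS quantizes the error
    g = np.sign(x) if sign_data else x      # sign-sign LMS also signs the input
    return w + mu * err * g, e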