You can now use CVXPY to produce fast @PyTorch and @TensorFlow differentiable optimization layers pic.twitter.com/EWvQOXxBb5
It's almost as fast as our qpth library (https://github.com/locuslab/qpth ) and is much more flexible: you no longer have to manually restructure your problem into standard form, and we support *any* convex optimization problem expressible in CVXPY pic.twitter.com/7UQd5X8FTv
We show that this can reproduce many existing pieces of work in a few lines of code, including OptNet, sparsemax, csoftmax, and csparsemax.
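(Aside: sparsemax, mentioned above, is itself the solution of a small convex problem — the Euclidean projection of the input onto the probability simplex — which is why it fits this framework. A minimal NumPy sketch of its well-known closed-form solution, independent of any CVXPY machinery:)

```python
import numpy as np

def sparsemax(z):
    # Euclidean projection of z onto the probability simplex:
    #   argmin_y ||y - z||^2  s.t.  y >= 0, sum(y) = 1
    z_sorted = np.sort(z)[::-1]            # sort descending
    cssv = np.cumsum(z_sorted) - 1.0       # cumulative sums shifted by the simplex constraint
    k = np.arange(1, len(z) + 1)
    support = z_sorted - cssv / k > 0      # coordinates that stay positive
    rho = k[support][-1]                   # size of the support
    tau = cssv[support][-1] / rho          # threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([2.0, 1.0, 0.1])))  # a sparse probability vector
```

Unlike softmax, the output can contain exact zeros, which is the point of the sparsemax family.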
And we show a toy example of learning hard constraints in your model pic.twitter.com/bx1SlmtXYx
Our insight to make this work is to view all of the CVXPY operations as differentiable operations, which map down from the DSL to a cone program (or other solver), and then back up pic.twitter.com/TM7C7NZzYH
And we can differentiate through a cone program using the approach described in my thesis (https://github.com/bamos/thesis ) and https://arxiv.org/abs/1904.09043 -- by implicitly differentiating a residual map pic.twitter.com/LP3TH2l3MT
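(The implicit-differentiation idea can be seen on a toy problem — this is only an illustrative sketch, not the paper's cone-program residual map. For the unconstrained QP x*(p) = argmin_x ½xᵀQx + pᵀx, the optimality residual is R(x, p) = Qx + p = 0, and the implicit function theorem gives dx*/dp = -(∂R/∂x)⁻¹ ∂R/∂p = -Q⁻¹:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Q = A @ A.T + np.eye(3)            # symmetric positive definite
p = rng.standard_normal(3)

# "Solve" the optimization problem: x*(p) = -Q^{-1} p
x_star = np.linalg.solve(Q, -p)

# Jacobian of the solution map via implicit differentiation of the residual
J_implicit = -np.linalg.inv(Q)

# Sanity check against finite differences
eps = 1e-6
J_fd = np.zeros((3, 3))
for j in range(3):
    dp = np.zeros(3)
    dp[j] = eps
    J_fd[:, j] = (np.linalg.solve(Q, -(p + dp)) - x_star) / eps

print(np.max(np.abs(J_implicit - J_fd)))  # should be tiny
```

The same principle — differentiate the optimality conditions instead of unrolling the solver — is what makes differentiating through a full cone program tractable.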
And most notably, we can re-implement OptNet in a few lines of extremely readable code instead of the ~1000 lines of batched GPU-enabled primal-dual interior-point method and KKT-system derivatives I wrote a few years ago: pic.twitter.com/ZvLAeVjS9F
Wow this is awesome!
ICYMI: @BennetMeyers Stanfurd!
Yes, it's great work! (And I retweeted it earlier as well!)