Edit: Thanks to Stephan Hoyer, who pointed out that my benchmark only ran on 1 TPU core, whereas a Cloud TPU has 8 available. I've updated the post, chart, and code to reflect that using the whole TPU does speed things up. My implementation could do a better job of parallelizing, though. pic.twitter.com/7OZPfA4cSe
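To give a sense of what "using the whole TPU" means here, a minimal pmap sketch (illustrative only; this is not the post's actual benchmark code, and the shapes are made up):

    import jax
    import jax.numpy as jnp

    # pmap replicates a function across devices and maps it over the
    # leading axis of its inputs, so each TPU core gets one shard.
    n = jax.device_count()          # 8 on a Cloud TPU
    xs = jnp.ones((n, 1024, 1024))  # one shard per core

    @jax.pmap
    def step(x):
        return x @ x  # each core multiplies its own shard

    print(step(xs).shape)  # (8, 1024, 1024)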
So say I don't care about running on TPUs (I know the pains of TF + TPUs all too well), what are some "killer features" of JAX over PyTorch?
vmap and pmap are the biggest ones that come to mind. I've heard anecdotally that JAX can be much faster on some workloads (don't quote me). I expect PyTorch devs to catch up eventually, though.
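As a quick illustration of vmap (a minimal sketch, not from the thread): you write the function for a single example, and vmap vectorizes it over a batch axis for you.

    import jax
    import jax.numpy as jnp

    def predict(w, x):
        # Written for a single input vector x.
        return jnp.dot(w, x)

    w = jnp.ones((3, 4))
    xs = jnp.ones((8, 4))  # a batch of 8 inputs

    # in_axes=(None, 0): broadcast w, map over axis 0 of xs.
    batched = jax.vmap(predict, in_axes=(None, 0))
    print(batched(w, xs).shape)  # (8, 3)

pmap has the same mapping semantics, but runs each slice on a separate device.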
Wdyt about http://taichi.graphics?
Taichi is amazing. It is a few years ahead of existing DL frameworks in understanding that optimizing data structures is 90% of the battle. I’m playing with it on the side.
(JAX gotchas link is a 404.)
(And typo in title.)
Looking at flow control with JAX, do you know if there's eventually going to be support for native Python flow control? Or will it always be necessary to do things like np.where(condition, a, b)?
You can use Python's native control flow (if you don't jit), lax.cond for control flow within jit, and also jax.experimental.loops: https://jax.readthedocs.io/en/latest/jax.experimental.loops.html. As for auto-magically compiling Python control flow the way Numba does, I'm not sure that's really a good idea.
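For concreteness, a minimal sketch of both styles (assuming a recent JAX where lax.cond takes pred, true_fun, false_fun, operand; the function f is a toy example):

    import jax
    from jax import lax

    def f_eager(x):
        # Plain Python control flow: fine outside jit, where x is concrete.
        return x * 2.0 if x > 0 else x - 1.0

    @jax.jit
    def f_jitted(x):
        # Inside jit, x > 0 is a traced value, so branch with lax.cond.
        return lax.cond(x > 0,
                        lambda x: x * 2.0,  # taken when x > 0
                        lambda x: x - 1.0,  # taken otherwise
                        x)

    print(f_jitted(3.0))   # 6.0
    print(f_jitted(-3.0))  # -4.0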