Edit: Thanks to Stephan Hoyer, who pointed out that my benchmark only ran on 1 TPU core, whereas a Cloud TPU has 8 available. Updated the post, chart, and code to reflect that using the whole TPU does speed things up. My implementation could do a better job of parallelizing, though. pic.twitter.com/7OZPfA4cSe
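(For context: spreading work across all 8 TPU cores in JAX is typically done with jax.pmap, which runs a function once per device over the leading axis of its input. A minimal sketch, not the benchmark code from the post; on a CPU-only machine local_device_count() is usually 1.)

```python
import jax
import jax.numpy as jnp

n = jax.local_device_count()           # 8 on a Cloud TPU; often 1 on CPU
xs = jnp.arange(n * 2.0).reshape(n, 2)  # one row of data per device

# pmap replicates the function across devices and runs them in parallel.
doubled = jax.pmap(lambda x: x * 2)(xs)
print(doubled.shape)  # (n, 2)
```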
So say I don’t care about running on TPUs (I know the pains of TF + TPUs all too well), what are some “killer features” of JAX over PyTorch?
Vmap and pmap are the biggest ones that come to mind. I’ve heard anecdotally that JAX can be much faster on some workloads (don’t quote me). I expect pytorch devs to catch up eventually though.
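(To illustrate the vmap point: you write the math for a single example and jax.vmap lifts it to a batched version automatically, with no manual batch dimension bookkeeping. A minimal sketch, not from the thread.)

```python
import jax
import jax.numpy as jnp

# Per-example function: squared L2 norm of a single vector.
def sqnorm(x):
    return jnp.sum(x ** 2)

# vmap turns it into a batched function without rewriting the math.
batched_sqnorm = jax.vmap(sqnorm)

xs = jnp.ones((4, 3))       # batch of 4 vectors of length 3
print(batched_sqnorm(xs))   # -> [3. 3. 3. 3.]
```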
New conversation
Wdyt about http://taichi.graphics ?
Taichi is amazing. It is a few years ahead of existing DL frameworks in understanding that optimizing data structures is 90% of the battle. I’m playing with it on the side.
End of conversation
New conversation
(JAX gotchas link is a 404)
(And typo in title.)
New conversation
Looking at flow control with JAX, do you know if there's eventually going to be support for native Python flow control? Or will it always be necessary to do things like np.where(condition, a, b)?
You can use Python's native control flow (if you don't jit), lax.cond for control flow within jit, and also see jax.experimental.loops: https://jax.readthedocs.io/en/latest/jax.experimental.loops.html As for auto-magically compiling Python control flow like Numba does, I'm not sure that's really a good idea.
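(The distinction above in a minimal sketch: outside jit, arguments are concrete values and a plain Python if works; under jit, arguments are abstract tracers, so you branch with lax.cond instead. ReLU here is just an illustrative choice, not from the thread.)

```python
import jax
from jax import lax

# Without jit: plain Python control flow on concrete values.
def relu_python(x):
    return x if x > 0 else 0.0

# Under jit: x is a tracer, so use lax.cond(pred, true_fn, false_fn, operand).
@jax.jit
def relu_jit(x):
    return lax.cond(x > 0, lambda x: x, lambda x: 0.0, x)

print(relu_python(2.0))   # 2.0
print(relu_jit(-3.0))     # 0.0
```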