ICYMI: PyTorch 1.10 was released last Thursday. Here are some highlights of the release.
Stay tuned for tweet threads in the next couple weeks delving deeper into these cool new features!
1/8
nn.Module parametrization (moving from Beta to Stable) lets you implement reparametrizations in a user-extensible manner. For example, you can apply spectral normalization or enforce that a parameter is orthogonal! See twitter.com/rasbt/status/1 for an example.
4/9
Quoted tweet:
I always think that PyTorch is already so feature-rich and polished — what could they change or add?
A neat addition is the "parametrize" module. Below, a quick example creating a custom layer. The really cool use cases come from applying it to larger modules, of course! twitter.com/PyTorch/status…
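The quoted tweet's custom-layer example isn't embedded here; as a stand-in, here is a minimal sketch of how `torch.nn.utils.parametrize` works, using a toy symmetric-weight constraint (the `Symmetric` class is illustrative, not from the original tweets):

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

# Illustrative parametrization: constrain a square weight to be symmetric
# by reconstructing it from its upper triangle.
class Symmetric(nn.Module):
    def forward(self, X):
        return X.triu() + X.triu(1).transpose(-1, -2)

layer = nn.Linear(3, 3)
parametrize.register_parametrization(layer, "weight", Symmetric())

# Accessing layer.weight now goes through the parametrization,
# so the weight is symmetric by construction and stays so under training.
W = layer.weight
print(torch.allclose(W, W.transpose(0, 1)))  # True
```

Built-in parametrizations such as `torch.nn.utils.parametrizations.spectral_norm` and `orthogonal` follow the same registration pattern.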
A new LLVM-based JIT compiler is now available for CPUs; it can fuse sequences of PyTorch ops to improve performance. We've had this ability on GPUs for some time, and this release brings it to CPUs. In certain cases it can deliver massive speedups!
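As a sketch, the fuser targets chains of pointwise ops inside TorchScript functions like the one below; whether fusion actually kicks in depends on your build and configuration, and the function itself is just an illustrative example, not from the release notes:

```python
import torch

# A chain of pointwise ops is the kind of pattern the JIT fuser can
# combine into a single kernel, avoiding intermediate tensors.
@torch.jit.script
def fast_gelu(x):
    # Sigmoid approximation of GELU: three pointwise ops in a row.
    return x * torch.sigmoid(1.702 * x)

x = torch.randn(1024)
y = fast_gelu(x)  # runs through the TorchScript JIT; eligible for fusion
```

Scripted and eager execution produce the same results; the benefit is purely in kernel launch and memory-traffic overhead.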
6/9
Some additional links:
CUDA Graphs: pytorch.org/docs/master/no
FX: pytorch.org/docs/master/fx
CPU Fuser: colab.research.google.com/drive/1xaH-L0X
NNAPI: pytorch.org/tutorials/prot
Conjugate View: pytorch-dev-podcast.simplecast.com/episodes/conju
9/9
Here is an example benchmark showing the improved PyTorch performance on an HPC benchmark: github.com/dionhaefner/py

