Stephan Hoyer

@shoyer

ML for science and NumPy core dev. These are my opinions, not my employer's.

San Francisco, CA
Joined August 2009

Tweets


  1. Pinned Tweet
    Sep 27, 2019

    I'm happy to share a new paper with co-authors: "Neural reparameterization improves structural optimization". We use neural nets to parameterize the inputs of a finite element method, and differentiate through the whole thing.

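    The core trick is easy to sketch: a neural network maps a latent vector to design parameters, a differentiable physics model scores the design, and jax.grad differentiates through both at once. This is a minimal illustration with a made-up MLP and a placeholder "compliance" function, not the paper's actual model or solver.

        import jax
        import jax.numpy as jnp

        # Hypothetical stand-ins: the paper uses a CNN and a differentiable
        # finite element analysis; here both are toy placeholders.
        def neural_net(params, z):
            # Tiny MLP mapping a latent vector to a grid of densities in (0, 1).
            h = jnp.tanh(params['w1'] @ z + params['b1'])
            return jax.nn.sigmoid(params['w2'] @ h + params['b2'])

        def compliance(densities):
            # Placeholder for a differentiable structural solver.
            return jnp.sum(densities ** 2)

        def loss(params, z):
            return compliance(neural_net(params, z))

        # Differentiate through the "solver" and the network in one call.
        grad_fn = jax.grad(loss)

        z = jax.random.normal(jax.random.PRNGKey(0), (16,))
        params = {'w1': jnp.zeros((32, 16)), 'b1': jnp.zeros(32),
                  'w2': jnp.zeros((64, 32)), 'b2': jnp.zeros(64)}
        grads = grad_fn(params, z)
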
  2. Retweeted
    18 hours ago

    xarray 0.15.0 is out, with binderized examples! Full changelog: Thanks to all 32 (!) contributors to this release.

  3. Retweeted
    Jan 30

    Open source is more than just a price tag. It's a community and culture based on collaboration and open exchange of ideas.

  4. Retweeted
    Jan 29

    Tensor / array library developers! Please save the date for the Tensor Developer Summit, March 19–20. Registration opens soon.

  5. Jan 27

    Anyone know what's up with 's dependency tracking? Until recently we saw a long list of dependents for , but now it's entirely empty?

  6. Retweeted
    Jan 24

    Interesting analysis suggesting that the reason for the disappointing performance of many modern CNN architectures is that their depthwise convolutions are memory-bound.

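    A rough back-of-the-envelope calculation makes the memory-bound claim concrete: a depthwise convolution performs far fewer FLOPs per byte of activations moved than a dense convolution, so accelerators spend most of their time waiting on memory. The sizes below are arbitrary illustrative assumptions, not numbers from the linked analysis.

        # Arithmetic intensity (FLOPs per byte of activations) for a KxK convolution
        # over an H x W x C feature map in float32. Weight traffic is ignored.
        H, W, C, K = 56, 56, 128, 3
        bytes_per_elem = 4

        act_bytes = 2 * H * W * C * bytes_per_elem   # read input + write output

        dense_flops = 2 * H * W * C * C * K * K      # every output channel sees every input channel
        depthwise_flops = 2 * H * W * C * K * K      # each channel convolved independently

        print('dense    :', dense_flops / act_bytes, 'FLOPs/byte')      # ~288
        print('depthwise:', depthwise_flops / act_bytes, 'FLOPs/byte')  # ~2.25
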
  7. Retweeted
    Jan 22

    Have you ever wondered what will be the ML frameworks of the '20s? In this essay, I examine the directions AI research might take and the requirements they impose, concluding with an overview of what I believe to be two strong candidates: JAX and S4TF.

  8. Retweeted
    Jan 22

    1/5 In one of my Residency projects we used CNNs to reparameterize structural optimization (w/ ). Our approach worked best on 99/116 structures. I just finished a blog post with GIFs, visualizations, and links to code + Colab.

  9. Jan 20

    Flax is worth taking a look at if you're interested in training neural nets in JAX. It's definitely a big step up from Stax!

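    For anyone wondering what that looks like, here is a minimal sketch of defining, initializing, and applying a model with Flax (using the current flax.linen API; the API at the time of this tweet was different). The two-layer MLP is just an arbitrary example.

        import jax
        import jax.numpy as jnp
        import flax.linen as nn

        class MLP(nn.Module):
            features: int

            @nn.compact
            def __call__(self, x):
                x = nn.Dense(self.features)(x)
                x = nn.relu(x)
                return nn.Dense(1)(x)

        model = MLP(features=64)
        x = jnp.ones((4, 8))                           # batch of 4 inputs
        params = model.init(jax.random.PRNGKey(0), x)  # initialize parameters
        y = model.apply(params, x)                     # forward pass
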
  10. Retweeted
    Jan 7

    I summarized some of my thoughts on grant-based funding of open source software in a new blog post: Don't fund software that doesn't exist. cc

  11. Dec 19, 2019

    To give a little more context: a likely focus would be using deep learning inside numerical methods for solving large scale PDE problems, particularly for computational fluid dynamics.

  12. Retweeted
    Dec 19, 2019

    By the way, if you're excited about working on AI+scientific computing at Google, please reach out! I am looking to hire a PhD student intern this summer. We also have some great programs for visiting faculty & postdocs, as well as the AI residency program.

  13. Dec 19, 2019

    By the way, if you're excited about working on AI+scientific computing at Google, please reach out! I am looking to hire a PhD student intern this summer. We also have some great programs for visiting faculty & postdocs, as well as the AI residency program.

  14. Retweeted
    Dec 19, 2019
    Replying to

    Couldn't help re-running my benchmarks on TPU (just using one chip, though):

  15. Dec 19, 2019

    More broadly: I think the new paradigm of deep learning + auto-diff + accelerators has the potential to transform scientific computing. JAX is a decent platform for this already, and we're looking forward to making it even better!

  16. Dec 19, 2019

    Also note that the simulation currently isn't differentiable -- but that would be straightforward to add with the adjoint method. (You would not want to use naive back-propagation, because you would quickly run out of memory.)

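    JAX's ODE integrator is a concrete example of this trade-off: jax.experimental.ode.odeint computes reverse-mode gradients via the adjoint method (solving an augmented ODE backwards) rather than storing every intermediate solver state. The toy dynamics below are only for illustration.

        import jax
        import jax.numpy as jnp
        from jax.experimental.ode import odeint

        def dynamics(y, t, k):
            # Toy linear ODE dy/dt = -k * y, standing in for a real simulation.
            return -k * y

        def final_state(k):
            y0 = jnp.ones(3)
            ts = jnp.linspace(0.0, 10.0, 1000)  # many output times
            ys = odeint(dynamics, y0, ts, k)
            return jnp.sum(ys[-1])

        # The gradient w.r.t. k comes from the adjoint solve, not from
        # back-propagating through each stored integration step.
        grad_k = jax.grad(final_state)(2.0)
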
  17. Dec 19, 2019

    Performance is currently roughly comparable to running on GPUs at the same cost point, but note that we aren't yet making use of the TPU's matrix multiplication core at all. That leaves a lot of performance on the table, e.g., for hybrid deep learning models!

  18. Dec 19, 2019

    JAX now supports Google Cloud TPUs! I contributed this example, solving a 2D wave equation with a spatially partitioned grid. The code is remarkably simple and all in pure Python!

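    A single-device version of the core update really is just a few lines of NumPy-style JAX. The sketch below shows a leapfrog step for the 2D wave equation with periodic boundaries; the actual example additionally partitions the grid across TPU cores and exchanges halos, which is omitted here, and all sizes and constants are arbitrary.

        from functools import partial
        import jax
        import jax.numpy as jnp

        def laplacian(u, dx):
            # 5-point finite-difference Laplacian with periodic boundaries.
            return (jnp.roll(u, 1, 0) + jnp.roll(u, -1, 0)
                    + jnp.roll(u, 1, 1) + jnp.roll(u, -1, 1) - 4 * u) / dx**2

        @partial(jax.jit, static_argnames='steps')
        def simulate(u, u_prev, c, steps, dx=0.1, dt=0.01):
            # Leapfrog time-stepping of u_tt = c^2 * (u_xx + u_yy).
            def step(carry, _):
                u, u_prev = carry
                u_next = 2 * u - u_prev + (c * dt) ** 2 * laplacian(u, dx)
                return (u_next, u), None
            (u, _), _ = jax.lax.scan(step, (u, u_prev), None, length=steps)
            return u

        u0 = jnp.zeros((128, 128)).at[64, 64].set(1.0)  # point source
        u = simulate(u0, u0, 1.0, steps=100)
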
  19. Retweeted
    Dec 17, 2019

    A new blog post which describes 5 different ways to take advantage of the new CMIP6 data archive.

  20. Retweeted
    Dec 13, 2019

    Wow, JAX is amazing. Thanks for introducing me to it. It's essentially NumPy on steroids: parallel functions, GPU support, autodiff, JIT compilation, deep learning.

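    That framing is easy to demonstrate: the same plain NumPy-style function can be differentiated, vectorized over a batch, and JIT-compiled for CPU/GPU/TPU just by composing transformations. This is a generic illustration, not code from the quoted tweet.

        import jax
        import jax.numpy as jnp

        def predict(w, x):
            # Ordinary NumPy-style code: a logistic model.
            return jax.nn.sigmoid(jnp.dot(x, w))

        loss = lambda w, x, y: jnp.mean((predict(w, x) - y) ** 2)
        grad_fn = jax.grad(loss)                          # autodiff
        batched = jax.vmap(predict, in_axes=(None, 0))    # auto-vectorization
        fast_grad = jax.jit(grad_fn)                      # XLA compilation

        w = jnp.zeros(3)
        x = jnp.ones((10, 3))
        y = jnp.ones(10)
        print(batched(w, x).shape)   # (10,)
        print(fast_grad(w, x, y))    # gradient of the loss w.r.t. w
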
  21. Retweeted
    Dec 14, 2019

    Very compelling talk on implementing molecular dynamics with JAX. I think the general strategy of upgrading our simulations to include autodiff (and probprog) will be a major theme of the next 5 years. Those points apply equally well to HEP.

