Chris Rackauckas

@ChrisRackauckas

Applied Mathematics Instructor at MIT. Lead developer of DifferentialEquations.jl.

Joined: January 2009

Tweets


  1. Retweeted
    Jan 30

    Pro of academia: I have friends all around the world! Con of academia: My friends are scattered all around the world.

  2. Jan 28
  3. Retweeted
    Jan 22

    In this paper, I describe how Julia and multiple dispatch allow us to propagate arbitrary multivariate probability distributions through any function, accelerated by SIMD or a GPU

  4. Jan 16

    Added an example to the DiffEqFlux README of training a universal neural ODE to a loss of 4e-11 using Optim.jl and BFGS. Takes about 200 seconds on my laptop, so give it a try!

  5. Jan 14

    For more information on the mathematical background of these data-efficient techniques, see the 18.337 course notes. For examples of how to use it in applications, see the 18.S096 course notes.

  6. Jan 14

    Thanks to everyone for sending me down this route of thinking about ML in the context of differential equations. I think white-box physics-informed models are an interesting thing that every scientist should be exploring.

  7. Jan 14

    In total, we utilize as much prior information about scientific models as possible so that ML can be applied efficiently. In the context of science, the well-known adage "a picture is worth a thousand words" might well be "a model is worth a thousand datasets."

  8. Jan 14

    In our full paper we describe how to accelerate models 15,000x through parameterizations automatically derived through universal PDEs, and how 100-dimensional PDEs like Hamilton-Jacobi-Bellman can be reduced to placing neural nets in an adaptive SDE solver.

  9. Jan 14

    We have just released an overhaul of our adjoint methods, which includes methods like stabilized, checkpointed interpolating adjoints that were created specifically to handle some of the universal PDE training problems found in our paper.

  10. Jan 14

    Thus we spent a lot of time making sure the code works in this new performance regime of high computational cost but (possibly) small differential equations with small neural networks, lots of scalar operations, high stiffness, etc.

  11. Jan 14

    However, going in this direction meant there were a lot of computational issues we had to solve. For example, while traditional neural ODEs usually end up being stable, a diffusion-advection equation is unconditionally unstable when run backwards!

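That instability can be seen in a few lines. The following is a sketch in Python/NumPy (not the Julia tooling this thread is about): discretize the 1D heat equation with the standard [1 -2 1] stencil, then integrate it forward and backward in time with Euler steps. The forward problem smooths the data; the time-reversed problem amplifies high-frequency noise explosively.

```python
import numpy as np

# Discrete 1D Laplacian (pure diffusion) on n interior points, stencil [1 -2 1]/dx^2.
n, dx, dt = 32, 1.0 / 33, 1e-4
L = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

# Smooth initial condition plus a tiny bit of noise.
u0 = np.sin(np.pi * dx * np.arange(1, n + 1)) \
     + 1e-6 * np.random.default_rng(0).standard_normal(n)

def euler(u, A, steps):
    for _ in range(steps):
        u = u + dt * (A @ u)  # explicit Euler step for u' = A u
    return u

forward = euler(u0, L, 200)    # heat equation: every mode decays
backward = euler(u0, -L, 200)  # time-reversed: high-frequency noise explodes

print(np.linalg.norm(forward), np.linalg.norm(backward))
```

The reversed operator -L has large positive eigenvalues, so each step multiplies the highest-frequency components by roughly 1.43 here; after 200 steps the noise dominates by many orders of magnitude, regardless of step size.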
  12. Jan 14

    The neural networks are then interpretable, since the physical forms of differential equations have meanings. A convolutional neural network representing a PDE with the stencil [1 -2 1] means the data only has diffusion and no advection. A quadratic reaction term predicts regulation.

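Why a [1 -2 1] kernel reads as "diffusion only": that stencil is the central-difference approximation of the second derivative u_xx, while an advection term u_x corresponds to an antisymmetric stencil like [-1 0 1]/(2 dx). A small NumPy check (illustrative, not the paper's code):

```python
import numpy as np

# Sample a smooth field u(x) = sin(2*pi*x) on a fine grid.
dx = 1e-3
x = np.arange(-0.5, 0.5, dx)
u = np.sin(2 * np.pi * x)

# Symmetric stencil [1, -2, 1]/dx^2: central difference for u_xx (diffusion).
diff_stencil = np.array([1.0, -2.0, 1.0]) / dx**2
u_xx = np.convolve(u, diff_stencil, mode="valid")  # interior points only

# Compare with the exact second derivative, -(2*pi)^2 * sin(2*pi*x).
exact = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x[1:-1])
print(np.max(np.abs(u_xx - exact)))  # O(dx^2) discretization error
```

So a learned convolutional kernel that comes out close to [1, -2, 1] is directly interpretable as a diffusion operator acting on the data.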
  13. Jan 14

    The idea is to utilize these function approximators to parameterize missing parts of models, and then train them in the scientific context so they only learn "what you forgot to model" or what you simplified out. Tiny neural nets with maybe 100 parameters will do.

  14. Jan 14

    This is a structure that we call the Universal Differential Equation: a differential equation with embedded universal approximators. Sometimes neural networks, sometimes Chebyshev polynomials, for us it really doesn't matter because they are small approximators.

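The structure of a universal differential equation can be sketched in a few lines. This is a hypothetical Python/NumPy illustration (not the DiffEqFlux API): the right-hand side mixes a known mechanistic term with a tiny untrained MLP standing in for the part that training would recover.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny universal approximator: a one-hidden-layer MLP, ~30 parameters.
W1, b1 = rng.standard_normal((8, 2)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal((1, 8)) * 0.1, np.zeros(1)

def nn(u):
    return (W2 @ np.tanh(W1 @ u + b1) + b2)[0]

# Universal differential equation: known physics plus a learned correction.
def f(u):
    du0 = u[0] * (1.0 - u[0]) + nn(u)  # mechanistic logistic term + neural term
    du1 = -0.5 * u[1]                  # fully known decay, no correction needed
    return np.array([du0, du1])

def rk4(f, u, dt, steps):
    # Classic fixed-step RK4; a real workflow would use an adaptive solver.
    for _ in range(steps):
        k1 = f(u); k2 = f(u + dt/2*k1); k3 = f(u + dt/2*k2); k4 = f(u + dt*k3)
        u = u + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return u

u_end = rk4(f, np.array([0.1, 1.0]), 0.01, 500)  # integrate to t = 5
print(u_end)
```

Training would fit the MLP's weights so that trajectories of this system match data, while the mechanistic terms stay fixed; the approximator only has to capture the residual, which is why it can stay tiny.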
  15. Jan 14

    Our approach builds upon prior work, but identifies that the differential equation one works with does not have to be a blackbox; instead it can utilize all of the available scientific models to encode as much prior information as possible.

  16. Jan 14

    How To Train Interpretable Neural Networks That Accurately Extrapolate From Small Data. Today we released a new paper that showcases how to do just that, using Scientific Machine Learning techniques to encode non-data scientific information.

  17. Jan 4

    Check out FiniteDiff.jl: it's the next evolution of what was known as DiffEqDiffTools, now generalized into a library for everyone to use. Fast and non-allocating on sparse and structured matrices. Supports GPUs and StaticArrays.

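The core scheme behind such finite-differencing libraries can be sketched in a few lines. This is a simplified Python illustration, not FiniteDiff.jl's actual API; the real package adds adaptive step sizes, non-allocating in-place variants, and sparsity/coloring support.

```python
import numpy as np

def fd_gradient(f, x, eps=None):
    """Central-difference gradient of a scalar function f at point x."""
    # cbrt(machine eps) ~ 6e-6 balances truncation vs. roundoff error
    # for central differences (a common rule of thumb).
    eps = eps or float(np.cbrt(np.finfo(float).eps))
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)  # perturb one coordinate
    return g

f = lambda x: np.sum(x**2)        # analytic gradient is 2x
x = np.array([1.0, -2.0, 3.0])
print(fd_gradient(f, x))          # close to [2, -4, 6]
```

Jacobians work the same way column by column; exploiting sparsity (as the tweet mentions) means perturbing many structurally independent columns at once.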
  18. Retweeted
    Jan 2

    Beginning-of-the-year results so far: what's new that excites you the most?

  19. Dec 31, 2019

    Ever wondered how to make ODE solvers satisfy conservation laws? Here's an in-depth StackOverflow post on making ODE solvers energy conservative (with examples):

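One standard route to energy conservation (not necessarily the one in the linked post) is symplectic integration. A minimal Python sketch on the harmonic oscillator: explicit Euler inflates the energy every step, while symplectic Euler keeps it bounded indefinitely.

```python
import numpy as np

# Harmonic oscillator: H(q, p) = (q^2 + p^2) / 2, so q' = p, p' = -q.
def explicit_euler(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q   # both updates use the old state
    return q, p

def symplectic_euler(q, p, dt, steps):
    for _ in range(steps):
        p = p - dt * q   # update momentum with the current position...
        q = q + dt * p   # ...then position with the *new* momentum
    return q, p

H = lambda q, p: 0.5 * (q**2 + p**2)
q0, p0, dt, steps = 1.0, 0.0, 0.01, 10_000   # integrate to t = 100

qe, pe = explicit_euler(q0, p0, dt, steps)
qs, ps = symplectic_euler(q0, p0, dt, steps)
print(H(qe, pe))  # drifts upward: each step multiplies H by (1 + dt^2)
print(H(qs, ps))  # stays within O(dt) of the true H = 0.5
```

Symplectic methods exactly conserve a nearby "shadow" Hamiltonian, so the energy error stays bounded for all time instead of accumulating, which is why solver suites ship dedicated symplectic integrators for Hamiltonian systems.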
  20. Dec 19, 2019

    DifferentialEquations.jl is really close to 1k stars!!!! Help us rally?
