Most of the underlying methods used in modern computational math (incl. machine learning) were invented well before the digital computer.
Replying to @EmilyGorcenski
Runge-Kutta methods--RK4 being the bread-and-butter ODE solver--were invented in the late 1800s/early 1900s.
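A minimal sketch of a single classic RK4 step, assuming a scalar ODE y' = f(t, y); the function name and test problem are mine:

```python
# One fourth-order Runge-Kutta (RK4) step of size h for y' = f(t, y).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Usage: integrate y' = y from t=0 to t=1 in 10 steps; exact answer is e.
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y)  # ≈ 2.71828
```

The fourth-order accuracy means halving the step size cuts the error by roughly a factor of 16, which is why RK4 became the default workhorse long before computers existed to run it.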
This is just one example. My favorite, however, is super relevant to modern computing.
No general, closed-form (non-iterative) algorithm exists to compute the eigenvalues of a real or complex matrix larger than 4x4.
This basic fact is the Achilles heel of a lot of modern data science. Eigenvalue methods are everywhere, and all rely on costly iteration.
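To illustrate what "costly iteration" looks like, here is a hedged sketch of power iteration, the simplest of these methods. Production solvers (e.g. the QR algorithm in LAPACK, or Arnoldi/Lanczos methods) are far more sophisticated, but they are all iterative too; the function name and example matrix below are mine:

```python
import math

def power_iteration(A, iters=100):
    """Estimate the dominant eigenvalue of a square matrix A (list of lists).

    Repeatedly applies A to a vector; the vector converges toward the
    eigenvector of largest-magnitude eigenvalue, and the Rayleigh
    quotient converges toward that eigenvalue.
    """
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        # w = A @ v
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        # Rayleigh quotient v^T (A v), with v unit-length
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(v[i] * Av[i] for i in range(n))
    return lam

# Usage: [[2, 0], [0, 1]] has eigenvalues 2 and 1; iteration converges to 2.
print(power_iteration([[2.0, 0.0], [0.0, 1.0]]))  # ≈ 2.0
```

Note that the answer is only ever approximate: each iteration shrinks the error, but no finite number of arithmetic operations and radicals can produce the exact eigenvalues in general once the matrix exceeds 4x4.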
The proof of this, believe it or not, is an *immediate* corollary of the Abel-Ruffini theorem, from 1824 (proved more elegantly by Galois): eigenvalues are the roots of the degree-n characteristic polynomial, and no general solution in radicals exists for polynomials of degree 5 or higher.
That's right. In 18-friggin-24 we came up with a proof that has major consequences on how we do data science in 2017.
Most modern progress has been continuous, incremental improvement on these traditional methods. But new math is needed.
The explosion in data science hasn't been due to technical reasons so much as it has been due to cost and availability.
nah, just window dressing on stuff Norbert Wiener came up with in the '30s.