Most of the underlying methods used in modern computational math (incl. machine learning) were invented well before the digital computer.
-
-
-
Runge-Kutta methods--RK4 being the bread-and-butter ODE solver--were invented in the late 1800s/early 1900s.
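The classical RK4 method mentioned above is short enough to sketch from scratch. This is a minimal, generic implementation (the function and test problem are my own illustration, not from the thread):

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta (RK4) step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, n):
    """Integrate from t0 to t1 in n fixed RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Test problem: dy/dt = y with y(0) = 1, whose exact solution at t = 1 is e.
approx = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 100)
```

With 100 steps, RK4's fourth-order accuracy puts the result within roughly 1e-10 of e, which is why a method from circa 1900 is still the default workhorse.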
-
This is just one example. My favorite, however, is super relevant to modern computing.
-
No general, non-iterative algorithm exists to exactly compute the eigenvalues of a real or complex matrix larger than 4x4.
-
This basic fact is the Achilles' heel of a lot of modern data science. Eigenvalue methods are everywhere, and all rely on costly iteration.
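To make "costly iteration" concrete, here is power iteration, one of the simplest iterative eigenvalue methods: repeatedly apply the matrix and renormalize, converging toward the dominant eigenpair. (The example matrix is my own; this is a sketch, not what production libraries like LAPACK actually run, which is typically a QR-type iteration.)

```python
import math

def power_iteration(A, iters=500):
    """Approximate the dominant eigenvalue/eigenvector of A by
    repeatedly applying A and normalizing. Converges when A has a
    unique eigenvalue of largest absolute value."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        # Rayleigh quotient v^T A v gives the eigenvalue estimate.
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n))
                  for i in range(n))
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]  # symmetric; eigenvalues are 3 and 1
lam, v = power_iteration(A)
```

Note there is no closed-form shortcut in the loop: the answer only emerges in the limit, which is exactly the point of the thread.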
-
This fact, believe it or not, is an *immediate* corollary of the Abel-Ruffini theorem from 1824 (proved more elegantly by Galois): eigenvalues are roots of the characteristic polynomial, and for an nxn matrix that polynomial has degree n, with no general solution in radicals once n > 4.
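Since no radical formula exists for the roots of a general degree-5 polynomial, root-finders must iterate too. A small sketch with Newton's method on x^5 - x - 1, a standard example of an unsolvable-in-radicals quintic (the polynomial choice and helper names are mine, for illustration):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: iterate x <- x - f(x)/df(x) until the step
    is below tol. No finite formula, just successive approximation."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**5 - x - 1      # quintic with no root expressible in radicals
df = lambda x: 5 * x**4 - 1     # its derivative
root = newton(f, df, 1.5)
```

Finding the eigenvalues of a generic 5x5 matrix is, in effect, this same problem, which is why every exact-arithmetic shortcut is ruled out.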
-
That's right. In 18-friggin-24 we came up with a proof that has major consequences for how we do data science in 2017.
-
Most modern progress has been continuous, incremental refinement of these traditional methods. But new math is needed.
-
The explosion in data science hasn't been driven by algorithmic breakthroughs so much as by cost and availability.
-
We finally have the ability to gather, transmit, and store data at reasonable cost. The algorithms? Most are older than their practitioners.