It is disappointing, but not really unexpected, to see this take on the epidemic simulation code release: https://lockdownsceptics.org/code-review-of-fergusons-model/ … If we accept it at face value, we have a retired software engineer making the case that non-determinism in a simulation shows incompetence. I am all for deterministic-everything, but issues with random seeds and parallel floating point reductions are extremely widespread, and a great deal of science gets done with non-deterministic simulations. Heck, professional software engineering struggles mightily with just making completely reproducible builds. I take no position on the mechanics of the simulation or the conclusions drawn from the runs, but implicitly encouraging scientists to keep their code secret if it isn't "perfect" is damaging.
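As an aside for readers unfamiliar with the floating point issue: the sketch below is a toy example of my own, not anything from the thread or from the model under review. It shows that summing the same numbers in a different order, which is exactly what a parallel reduction can do from one run to the next, yields slightly different totals.

```python
# A toy demonstration (not from the model or the thread) of order-dependent
# floating-point summation. Addition of floats is not associative, so any
# change in reduction order, such as a parallel reduction scheduling chunks
# differently between runs, changes the total slightly.
import random

random.seed(1)  # fixed seed so this particular demo is repeatable
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(100_000)]

total_in_order = sum(values)      # one fixed summation order

reordered = values[:]             # the same numbers...
random.shuffle(reordered)         # ...summed in a different order,
total_reordered = sum(reordered)  # as a parallel reduction might do

print(total_in_order, total_reordered, total_in_order - total_reordered)
# Typically a tiny but nonzero difference: enough to nudge a chaotic
# simulation onto a different trajectory without anyone having made a mistake.
```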
Replying to @ID_AA_Carmack
There's nothing wrong with Monte Carlo simulations, but you have to actually *do* Monte Carlo simulations: the program runs its model many times and takes the average itself. It doesn't ask the user to do that!
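For illustration only, here is a minimal sketch of what "doing the Monte Carlo inside the program" could look like. The toy_epidemic_run function is a hypothetical stand-in, not the real model; the structure, seeding and running the model many times and averaging internally, is the point.

```python
# A minimal sketch of Monte Carlo averaging done inside the program.
# toy_epidemic_run is a made-up stand-in and bears no relation to the
# actual code under review.
import random
import statistics

def toy_epidemic_run(rng: random.Random, days: int = 60) -> float:
    """Hypothetical single stochastic run; returns a final 'case count'."""
    cases = 1.0
    for _ in range(days):
        cases *= 1.0 + rng.gauss(0.05, 0.02)  # noisy daily growth
    return cases

def monte_carlo(n_runs: int = 1000, base_seed: int = 42):
    """Run the model n_runs times with distinct seeds; report mean and spread."""
    results = [toy_epidemic_run(random.Random(base_seed + i)) for i in range(n_runs)]
    return statistics.mean(results), statistics.stdev(results)

if __name__ == "__main__":
    mean, spread = monte_carlo()
    print(f"mean final cases: {mean:.2f}  (run-to-run spread: {spread:.2f})")
```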
The deeper problem, though, is that this sort of modeling just can't be much good: it's too sensitive to its inputs, and those are too uncertain. That's why there are so many flaky models: the whole field repels people who want to really nail things down.
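To put a rough number on the sensitivity claim, here is a toy example of my own (plain compound growth, not the model itself): shifting the assumed daily growth rate by a couple of percentage points moves the 90-day projection by well over an order of magnitude.

```python
# A toy illustration of input sensitivity (my own example, not the model):
# with simple compound growth, nudging the assumed daily growth rate by a
# couple of percentage points swings the 90-day projection enormously.
def projected_cases(daily_growth_rate: float, days: int = 90, initial: float = 100.0) -> float:
    return initial * (1.0 + daily_growth_rate) ** days

for rate in (0.08, 0.10, 0.12):  # inputs that all look plausible
    print(f"growth {rate:.2f}/day -> {projected_cases(rate):,.0f} cases after 90 days")
# roughly 102,000 vs 531,000 vs 2,690,000: small input changes, huge output changes.
```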