Sometimes (I'll call this case A) the authors even send us their code; we dig into it and discover the results are being driven by complex things that are not in the paper/spec!
-
Other times, and more often (case B), we never get hold of the original codebase AND the original spec/paper doesn't have enough information to replicate the results.
-
In case A, it's a relief in some ways, as we can pinpoint what drives the results and why the model works. We can then take the implementation detail that drives the results and elevate it to the model/spec level, so it goes from an "unimportant" detail to an important aspect of the model.
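To make that concrete, here is a toy sketch of the kind of gap we find between spec and code. It's entirely my own hypothetical example (the delta rule, the decay term, and all names are invented, not taken from any particular paper):

```python
# Hypothetical example: the paper's spec describes a plain delta-rule
# update, but the released code quietly adds a decay term. If the decay
# term is what drives the results, it should be elevated to the spec
# as a named model assumption.

def update_as_specified(w: float, error: float, lr: float = 0.1) -> float:
    """The update rule as written in the paper."""
    return w + lr * error

def update_as_coded(w: float, error: float, lr: float = 0.1,
                    decay: float = 0.95) -> float:
    """What the code actually does: an undocumented weight decay."""
    return decay * w + lr * error
```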
-
In case B, it's just a nightmare, as often we cannot find out what is driving the original published modelling results. That's a bad state for science to be in: what looked like a step forward (a useful model) just wasn't.
-
So we cannot rely on the original codebase to judge whether a model is reproducible; we have to rewrite the code. The original codebase is useful in cases where a spec isn't written, but ideally authors/modellers should have one clearly stated in the publication...
-
because even with the original codebase, the effort required to fish out what drives the results can be prohibitive, even impossible, for another party.
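For a sense of what that "fishing out" involves, a minimal sketch (all names hypothetical, and `run_model` is a stand-in for the real simulation) of the kind of ablation we end up running: toggle each suspect implementation detail and see which one moves the key statistic.

```python
# Toy ablation sketch: re-run the model with each undocumented detail
# switched on/off and compare the key statistic across configurations.
from itertools import product

def run_model(use_decay: bool, use_clipping: bool) -> float:
    # Placeholder simulation: in this toy, only the decay term matters.
    return 0.90 if use_decay else 0.12

for use_decay, use_clipping in product([True, False], repeat=2):
    stat = run_model(use_decay, use_clipping)
    print(f"decay={use_decay!s:5} clipping={use_clipping!s:5} -> {stat:.2f}")
```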
-
The spec is needed most of all. The original code should of course be released too, but it should be understood to be of limited help in evaluating the modelling account.
-
By spec here I mean the materials released with the model, mainly the journal article. If the journal article is not written well enough for someone to replicate (re-implement) the model, that's a bit of a bad sign.
-
I hope this has shed some light on how some modellers (of course I cannot speak for all) evaluate their models and do modelling, and how #openscience and #reproducibility fit into this picture. Please feel free to ask any questions!
-
Do you think there could be value in a formal specification language for models in your field, in the same way that Unified Modeling Language (UML) is used for industrial software models – a level of abstraction between prose and code?
-
I think it's complicated, but things like this have been attempted a few times; cognitive architectures, for example, would or could fit that bill.
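As a toy illustration of what such a level of abstraction could look like (my own sketch, not something proposed in the thread), here is a declarative spec where every behaviour-driving assumption is a named, documented field rather than a buried code detail. All field names and values are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelSpec:
    """A hypothetical declarative model spec: prose-level meaning,
    code-level precision."""
    update_rule: str      # e.g. "delta_rule"
    learning_rate: float  # step size of the update
    weight_decay: float   # the kind of detail that often hides in code
    noise_sd: float       # trial-to-trial response noise

spec = ModelSpec(update_rule="delta_rule", learning_rate=0.1,
                 weight_decay=0.95, noise_sd=0.05)
```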