I'm not 100% certain either. Thus this discussion. Very helpful.
Replying to @GunnarBlohm @tyrell_turing and
I don't know all the details but maybe the LIP ramps vs steps debate would be a good case study for this. You could argue the two "sides" had motivation to find that their model fit better. So preregistration may have been appropriate.
Replying to @neurograce @tyrell_turing and
Yes I agree! And that's a great example!
Replying to @GunnarBlohm @neurograce and
It is a great example! But it's a very different modelling goal than the one I (and many modellers I work with) are usually interested in.
Replying to @tyrell_turing @neurograce and
Interesting to see the difference in fields!
Replying to @GunnarBlohm @neurograce and
Indeed. To take a counter example: within machine learning I would see no need to preregister a model. Just provide the code/data required to test it - if others find that the results are easily broken by small tweaks to the initialization then they know it's not robust.
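The robustness check described in the tweet above can be sketched in code. This is a minimal illustrative example, not anything from the conversation: a toy least-squares fit stands in for a real training run, and the only seed-dependent part is the random draw, mimicking a seed-dependent initialization. The idea is simply to re-run the same pipeline under several seeds and look at the spread of the results.

```python
import numpy as np

def train_toy_model(seed, n_samples=200):
    """Fit a least-squares line to noisy data; only the random draw
    (standing in for weight initialization) depends on the seed."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_samples)
    y = 2.0 * x + rng.normal(scale=0.1, size=n_samples)
    slope, intercept = np.polyfit(x, y, 1)
    return slope

# Re-run the identical pipeline under several seeds and inspect the spread.
scores = [train_toy_model(seed) for seed in range(10)]
spread = max(scores) - min(scores)
print(f"slope estimates across seeds: spread = {spread:.3f}")
# A small spread suggests the reported result is robust to initialization;
# a large spread suggests it may be an artifact of a lucky seed.
```

For a real model one would vary the actual initialization (and any other stochastic choices) the same way; releasing the code and data lets anyone perform this check themselves.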
Replying to @tyrell_turing @neurograce and
Ha! I totally agree! But I'm talking about neuroscience models. Sorry if I didn't specify.
Replying to @GunnarBlohm @neurograce and
Well, I know you are, but I think that some neuroscience models kinda straddle the goals of ML. Let's use a classic example: would it have helped the Olshausen & Field (1996) paper on sparse codes if they had preregistered? They were mostly trying to provide theoretical insight.
Replying to @tyrell_turing @neurograce and
Yes I agree. Will have to think about which classes of models should or shouldn't be pre-registered.
Replying to @GunnarBlohm @neurograce and
Yeah, this has been an interesting conversation! I'm now convinced that when the modelling goal is explaining a specific experimental result (like LIP ramps) something like prereg could be beneficial, in order to avoid a race of tweaking to get the best fit.
One thing though: code and data are not enough. You need to release the paper/spec too. I think it was implicit in your tweet further up, but just making sure. I gave a talk at ICML on this; the abstract elaborates here: https://figshare.com/articles/Varieties_of_Reproducibility_in_Empirical_and_Computational_Domains/6818018
Replying to @o_guest @tyrell_turing and
Also, just the title of this should clarify my views, but I've a strong feeling you already agree. http://dx.doi.org/10.1016/j.cogsys.2013.05.001
Replying to @o_guest @tyrell_turing and
Yes the full meaning of open science applies!
End of conversation