Instead of trying to be really certain that a surprising effect is true, I like the goal of trying to explain things that we already find interesting (how people learn language, or how people have social interactions). "How does it work?" is often the most interesting question.
-
Here's one: Published paper with model + predictions. Submitted follow-up paper: test predictions with 4 EEG experiments + revise model to bring it into alignment with the new data. Review: Why don't you test the new model to make sure it's correct?
-
This reflects a continuing obsession with seeking “truth”. In your example we already have two rounds of learning: we learn something when the model misses the data, and we also learn from how we have to modify the model to handle those data.
-
Agreed. It’s about extracting information such that we learn more & more over time. We may aim for truth in the long run, but we cannot in any way guarantee it in the short run. Moreover, enforcing ‘truth’ in the short run IMO limits us to uncovering nothing more than simple/shallow truths.
-
Yep, that's it exactly. Moreover, the idea that I could "verify" a complex model by running an experiment or two is incredible. There's no way to tick a box that says the model is right.
-
But you are supposed to be able to disconfirm aspects of a model with experiments, right? I mean, at each round one further possibility is killed off. At some point you're reasonably close to “truth”, no?
-
I guess it just feels odd to me that folks would be frustrated with reviewers for “being obsessed with truth”.
-
It's fine to pursue truth, just don't expect that you'll be able to reach it in a categorical sense (ever, much less in a single paper). Yes, you can disconfirm parts of a model, but they were asking for validation of the entire thing.
-
This. One caveat, though: it can be appropriate to ask for more work if a new assumption seems arbitrary, or if the authors are arguing for model X rather than “model X with this assumption that makes things work but we aren’t sure about yet and will need further testing”.
- 6 more replies
New conversation -
-
I once submitted a paper with a necessary (new) stat method to properly evaluate the main claim. The chief editor called me on the phone and said, "I like your paper, but please remove the stats and just do ANOVAs. If the reviewers don't see ANOVAs they'll reject the paper."
- End of conversation