Do you get to see the author list and affiliations? If so, how much do you think this affects your objective scoring based on the science described in the abstract alone? (like drinking wine with the bottle labels covered...)
-
-
-
Yes, you can see both. I'm sure it affects you, but variance in abstract quality is higher than I'd expect from publications. And since it's a small enough community and you're assigned abstracts based on topic, I could probably guess most (corresponding) authors anyway, honestly.
-
So it matters, but it actually doesn't really matter...?
-
I don't know that double-blinding would be very effective. I'm sure knowing the authors matters, but I would guess it biases scores only weakly. Probably nonlinear, though, with very big biases for the highest-IF labs.
-
I am less concerned about effectiveness and more about fairness. Imagine writing such replies to the referees of one of your own papers: we preferred not to do double-blind review because variance is already high (whatever that means) and we would probably be able to tell anyway. Just trust us.
-
It can't be fair if it's ineffective. But maybe it would be effective; who knows. I certainly think it would make sense to try (and, as @TrackingActions points out, to remove the ability to see other reviewers' scores). I suspect the biggest variable is "I recognize this work already" vs. not.
End of conversation
New conversation -
-
-
Co-sign, as it were. This is why it is good to involve junior people in reviews at every level (manuscript review, abstract review, etc.)
-
I kind of wish they just allowed (or asked) everyone who submits an abstract to review two abstracts (and then maybe had another set of reviewers review many more). It would be so helpful for everyone involved.
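For what it's worth, a minimal sketch of how that assignment could work, assuming a simple shuffled-ring scheme (the scheme, names, and data are illustrative assumptions, not an actual Cosyne mechanism; a real assignment would also match by topic and screen for conflicts of interest):

```python
import random

def assign_reviews(abstract_ids, reviews_per_person=2):
    # Shuffle submissions into a ring; each submitter reviews the next
    # k abstracts around the ring. This guarantees nobody reviews their
    # own work (for n > k) and every abstract gets exactly k reviews.
    ring = abstract_ids[:]
    random.shuffle(ring)
    n = len(ring)
    return {
        ring[i]: [ring[(i + j) % n] for j in range(1, reviews_per_person + 1)]
        for i in range(n)
    }

# e.g. {'abs_03': ['abs_01', 'abs_04'], ...}
print(assign_reviews(["abs_01", "abs_02", "abs_03", "abs_04"]))
```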
-
This, but for #SFN2019 session assignments. Don't make abstracts due six months early for "session selection". Instead, do self-sessioning based on historical topics (plus the occasional new one), make abstracts due 3 months ahead, and skip platform presentations. DONE.
End of conversation
New conversation -
-
-
Please share notes/thoughts on writing better abstracts!
-
2. If the abstract is classic systems neuroscience (behavior + recordings/perturbations), make sure you clearly describe the behavior; if I can't understand the task, I can't adequately evaluate the rest of the abstract.
-
To add to Jeff's points: 3. A lot of people are surprisingly bad at emphasizing what their results actually are, and how they are new. 4. There's often too much of the wrong detail and not enough of the right detail. Not sure how to be more specific than that, but many abstracts are overwritten.
End of conversation
New conversation -
-
-
Yes, scores get z-scored, as far as I know. At least they used to...
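As a rough illustration of what per-reviewer z-scoring means (the data and helper below are hypothetical, not the actual Cosyne pipeline): each reviewer's raw scores are standardized against that reviewer's own mean and spread, so a harsh reviewer's 6 and a generous reviewer's 8 become comparable before averaging.

```python
import numpy as np
from collections import defaultdict

# Illustrative only: hypothetical raw scores, two reviewers.
raw_scores = {
    "reviewer_a": {"abs_01": 7, "abs_02": 9, "abs_03": 5},
    "reviewer_b": {"abs_01": 4, "abs_02": 6, "abs_03": 5},
}

def zscore(scores):
    # Standardize one reviewer's scores by their own mean and std.
    vals = np.array(list(scores.values()), dtype=float)
    mu, sd = vals.mean(), vals.std()
    if sd == 0:  # reviewer gave identical scores everywhere
        return {k: 0.0 for k in scores}
    return {k: (v - mu) / sd for k, v in scores.items()}

# An abstract's final score is then the mean of its z-scores
# across the reviewers who saw it.
per_abstract = defaultdict(list)
for reviewer, scores in raw_scores.items():
    for abstract, z in zscore(scores).items():
        per_abstract[abstract].append(z)

final = {a: float(np.mean(zs)) for a, zs in sorted(per_abstract.items())}
print(final)
```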
-
-
-
My biggest takeaway from reviewing last year was that some abstracts really don't do a good job representing the underlying science: they are so brief, no figures, etc. It feels bad, but for the non-clearcut cases I often felt like I was reviewing abstract quality, not science quality.
-
-
-
Yes, they do z-score.
-
-
-
You sound like you will make a good reviewer, so I hope you read mine (not trying to bias you, lol). And share those tips on writing better abstracts!
-
-
-
Not sure about Cosyne, but reviewer scores for NSF grants (at least for the Graduate Research Fellowship) are z-weighted according to faculty. Also, first-year PhD students in my program do mock NIH grant reviews of each other's project proposals. Super helpful learning experience.