Through the wonders of Open Science, it has been pointed out to me by @JimGrange that there was a bug in some of the code of this paper. You can read about the issue here: https://github.com/eddjberry/precision-mixture-model#bug-fix-2019-02-23
Pending an erratum, this thread will detail the effect on the results. https://twitter.com/richjallen/status/1098229635136057344
The three mixture model parameters are affected:
1. Probability of recalling the target orientation (confirmatory analysis)
2. Probability of recalling a non-target orientation (exploratory analysis)
3. Probability of guessing (exploratory analysis)
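For context, these parameters come from a three-component mixture model of continuous-report recall error. A rough sketch of the model, assuming the usual von Mises formulation (Bays, Catalao & Husain, 2009) that the precision-mixture-model code is based on; the notation is mine, not the paper's:

$$
p(\hat{\theta}) \;=\; \alpha\,\phi_{\kappa}(\hat{\theta} - \theta)
\;+\; \beta\,\frac{1}{m}\sum_{i=1}^{m}\phi_{\kappa}(\hat{\theta} - \theta^{*}_{i})
\;+\; \gamma\,\frac{1}{2\pi}
$$

where \(\hat{\theta}\) is the reported orientation, \(\theta\) the target, \(\theta^{*}_{i}\) the \(m\) non-targets, \(\phi_{\kappa}\) a von Mises density with concentration \(\kappa\), and \(\alpha\), \(\beta\), \(\gamma = 1 - \alpha - \beta\) are the three probabilities listed above.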
Target probability. Old result: BF of 1.93 in favour of a small effect; dual-task cost of 0.11 (95% credible interval [0.015, 0.2]). New result: BF of 1.75 in favour of a large effect; dual-task cost of 0.10 (95% credible interval [0.03, 0.17]).
Non-target probability. Old result: non-target probabilities very close to zero in both conditions. New result: estimated difference between single (M = 0.16, SD = 0.14) and dual (M = 0.14, SD = 0.12) of 0.02 (95% credible interval [-0.03, 0.07]).
P(guessing). Old result: difference between the single (M = 0.17, SD = 0.17) and dual-task (M = 0.29, SD = 0.27) conditions was 0.11 (95% credible interval [0.2, 0.014]). New result: difference between single (M = 0.03, SD = 0.10) and dual (M = 0.17, SD = 0.23) of 0.12 (95% credible interval [0.2, 0.03]).
These results are just my initial quick calculations and may change when I prepare a proper, reproducible erratum.
I've recalculated the table in the paper showing how the BFs vary with the effect size interval. I'll also need to fix the Shiny app. pic.twitter.com/PU5VWJokIG
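For anyone unfamiliar with interval-based Bayes factors, below is a minimal sketch of one common way to compute them: a uniform prior on the effect over a chosen interval under H1, compared against a point null, with a normal likelihood for the observed dual-task cost (Dienes-style). The function name, the prior, and the illustrative numbers (cost 0.10, SE ≈ 0.035 back-figured from the credible interval above) are my assumptions, not necessarily the method used in the paper.

```python
from scipy import stats, integrate

def bf_uniform_prior(d_obs, se, lower, upper):
    """Bayes factor for H1 (effect uniform on [lower, upper]) vs H0 (effect = 0),
    treating the observed dual-task cost d_obs as normal with standard error se.
    Hypothetical helper for illustration, not the paper's actual code."""
    likelihood = lambda d: stats.norm.pdf(d_obs, loc=d, scale=se)
    # Marginal likelihood under H1: likelihood averaged over the prior interval
    m1, _ = integrate.quad(lambda d: likelihood(d) / (upper - lower), lower, upper)
    m0 = likelihood(0.0)  # likelihood under the point null
    return m1 / m0

# How the BF shifts as the assumed effect-size interval widens
for lo, hi in [(0.0, 0.1), (0.0, 0.2), (0.0, 0.5)]:
    print(f"interval [{lo}, {hi}]: BF = {bf_uniform_prior(0.10, 0.035, lo, hi):.2f}")
```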
Implications for the conclusion
The BF for p(target) is still indeterminate between the two predictions
The reduction of p(target) under dual task still results in an increase in guessing rather than p(non-target)
Overall result more equivocal?
Lessons to learn
I wrote this code back in 2016 when I knew a lot less about writing good code. Some advice to people in a similar situation:
Write proper tests for your code (see the sketch after this list)
Use more readable variable names
Learn software engineering practices
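On the testing point, here's a minimal sketch of the kind of parameter-recovery test that tends to catch this sort of bug: simulate responses with known mixture parameters and check the fit gets them back. `fit_mixture_model` is a hypothetical stand-in for the repo's actual fitting function, and the sample size and tolerance are arbitrary.

```python
import numpy as np

def simulate_responses(n, p_target, kappa, targets, rng):
    """Simulate recall responses: von Mises noise around the target with
    probability p_target, otherwise a uniform random guess on the circle."""
    noise = rng.vonmises(0.0, kappa, size=n)
    guesses = rng.uniform(-np.pi, np.pi, size=n)
    errors = np.where(rng.random(n) < p_target, noise, guesses)
    # Wrap responses back onto [-pi, pi)
    return (targets + errors + np.pi) % (2 * np.pi) - np.pi

def test_recovers_target_probability():
    rng = np.random.default_rng(1)
    targets = rng.uniform(-np.pi, np.pi, size=5000)
    responses = simulate_responses(5000, p_target=0.8, kappa=8.0,
                                   targets=targets, rng=rng)
    fit = fit_mixture_model(responses, targets)  # hypothetical fitting function
    assert abs(fit["p_target"] - 0.8) < 0.05
```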