Trying to remember where I read about a framework for experimentation where you set a threshold for an acceptable number of negative experiment results, to ensure teams aren't too risk-averse and avoiding bold bets. Like an OKR targeting 30% negative A/B tests. Anyone know what I'm thinking of?
-
The tldr was to treat experimentation as a learning exercise & make teams feel supported when they have to deliver bad news. I think it might’ve been written by someone at Slack circa 2019?
@imightbemary @shaayohn @GisselleXie @robinson_es @seanjtaylor @bjoseph @far33d @j_houg
-
Replying to @catherinezh @imightbemary and others
Sounds like something @ronnyk would say - maybe in a deck like this https://exp-platform.com/Documents/2017-05-17EmetricsControlledExperimentsPitfallsKohaviNR.pdf ? His book, Trustworthy Online Controlled Experiments, has some nice sections discussing a "crawl, walk, run" model of using data as a business that are relevant. It's worth reading.
-
Replying to @bjoseph @catherinezh and others
I read a paper last summer about using experimentation as an OKR validation technique that I think I found because of Trustworthy Experimentation. Can't for the life of me remember what it was called or who it was by (seemingly the theme of this thread), but it sounds similar.
-
After an increasingly pathetic series of searches through my Dropbox, I went back to Trustworthy Experiments and found the name of the paper, "Measuring Metrics." Available here! https://www.exp-platform.com/Documents/2016CIKM_MeasuringMetrics.pdf