Lazyweb: are there testing regimes / frameworks that require / encourage each test case to be accompanied by a (natural-language) rationale? E.g. not just the input and expected output, but WHY that output is the expected one? cc
Behavior Driven Development (BDD) is what comes to mind for me, at least! RSpec is one example of a DSL for doing this: rspec.info
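For instance, a minimal sketch of what that can look like in RSpec (the `Cart` class and its behaviour here are hypothetical, just to anchor the example):

```ruby
require "rspec/autorun"

# Hypothetical class under test.
class Cart
  def initialize(items)
    @items = items
  end

  def total
    @items.sum
  end
end

RSpec.describe Cart do
  describe "#total" do
    # The string passed to `it` is free-form natural language, so it can
    # state *why* the output is expected, not just what the output is.
    it "sums the line-item prices, since a cart's total is defined as the sum of its items" do
      expect(Cart.new([2, 3]).total).to eq(5)
    end
  end
end
```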
"it" seems to provide room to generalise beyond the specific test case, although I wonder how often it is used that way. (c.f. semaphoreci.com/community/tuto) cc where many of the examples simply restate the expected output)
Not sure if this is what you mean, but there are ways of having 'shared examples' which I definitely used in practice (back when I did Ruby for my day job): relishapp.com/rspec/rspec-co
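A minimal sketch of shared examples, using a made-up "empty collection" behaviour; `shared_examples` and `it_behaves_like` are the actual RSpec constructs:

```ruby
require "rspec/autorun"

# A shared example group states a behaviour (and its rationale) once,
# then gets applied to each concrete case.
RSpec.shared_examples "an empty collection" do
  it "has size zero, since emptiness means having no elements" do
    expect(subject.size).to eq(0)
  end
end

RSpec.describe Array do
  subject { [] }
  it_behaves_like "an empty collection"
end

RSpec.describe Hash do
  subject { {} }
  it_behaves_like "an empty collection"
end
```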
RSpec is kind of a fascinating use of dynamic scoping + lazy evaluation, implemented by piggybacking on top of inheritance. Each `let` definition is actually translated into a memoized method on the example-group class, and each new scope is a subclass of its parent scope.
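A small illustration of those semantics (the names here are invented; the override behaviour is standard RSpec):

```ruby
require "rspec/autorun"

RSpec.describe "let and nested scopes" do
  # Each example group is compiled into a class; `let` defines a memoized
  # method on it, and each nested group subclasses its parent group.
  let(:name)     { "world" }
  let(:greeting) { "hello, #{name}" }

  it "evaluates lazily against the current scope" do
    expect(greeting).to eq("hello, world")
  end

  context "in a nested group" do
    # Overriding `name` works like overriding a method in a subclass: the
    # parent's `greeting` picks up the new definition dynamically.
    let(:name) { "RSpec" }

    it "sees the overridden definition" do
      expect(greeting).to eq("hello, RSpec")
    end
  end
end
```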
Was kind of terrifying when I first encountered it and then learned how it worked, but is actually super handy in practice. You do need to be careful not to let it turn into a mess though. 😅
I think in practice stuff like passing parameters in shared examples always felt tantalizingly close to morphing into something like property-based testing, if that is where you are heading with this: relishapp.com/rspec/rspec-co
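A sketch of how that drift might look; the commutativity "property" and the random sampling are invented for illustration, and you get none of the shrinking a real PBT library would provide:

```ruby
require "rspec/autorun"

# Parameterised shared examples: values are passed into the group, and the
# description can explain why the property should hold.
RSpec.shared_examples "a commutative addition" do |a, b|
  it "gives the same sum for #{a}+#{b} and #{b}+#{a}, since order should not matter" do
    expect(a + b).to eq(b + a)
  end
end

RSpec.describe Integer do
  # Sampling random pairs pushes this toward property-based testing.
  5.times do
    a, b = rand(100), rand(100)
    it_behaves_like "a commutative addition", a, b
  end
end
```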
FactoryBot is almost like the arbitrary generators in property-based testing, although with more of a focus on understandable examples (I think?) than on probing edge cases: github.com/thoughtbot/fac
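A rough sketch of that analogy; the `User` struct and the attribute values are hypothetical, while `FactoryBot.define` and `build` are the library's actual entry points:

```ruby
require "factory_bot"

# Hypothetical model; FactoryBot instantiates it and assigns attributes.
User = Struct.new(:name, :age)

FactoryBot.define do
  factory :user do
    # Defaults read like one canonical, understandable example...
    name { "Alice" }
    age  { 30 }
  end
end

include FactoryBot::Syntax::Methods

# ...which individual tests tweak as needed, rather than the factory
# sampling random or adversarial values the way a PBT generator would:
adult = build(:user)
minor = build(:user, age: 15)
puts minor.age # => 15
```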

