In a large team, or with good ops and analysis in place, you'll probably have something quantified. But it's a hard problem: how do you measure what *didn't* happen?
More to the point (for me): on smaller teams, evidence of good tests and regression guards doesn't get recorded. It's disappearing code. I really wish there were a way to mark tests that stopped me from doing things, manually or otherwise. https://dispatches.artifexdeus.com/on-disappearing-code-7fa2494203aa
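One way the wish above could be sketched: a decorator that records which past bug each test guards against, so the guard list survives as a report instead of disappearing. This is a hypothetical illustration, not a feature of any test framework; the names `regression_guard` and `REGRESSION_GUARDS`, and the example incident, are invented for the sketch.

```python
# Sketch: make regression guards visible by tagging tests with the bug
# they guard against. Registered at import time, so the list can be
# printed alongside (or instead of) ordinary test output.

REGRESSION_GUARDS = []  # (test name, description of the bug it guards)

def regression_guard(description):
    """Mark a test as a guard against a previously observed bug."""
    def decorator(test_fn):
        REGRESSION_GUARDS.append((test_fn.__name__, description))
        return test_fn
    return decorator

def checkout(items):
    # Toy function under test: total up a cart.
    return sum(items)

@regression_guard("empty cart crashed checkout (hypothetical incident)")
def test_checkout_with_empty_cart():
    # The guard itself: an empty cart must total 0, not crash.
    assert checkout([]) == 0

if __name__ == "__main__":
    test_checkout_with_empty_cart()
    for name, bug in REGRESSION_GUARDS:
        print(f"{name}: guards against {bug!r}")
```

In a real pytest suite the same idea could ride on custom markers (`@pytest.mark.regression_guard(...)`) and a reporting hook, but the decorator version shows the shape without framework plumbing.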
New conversation
Error rates in userland are quantifiable; it's testing them against some sort of useful control that's hard.
One example: Airbnb says TypeScript gave them 42% fewer errors or something, but it's also a new version, so it's hard to separate the higher-level lessons learned from the adoption of a typed language. I don't think any of those measurements are pure science. /shrug
New conversation