has anyone ever researched/documented the interaction between encoding a large number of effects/dependent-types and API evolution? I'm concerned "usable" forms require lots of inference, which means lots of potential breaking changes to public APIs.
Or even without inference, public APIs have to encode lots of incidental facts you wouldn't normally document, and those facts dramatically constrain the implementation.
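(A concrete sketch of that concern in today's Rust, added here for illustration; `evens` and `downstream` are made-up names. Auto traits such as `Send` leak through `impl Trait` return types, so a caller can come to depend on a property the documentation never mentions, and an internal rewrite can take it away.)

```rust
// Hypothetical library function: the signature only promises "some iterator",
// but today's body happens to produce a type that is also `Send`.
pub fn evens(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).filter(|&n| n % 2 == 0)
}

// A downstream crate can rely on that leaked, undocumented fact:
fn downstream() {
    let it = evens(100);
    // Compiles only because the opaque return type is inferred to be `Send`.
    std::thread::spawn(move || {
        let total: u32 = it.sum();
        println!("{total}");
    })
    .join()
    .unwrap();
}

// If a later version rewrites `evens` to hold an `Rc` internally, the opaque
// type silently stops being `Send` and `downstream` stops compiling, even
// though the documented signature never changed.
```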
Example in Rust: there are arguments for encoding allocation, nounwind, simd (and which parts), float usage, atomic usage, I/O, purity, constexpr, thread affinity, and probably others. That's a lot of things to guarantee/care about!
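(For illustration, a sketch of how many of those incidental guarantees one small, ordinary function picks up; `dot` is a made-up example, and the listed properties describe this particular body, not any documented contract.)

```rust
/// Documented contract: "returns the dot product of two equal-length slices".
///
/// Incidental properties of this particular implementation that a caller
/// could nonetheless start relying on:
/// - never allocates
/// - panics only when the lengths differ
/// - does no I/O, touches no atomics, reads no global state
/// - uses floating point (relevant to callers that forbid or track float use)
/// - scalar today; a later SIMD rewrite could change which target features
///   it effectively requires
pub fn dot(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}
```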
Replying to @Gankra_
I vaguely lean towards the C answer here: the public API is what's documented. I `#[doc(hidden)]` everything I don't want relied on (and in rare cases just say in the docs "you shouldn't rely on X"). Of course you still have to consider "does this actually break code?" for every change.
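(The attribute that does this in Rust is `#[doc(hidden)]`; a minimal sketch of the pattern, with made-up names:)

```rust
/// Part of the supported, documented API.
pub fn stable_entry_point(input: &str) -> usize {
    __internal::helper(input)
}

// Technically reachable (e.g. needed by macros in this crate), but hidden
// from rustdoc and explicitly not part of the supported contract.
#[doc(hidden)]
pub mod __internal {
    pub fn helper(input: &str) -> usize {
        input.len()
    }
}
```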
Replying to @sgrif
Right, but I'm specifically talking about a world where everyone wants you to interoperate with dozens of (potentially ad hoc) effects. Every effect you incidentally satisfy but refuse to document makes your library less useful.
Replying to @Gankra_
Yup, this is the world I live in. I think it's easy to overestimate how bad it is, as long as you actively consider "what realistically could break" with every change and take the spirit of RFC 1105 to heart.
Alternate answer: Claim every change is fixing a soundness bug.