Programming languages without garbage collection send us down a long path of design decisions that lead to slow compile times and fragile runtime performance cliffs.
-
First is the abandonment of covariance and contravariance, the properties that guarantee sensible subtyping: that bytes are also integers, and integers are also objects, extending systematically to container types and functions.
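A minimal sketch of the covariance being described, in Java, where arrays are covariant: an `Integer[]` is accepted wherever an `Object[]` is expected. (Java has no `byte[]` → `int[]` subtyping, so `Integer`/`Object` stands in for the byte/integer example; the class and method names are illustrative only.)

```java
// Java arrays are covariant: Integer[] is a subtype of Object[],
// so it can be passed wherever an Object[] is required.
public class Covariance {
    static Object first(Object[] xs) { return xs[0]; }

    public static void main(String[] args) {
        Integer[] ints = {1, 2, 3};
        Object o = first(ints);   // Integer[] used where Object[] is required
        System.out.println(o);    // prints 1
    }
}
```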
-
Without garbage collection, offering an array of bytes where an array of integers is required means stack-allocating a fresh array of integers and converting each byte to an integer, every time a subtype is used in place of the expected type. This is so absurd that it’s not done.
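To make the cost concrete, here is a hedged sketch of that copy-and-convert step in Java, where primitive arrays are invariant: passing a `byte[]` where an `int[]` is needed means allocating a new array and widening every element (names here are illustrative).

```java
// The allocate-and-widen cost: byte[] is not an int[], so every call
// site pays a fresh allocation plus a per-element conversion.
public class WidenCopy {
    static int sum(int[] xs) { int s = 0; for (int x : xs) s += x; return s; }

    public static void main(String[] args) {
        byte[] bytes = {1, 2, 3};
        int[] widened = new int[bytes.length];              // fresh allocation
        for (int i = 0; i < bytes.length; i++)
            widened[i] = bytes[i];                          // widen each byte
        System.out.println(sum(widened));                   // prints 6
    }
}
```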
-
So now when we want to recover performance, we need to write all containers and their operations using an increasingly elaborate set of templates or generic functions, which the compiler must specialize for each type at significant cost. This is what C++ and Rust do.
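A sketch of what that C++/Rust-style specialization amounts to, written out by hand in Java for illustration: one compiled container body per element type, with unboxed storage and direct calls (the `IntVec`/`ByteVec` names are hypothetical, standing in for what monomorphization would generate).

```java
// Hand-written picture of monomorphization: the "same" container,
// duplicated once per element type, each with direct, unboxed access.
public class Monomorphized {
    // The "Vec<int>" specialization.
    static final class IntVec {
        int[] data; int len;
        IntVec(int cap) { data = new int[cap]; }
        void push(int x) { data[len++] = x; }
        int get(int i)   { return data[i]; }
    }
    // The "Vec<byte>" specialization: a second compiled copy of the same code.
    static final class ByteVec {
        byte[] data; int len;
        ByteVec(int cap) { data = new byte[cap]; }
        void push(byte x) { data[len++] = x; }
        byte get(int i)   { return data[i]; }
    }

    public static void main(String[] args) {
        IntVec v = new IntVec(4);
        v.push(5); v.push(6);
        System.out.println(v.get(0) + v.get(1)); // prints 11
    }
}
```

Each additional element type adds another full copy of the container's code, which is where the compile-time cost comes from.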
-
Or we can create a very clunky wrapper like array_of_anything that is used wherever generic types are required, which manually casts and converts values among types dynamically each time it’s accessed. Java generics did this and they were awful.
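A sketch of that "array_of_anything" style in Java, using a raw (pre-generics) collection: one untyped container of `Object`, with boxing on the way in and a cast plus unboxing on every access.

```java
import java.util.ArrayList;

// The untyped-container approach: everything goes in as Object and
// must be manually cast (and unboxed) back out on each access.
public class AnythingBox {
    public static void main(String[] args) {
        ArrayList list = new ArrayList();     // raw type: holds Object
        list.add(Integer.valueOf(42));        // box on the way in
        int x = (Integer) list.get(0);        // cast + unbox on the way out
        System.out.println(x);                // prints 42
    }
}
```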
-
But if we have garbage collection, we can store our large data structures once with whatever type is required, then dynamically create wrappers that reinterpret it as any subtype that’s required. We pay the cost of GC and indirect control flow for accessors but that’s all.
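A minimal sketch of such a GC-enabled wrapper in Java: the data is stored once as a `byte[]`, and a view object reinterprets it as a sequence of ints, converting on access rather than copying. The wrapper safely holds a reference to the shared storage because the collector keeps it alive; `IntSeq`/`asInts` are hypothetical names.

```java
// A zero-copy subtype view: the bytes are stored once, and the wrapper
// widens them to ints on each access (indirect call, no duplication).
public class ViewWrapper {
    interface IntSeq { int get(int i); int size(); }

    static IntSeq asInts(byte[] bytes) {       // wraps, never copies
        return new IntSeq() {
            public int get(int i) { return bytes[i]; }   // widen per access
            public int size()     { return bytes.length; }
        };
    }

    public static void main(String[] args) {
        byte[] data = {10, 20, 30};
        IntSeq view = asInts(data);            // shares the original storage
        System.out.println(view.get(1));       // prints 20
    }
}
```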
-
Replying to @TimSweeneyEpic
How is this different from Java etc (since we have no existing language to point to as an example)? And unless you have some impressive optimization in mind, this will end up ~10x slower than the highly specialized code C++/Rust emit, which is a high cost for "proper" subtyping.
-
Replying to @wvo @TimSweeneyEpic
And agree with @Jonathan_Blow that specialization-based typing is possible without excessive compilation cost or crazy errors (at least I am attempting so in http://strlen.com/lobster/ ). It can be super expressive without needing type classes etc.
-
Replying to @wvo @Jonathan_Blow
If specialization is available orthogonally to subtyping, then subtyping/typeclasses work by default without code explosion (with some dynamic dispatch overhead for reinterpretation), and can be made fast where you choose.
-
Haskell typeclasses work by dictionary-passing and are an example of subtyping-like behavior that works without requiring specialization to monomorphism.
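A sketch of dictionary-passing transliterated into Java: the "typeclass" becomes an explicit record of operations handed to a single shared generic function, roughly what GHC does implicitly with its dictionaries (the `Show`/`showTwice` names are illustrative).

```java
// Dictionary-passing: one compiled body of showTwice serves every type;
// the per-type behavior travels in the explicit dict argument.
public class DictPassing {
    interface Show<T> { String show(T x); }    // a one-method "typeclass"

    static <T> String showTwice(Show<T> dict, T x) {
        return dict.show(x) + dict.show(x);    // dispatch through the dict
    }

    public static void main(String[] args) {
        Show<Integer> showInt = n -> "#" + n;  // the "Show Int" dictionary
        System.out.println(showTwice(showInt, 7)); // prints #7#7
    }
}
```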
-
More generally: a feature (like generics) shouldn’t require an optimization that carries drastic compile-time costs. It should work efficiently, and then, optionally, monomorphization should be available as an optimization.
-
Replying to @TimSweeneyEpic @Jonathan_Blow
A hybrid approach would be fun to try out, but I fear it would be "worst of both worlds": still requiring a lot of specialization to make even isolated cases fast, yet not reaping the "zero (runtime) cost abstraction" benefits that end-to-end specialization guarantees.
-
C# uses specialised generics for value types (structs, no method table) and shared generics for class types (all ptr sized); the compiler is fast. Value types get direct calls, reference types get indirect dispatch. You can specialise a reference type to direct calls by wrapping it in a struct.
-