@LukaJacobowitz I thought about the problems with type subtraction in Scala some more. The types A and B represent sets of properties provable on values of those types. They must have a least upper bound, which will be the (nonempty) intersection of those property sets. (1/11)
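That least upper bound is what Scala's inference already computes for ordinary subtyping. A small illustrative sketch (the Animal/Dog/Cat names are my own, not from the thread): the inferred element type of a mixed list is the LUB of the element types, and the only members usable through it are the ones provable on both.

```scala
// Illustrative sketch: Dog and Cat each carry extra properties, but what
// is provable on *both* is exactly Animal's interface, and that is the
// least upper bound Scala infers for a mixed list.
trait Animal { def name: String }
class Dog extends Animal { def name = "dog"; def fetch(): Unit = () }
class Cat extends Animal { def name = "cat"; def purr(): Unit = () }

object LubDemo {
  def main(args: Array[String]): Unit = {
    val pets = List(new Dog, new Cat)
    val typed: List[Animal] = pets   // compiles: LUB(Dog, Cat) = Animal
    // Only the shared (intersected) properties are available here:
    typed.foreach(p => println(p.name))
  }
}
```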
Another issue is that the type T \ S probably wouldn't do what we want anyway. A field val t: T \ S could still quite happily, by Liskov substitution, hold a value of type S if S is also a subtype of T. We couldn't prevent that without redefining Liskov. (7/11)
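Concretely (the `\` type is hypothetical, and S and T here are illustrative stand-ins): when S <: T, every S can be viewed at type T, and the static type records nothing about what the value is *not*, so a slot typed T \ S couldn't rule out an S underneath.

```scala
// S <: T, so the upcast is always legal; the static view at type T
// cannot exclude the value being an S underneath. A field typed T \ S
// would face exactly this: by Liskov substitution, any S is a valid T.
trait T { def label: String = "T" }
class S extends T { override def label: String = "S" }

object LiskovDemo {
  def main(args: Array[String]): Unit = {
    val s = new S
    val t: T = s        // fine: Liskov substitution
    println(t.label)    // prints "S": the value is still an S at runtime
  }
}
```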
In general, the compiler could prove that T <: (T \ S), and that (T \ S₂) <: (T \ S₁) ∀ S₁ <: S₂, and that may be useful. But reasoning about more complex types such as wildcards in variant positions, or S \ (T \ U) is less obvious. Not necessarily impossible, though. (8/11)
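Both laws can be sanity-checked at the value level by modelling each type as its set of provable properties, as in 1/11, so that A <: B iff A's property set contains B's. The encoding and the names below are my own illustration, not a real compiler mechanism:

```scala
// Value-level model of the two laws, treating a type as the set of
// properties provable on its values (per tweet 1/11). Under that model,
// A <: B iff props(A) is a superset of props(B). Names are illustrative.
object SubtractionLaws {
  type Props = Set[String]
  def isSubtype(a: Props, b: Props): Boolean = b.subsetOf(a)
  def minus(t: Props, s: Props): Props = t -- s

  def main(args: Array[String]): Unit = {
    val t  = Set("p", "q", "r")
    val s1 = Set("p", "q")  // S1 <: S2, since it proves more properties
    val s2 = Set("p")

    assert(isSubtype(s1, s2))                      // S1 <: S2
    assert(isSubtype(t, minus(t, s2)))             // T <: (T \ S)
    assert(isSubtype(minus(t, s2), minus(t, s1)))  // (T \ S2) <: (T \ S1)
    println("both laws hold in this model")
  }
}
```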
Note that all this complexity arises because the types aren't disjoint. And that's not a distinction the compiler makes, so we can't even limit the usage of \ to disjoint sets of types. (9/11)
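For instance (a Scala 3 sketch; the names are illustrative): String and Int share no values, yet the compiler happily forms String & Int and lets us abstract over it. There is no "these types are disjoint" judgement in the type system that a restricted \ could hang off.

```scala
object DisjointDemo {
  // String & Int is uninhabited, but the compiler forms the type without
  // complaint; it draws no "disjoint" conclusion we could build on.
  def useBoth(x: String & Int): (String, Int) = (x, x) // typechecks, uncallable

  def main(args: Array[String]): Unit = {
    // Disjointness only surfaces dynamically, via runtime checks:
    val v: Any = 3
    println(v.isInstanceOf[String]) // prints false
  }
}
```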
So, in general, I think this isn't viable without fundamentally adding complexity to the type system. If there were a way to restrict its use to disjoint types, it might be possible, maybe even sitting "on top of" the existing type system, like exhaustivity checking does. (10/11)
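One existing "on top of" mechanism in Scala 3 is implicit-search negation: scala.util.NotGiven can reject subtypes of a given type at a call site. It's only a shallow approximation of T \ S, since the Liskov caveat from 7/11 still applies once a value is widened, and everything here besides NotGiven itself is an illustrative sketch:

```scala
import scala.util.NotGiven

trait Banned

// Accept any A for which the compiler can NOT prove A <: Banned —
// a call-site-only stand-in for "A \ Banned", checked by implicit search.
def nonBanned[A](a: A)(using NotGiven[A <:< Banned]): A = a

object NotGivenDemo {
  def main(args: Array[String]): Unit = {
    println(nonBanned(42))       // compiles: Int is not a subtype of Banned
    // nonBanned(new Banned {})  // would be rejected at compile time
  }
}
```

Note the shallowness: widen a Banned to Any first and nonBanned accepts it, which is exactly the substitution problem from 7/11 resurfacing.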
But without that constraint, I think that having a value s conform to both S and T \ S would be confusing to say the least. (11/11)
And sorry this took me so long, @LukaJacobowitz. It was interesting stepping through it logically, so thanks for the impetus... I hadn't considered it in this much detail before.