It worked great! The only major issue was the formal semantics issues around extended types and intmax, as you sketched out.
Yes, exactly. __int128 has to be treated as an implementation-defined type with implementation-defined semantics, not as an extended integer type within the framework C sets forth, because the latter would require redefining intmax_t.
-
Specifically, if int128_t (and INT128_MAX, which would be testable at the preprocessor level rather than just at "configure level") were defined, INTMAX_MAX >= INT128_MAX would be mandatory, and intmax_t would have to have a conversion rank >= the rank of int128_t.
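To make that constraint concrete, here's a hypothetical sketch: INT128_MAX is an invented macro that no mainstream <stdint.h> defines, precisely because defining it would drag intmax_t along, as the preprocessor check below would enforce.

```c
#include <stdint.h>

/* Hypothetical: INT128_MAX does not exist in real <stdint.h> headers.
 * If an implementation defined it, the standard's rules for intN_t
 * would force this relation, pushing intmax_t up to 128 bits. */
#ifdef INT128_MAX
  #if INTMAX_MAX < INT128_MAX
    #error "non-conforming: INTMAX_MAX must be >= INT128_MAX"
  #endif
#endif

/* What implementations do guarantee today: intmax_t covers every
 * standard intN_t type, so this check always passes. */
int intmax_covers_int64(void) {
    return INTMAX_MAX >= INT64_MAX;
}
```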
-
What about allowing integers larger than intmax_t by not defining a corresponding INT_<size>_MAX and keeping the preprocessor's limit at intmax_t? INT_MAX is useful because the size of an int isn't fixed. But you already know the max of a uint128_t.
-
Arguably, since the standard requires INTnnn_MAX to be defined whenever intnnn_t is, I think it's valid for the application to claim the identifier intnnn_t for its own use if INTnnn_MAX is not defined... ;-)
-
File this one away for adversarial C implementations.
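Since __int128 is implementation-defined rather than a <stdint.h> citizen, real code probes for it via a compiler macro: GCC and Clang define __SIZEOF_INT128__ when the type is available. A sketch of that pattern, with the usual 32-bit-halves decomposition as the portable fallback:

```c
/* __int128 is an implementation-defined extension, so we probe for it
 * with a compiler macro (GCC/Clang define __SIZEOF_INT128__ when the
 * type exists), not via <stdint.h>, which knows nothing about it. */
#ifdef __SIZEOF_INT128__
typedef unsigned __int128 u128;

/* High 64 bits of a 64x64 -> 128-bit multiply. */
unsigned long long mulhi64(unsigned long long a, unsigned long long b) {
    return (unsigned long long)(((u128)a * b) >> 64);
}
#else
/* Portable fallback: split each operand into 32-bit halves, multiply
 * piecewise, and propagate the carries by hand. */
unsigned long long mulhi64(unsigned long long a, unsigned long long b) {
    unsigned long long a0 = a & 0xFFFFFFFFu, a1 = a >> 32;
    unsigned long long b0 = b & 0xFFFFFFFFu, b1 = b >> 32;
    unsigned long long t  = a1 * b0 + ((a0 * b0) >> 32);
    unsigned long long w1 = a0 * b1 + (t & 0xFFFFFFFFu);
    return a1 * b1 + (t >> 32) + (w1 >> 32);
}
#endif
```

Both branches compute the same result; the #ifdef only picks whether the hardware-backed 128-bit type does the carry handling for you.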
-
New conversation -
We could add "Even More Extended Integer Type" to the language. pic.twitter.com/khsEXJiIlL
-
BigInt or dust.
-
An open question here is whether it would be a truly variable-sized BigInt, a fixed-size but absurdly large BigInt (e.g. 2048-bit), or "fixed size up to N, overflows to heap". Pros/cons either way...
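As an illustration of the third option, a minimal and entirely hypothetical limb layout with a small-size optimization might look like this (the names, the 4-limb inline threshold, and the API are all invented for the sketch):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of "fixed size up to N, overflows to heap":
 * values up to 4 limbs (256 bits) live inline in the struct; wider
 * values spill to a heap allocation. */
enum { BIGINT_INLINE_LIMBS = 4 };

typedef struct {
    size_t    nlimbs;    /* limbs in use */
    size_t    cap;       /* heap capacity in limbs, 0 while inline */
    uint64_t *heap;      /* NULL while the value fits inline */
    uint64_t  inline_limbs[BIGINT_INLINE_LIMBS];
} bigint;

/* Where the digits currently live. */
static const uint64_t *bigint_limbs(const bigint *x) {
    return x->heap ? x->heap : x->inline_limbs;
}

static void bigint_init_u64(bigint *x, uint64_t v) {
    x->nlimbs = 1;
    x->cap = 0;
    x->heap = NULL;
    x->inline_limbs[0] = v;
}

/* Ensure room for n limbs, spilling inline digits to the heap on first growth. */
static int bigint_reserve(bigint *x, size_t n) {
    if (n <= BIGINT_INLINE_LIMBS && !x->heap) return 0;  /* still fits inline */
    if (x->heap && n <= x->cap) return 0;                /* already big enough */
    uint64_t *p = realloc(x->heap, n * sizeof *p);
    if (!p) return -1;
    if (!x->heap)  /* first spill: carry the inline digits over */
        memcpy(p, x->inline_limbs, x->nlimbs * sizeof *p);
    x->heap = p;
    x->cap = n;
    return 0;
}

static void bigint_free(bigint *x) {
    free(x->heap);
    x->heap = NULL;
    x->cap = 0;
}
```

The trade-off this layout buys: small values (the common case) need no allocation, while the heap path keeps the type unbounded, at the cost of a branch on every limb access.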
End of conversation