I've often wondered what serious compiler support for optimizing bignum operations would look like...
Bignum-by-default has no operations that are fail-safe: without static bounds on value magnitudes, every operation can require allocation, and allocation can fail.
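To make that concrete, here is a toy Swift sketch (ToyBignum and its add are invented for illustration, not any real library): the sum of two values can need one more word than either input, so without a bound on magnitudes the general add path has to be able to grow its storage, i.e. allocate.

```swift
// Toy illustration only: not a real bignum type, just enough to show
// why the general add path may have to grow (i.e. allocate).
struct ToyBignum {
    var words: [UInt64]   // little-endian magnitude; Array storage lives on the heap

    static func add(_ a: ToyBignum, _ b: ToyBignum) -> ToyBignum {
        var result = [UInt64]()
        result.reserveCapacity(max(a.words.count, b.words.count) + 1)
        var carry: UInt64 = 0
        for i in 0..<max(a.words.count, b.words.count) {
            let x = i < a.words.count ? a.words[i] : 0
            let y = i < b.words.count ? b.words[i] : 0
            let (s1, o1) = x.addingReportingOverflow(y)
            let (s2, o2) = s1.addingReportingOverflow(carry)
            result.append(s2)
            carry = (o1 || o2) ? 1 : 0
        }
        if carry != 0 {
            // With no static bound on magnitudes, this growth can happen on
            // any call, so every add is potentially an allocation.
            result.append(carry)
        }
        return ToyBignum(words: result)
    }
}

// Two single-word values whose sum needs a second word:
let x = ToyBignum(words: [UInt64.max])
let y = ToyBignum(words: [1])
print(ToyBignum.add(x, y).words)   // [0, 1]: the result grew
```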
-
-
even so, it's what I want when I'm not doing systems programming
-
You want a scripting language then. That's the kind of tradeoff they make all the time.
-
no, scripting languages are slow and are for prototyping and glue. I want this in my general-purpose language.
-
You want to cherry-pick _this_ decision to be "easy to program, handles general cases, but slower", but not all the other similar ones.
-
no, this isn't an exhaustive list, it's just one thing I want in a fast AOT-compiled language like Swift or Java
-
Tempted to fling you at the 3-part https://landley.net/notes-2011.html#20-03-2011 … (and earlier https://landley.net/notes-2010.html#06-04-2010 …) but it's too much reading.
End of conversation
New conversation -
-
-
If your source values come from fixed-width integers or literals, you always have (pessimistic) bounds, but those are usually good enough.
-
i.e. you can implement a no-allocation mode that uses pessimistic bounds and errors at compile time if it can’t prove something satisfactory. This would be usable even in most systems domains, and much safer than what we currently do.
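Here is a minimal Swift sketch of that idea, with everything in it (Expr, widthNeeded, fitsInMachineWord) invented for illustration rather than taken from any real compiler: propagate a pessimistic bit-width from the fixed-width sources through each expression, and have the no-allocation mode reject anything it can't prove fits in a machine word.

```swift
// Hypothetical sketch of the analysis, not any existing compiler API.
indirect enum Expr {
    case int8Source            // value known only to fit in 8 signed bits
    case int32Source           // value known only to fit in 32 signed bits
    case add(Expr, Expr)
    case mul(Expr, Expr)
}

/// Pessimistic signed bit-width of an expression's result.
func widthNeeded(_ e: Expr) -> Int {
    switch e {
    case .int8Source:  return 8
    case .int32Source: return 32
    case .add(let a, let b):
        // An m-bit value plus an n-bit value fits in max(m, n) + 1 bits.
        return max(widthNeeded(a), widthNeeded(b)) + 1
    case .mul(let a, let b):
        // An m-bit value times an n-bit value fits in m + n bits.
        return widthNeeded(a) + widthNeeded(b)
    }
}

/// "No-allocation mode": accept only what provably fits in one machine word.
func fitsInMachineWord(_ e: Expr) -> Bool {
    widthNeeded(e) <= 64
}

// int32 * int32 needs at most 64 bits: provably no allocation required.
print(fitsInMachineWord(.mul(.int32Source, .int32Source)))                      // true
// (int32 * int32) + int8 might need 65 bits: here the no-allocation mode
// would report a compile-time error and ask for an explicit widening or bignum.
print(fitsInMachineWord(.add(.mul(.int32Source, .int32Source), .int8Source)))   // false
```

A real compiler pass would run this on typed IR rather than a toy enum, and surface the rejection as a diagnostic suggesting a wider fixed type, an explicit check, or a bignum.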
End of conversation
New conversation -
-
-
The situation seems similar to dynamic/fixed-size arrays—you have to either deal with overflow, deal with allocation failure, or figure out static bounds. If your language provides dynamic arrays, why not provide dynamic integers as well?
-
…and whatever optimization infrastructure you come up with for bignums could probably work on arrays too!
End of conversation