*shuts laptop forever* pic.twitter.com/3GS1JfNDSP
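The example itself is only in the attached image; as a rough reconstruction of the kind of comparison the thread is discussing (an assumption on my part, not the exact original), on a typical LP64 platform:

    #include <stdio.h>

    /* Hypothetical reconstruction; the actual example is in the image above. */
    int main(void) {
        int          a = -1;
        unsigned int b = -1;    /* -1 converted to unsigned: b == UINT_MAX */
        long         c = -1;

        printf("%d\n", a == b); /* 1: a is converted to unsigned int, both compare as UINT_MAX */
        printf("%d\n", a == c); /* 1: a is converted to long, both compare as -1 */
        printf("%d\n", b == c); /* 0: on LP64, b becomes 4294967295L while c stays -1L */
        return 0;
    }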
Any good compiler should reject assigning a negative integer to an unsigned integer without an explicit cast.
The example is just as interesting without the conversions at assignment.
The C example is not very strange or bizarre - it is a result of sign extension. https://en.wikipedia.org/wiki/Sign_extension
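A minimal illustration of the sign-extension point, assuming 32-bit int and 64-bit long:

    #include <stdio.h>

    int main(void) {
        int x = -1;               /* bit pattern 0xFFFFFFFF (two's complement) */
        long y = x;               /* sign-extended to 64 bits: still -1 */
        unsigned int u = x;       /* same bits, but the value is now UINT_MAX */
        printf("%ld %u\n", y, u); /* -1 4294967295 */
        return 0;
    }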
I think new languages should guarantee that == is always false if the values are not actually equal (no lossy implicit conversions).
Signed/unsigned comparison errors would rule this out (i.e. enable -Werror=sign-compare and every warning flag you can get your hands on).
Wouldn't rule out the floating point version though.
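A quick sketch of both points, assuming gcc or clang: -Werror=sign-compare makes the signed/unsigned comparison a hard error, but says nothing about the lossy float comparison.

    /* compile with: cc -Werror=sign-compare example.c */
    int main(void) {
        int a = -1;
        unsigned int b = -1;
        long x = 0x10000001;
        float f = 0x10000000;

        int r1 = (a == b);  /* rejected: comparison between signed and unsigned */
        int r2 = (x == f);  /* not flagged by -Wsign-compare: the precision loss is silent */
        return r1 + r2;
    }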
And would be painful to use. The right solution for new languages moving forward is defining == in terms of actual value equality, even when the types make it require more code to implement the comparison.
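A sketch of what the extra code looks like for one concrete case, comparing a signed and an unsigned integer by value (the helper name is mine):

    /* True value equality between long and unsigned long:
       a negative signed value can never equal any unsigned value. */
    static int value_eq(long s, unsigned long u) {
        return s >= 0 && (unsigned long)s == u;
    }
    /* value_eq(-1, ULONG_MAX) is 0, whereas -1 == ULONG_MAX is 1 in C today. */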
The cost model suffers for such a low-level language; explicit expensiveness may be better.
For extra fun, do it with destructive "promotions" to float.
long a = 0x10000001;
long b = 0x10000002;
float c = 0x10000000;
a == c; // true
b == c; // true
a == b; // false
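A self-contained version of that snippet, assuming a 32-bit IEEE-754 float with a 24-bit significand, so both 2^28 + 1 and 2^28 + 2 round to 2^28 when converted:

    #include <stdio.h>

    int main(void) {
        long a = 0x10000001;  /* 2^28 + 1 */
        long b = 0x10000002;  /* 2^28 + 2 */
        float c = 0x10000000; /* 2^28, exactly representable */

        /* a and b are converted to float for the comparisons and both round to 2^28. */
        printf("%d %d %d\n", a == c, b == c, a == b); /* 1 1 0 */
        printf("%.1f %.1f\n", (float)a, (float)b);    /* 268435456.0 268435456.0 */
        return 0;
    }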
Well, integer overflow is undefined behaviour in C, so you're not even guaranteed by the standard to get those results...
There is no overflow in the above.
A negative value is outside of uint range, so it is an overflow!
In C, only arithmetic (and only signed arithmetic), not assignment, is capable of overflow; converting a value to an unsigned type is well-defined, and unsigned arithmetic is modular.
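A sketch of that distinction, assuming a 32-bit int and the usual limits.h constants:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        unsigned int u = -1;            /* conversion, well-defined: u == UINT_MAX */
        unsigned int w = UINT_MAX + 1u; /* unsigned arithmetic, well-defined: wraps to 0 */
        int s = INT_MAX;

        printf("%u %u %d\n", u, w, s);  /* 4294967295 0 2147483647 */
        /* s + 1, by contrast, would be signed arithmetic overflow: undefined behaviour. */
        return 0;
    }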
You at least get compiler warnings: initializing an unsigned with a signed number.
You can edit the example not to have that; I just wrote it that way for brevity in a tweet.
The true, true, false game is cool and all, but can you play it by repeating the same expression? :) https://twitter.com/aidantwoods/status/954430349907460097?s=21
Wait, isn't assigning a negative literal to an unsigned integer undefined?
As far as I know, converting a signed to an unsigned int follows the (UINT_MAX - abs(negative_num)) operation.
That is *almost* correct; the standard defines it as repeatedly adding (UINT_MAX+1) until it fits. The only real difference in this case is the +1. pic.twitter.com/XXrK4KenXn
Ah, yes, I did mean to write the +1, but there was some interference between my thoughts, my fingers, and the virtual keyboard. :-)
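Concretely, assuming a 32-bit unsigned int, so UINT_MAX is 4294967295:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* Standard rule: repeatedly add UINT_MAX + 1 (here 2^32) until in range:
           -1 + 4294967296 = 4294967295, i.e. UINT_MAX itself. */
        unsigned int by_standard = -1;

        /* The "UINT_MAX - abs(-1)" formula lands one short of that. */
        unsigned int almost = UINT_MAX - 1u;

        printf("%u %u\n", by_standard, almost); /* 4294967295 4294967294 */
        return 0;
    }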