Just about the only thing that arguably could have been done better in UTF-8 would have been offsetting the base value for multibyte characters.
That's inherent, not a property of UTF-8. The same applies to any character encoding you pick.
-
-
You can make the situation even worse by not even having a known common subset (ASCII), but you can't make it any better without metadata.
-
My point is that not knowing the encoding of a text string is common. Some developers don't even know, or care, that different encodings exist.
-
If a string is valid UTF-8, either it's ASCII (in which case it's valid as nearly any encoding, but doesn't tell you how to add new data)...
-
...or it's extremely unlikely (heuristically, yes) that it was intended as anything other than UTF-8.
-
This is because, read as a legacy encoding such as Latin-1, non-ASCII UTF-8 is full of bytes that are either C1 control characters (garbage) or nonsensical pairings of printable characters.
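A minimal Python sketch of what that looks like in practice; the sample string is my own choice, not from the thread:

```python
# Why UTF-8 misread as Latin-1 stands out: every continuation byte is in
# 0x80-0xBF, and the 0x80-0x9F slice of that range is the C1 control
# block -- characters that essentially never occur in genuine Latin-1 text.
text = "café — 10€"                  # sample string (my choice)
raw = text.encode("utf-8")           # the bytes actually stored or sent

print(repr(raw.decode("latin-1")))   # 'cafÃ© â\x80\x94 10â\x82¬' -- mojibake

c1 = [hex(b) for b in raw if 0x80 <= b <= 0x9F]
print(len(c1), "C1 control byte(s):", c1)   # 3 C1 control byte(s)
```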
-
UTF-8 really gives you the _maximum possible_ help when you have to guess, without breaking non-negotiable requirements like ASCII compatibility.
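A sketch of that guessing strategy in Python; the fallback to Latin-1 is my assumption for illustration, and any legacy 8-bit encoding would slot in the same way:

```python
# Guess-the-encoding heuristic as described above (my wording of it, not
# canonical code): accept the bytes as UTF-8 only if they validate;
# otherwise assume a legacy encoding. Latin-1 is used as the fallback
# purely as an example -- it maps every byte value, so it never fails.
def guess_decode(raw: bytes) -> tuple[str, str]:
    """Return (text, guessed_encoding) for bytes of unknown encoding."""
    try:
        # Strict UTF-8 decoding rejects exactly the byte sequences that
        # make accidental valid UTF-8 so unlikely in legacy-encoded text.
        return raw.decode("utf-8"), "utf-8"
    except UnicodeDecodeError:
        return raw.decode("latin-1"), "latin-1"

print(guess_decode("naïve".encode("utf-8")))    # ('naïve', 'utf-8')
print(guess_decode("naïve".encode("latin-1")))  # ('naïve', 'latin-1')
```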
-
Have you never seen a string interpreted as Latin-1 when it was UTF-8, or the other way around?
-
The four tweets above literally just explained how UTF-8's properties let you easily avoid that, if heuristics are acceptable.