Maybe it would've been better if it were incompatible with ASCII, avoiding confusion about which encoding a given string has.
You can make the situation even worse by not even having a known common subset (ASCII), but you can't make it any better without metadata.
-
-
My point is that not knowing the encoding of a text string is common. Some people don't even know (or care) that there are different encodings.
-
If a string is valid UTF-8, either it's ASCII (in which case it's valid as nearly any encoding, but doesn't tell you how to add new data)...
-
...or it's extremely unlikely (heuristically, yes) that it was intended as anything other than UTF-8.
-
This is because non-ASCII UTF-8, read as Latin-1, is full of bytes that are either C1 control characters (garbage) or nonsensical pairings of printable characters.
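For illustration, a minimal Python sketch of that effect (the sample characters are arbitrary):

    # UTF-8 encodes "é" as the two bytes 0xC3 0xA9; misread as Latin-1,
    # they become the printable but nonsensical pair "Ã©".
    print("é".encode("utf-8").decode("latin-1"))        # Ã©
    # UTF-8 encodes "Á" as 0xC3 0x81; 0x81 is a C1 control character
    # in Latin-1, i.e. unprintable garbage.
    print(repr("Á".encode("utf-8").decode("latin-1")))  # 'Ã\x81'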
-
UTF-8 really has the _maximal possible_ properties to help you out when you have to guess, without breaking non-negotiable requirements.
-
Have you never seen a string interpreted as Latin-1 when it was UTF-8, or the other way around?
-
The above 4 tweets literally just explained how the properties of UTF-8 make it so you can easily avoid that if heuristics are acceptable.
-
If you have to guess at encoding, any string that parses as UTF-8 is UTF-8. Otherwise you need more elaborate heuristics or a fixed fallback.
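A minimal sketch of that rule in Python (the Latin-1 fallback here is just one possible fixed fallback, chosen for illustration):

    def guess_decode(data: bytes) -> str:
        # If the bytes parse as UTF-8, treat them as UTF-8.
        try:
            return data.decode("utf-8")
        except UnicodeDecodeError:
            # Otherwise fall back to a fixed single-byte encoding;
            # Latin-1 never raises, since every byte maps to a code point.
            return data.decode("latin-1")

    print(guess_decode("naïve".encode("utf-8")))    # valid UTF-8 -> 'naïve'
    print(guess_decode("naïve".encode("latin-1")))  # 0xEF then 'v' is invalid UTF-8 -> falls back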
New conversation
-
You could have an encoding that can't be mistaken for Latin-1 but is still readable. "Hello World" encoded as "hELLO_wORLD", for example.
-
Um, "hELLO_wORLD" is a valid ASCII (and thus Latin-1) string. You've crossed over into the realm of heuristic guesses....
-
And completely abandoned "requirement 0" of UTF-8. I don't think this thread is productive at this point...
End of conversation