As ASCII is a subset of both Latin-1 and UTF-8, it causes a lot of confusion, I'd say.
...or it's extremely unlikely (heuristically, yes) that it was intended as anything other than UTF-8.
-
-
This is because non-ASCII UTF-8, when read as Latin-1, is full of bytes that are either C1 control characters (garbage) or nonsensical pairings of printable chars.
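(A minimal Python sketch, not part of the original tweets, showing what such misread bytes look like; the sample string is made up for illustration.)

    # What UTF-8 bytes look like when misread as Latin-1 (illustrative only).
    raw = "café €5".encode("utf-8")   # é -> 0xC3 0xA9, € -> 0xE2 0x82 0xAC

    misread = raw.decode("latin-1")   # Latin-1 maps every byte to some character
    print(misread)                    # 'cafÃ© â\x82¬5': the telltale 'Ã©' pairing
                                      # plus an invisible C1 control byte (0x82)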
-
UTF-8 really has the _maximal possible_ properties to help you out when you have to guess, without breaking non-negotiable requirements.
-
Have you never seen a string interpreted as Latin-1 when it was UTF-8, or the other way around?
-
The above 4 tweets literally just explained how the properties of UTF-8 make that easy to avoid, if heuristics are acceptable.
-
If you have to guess at the encoding, any string that parses as UTF-8 is UTF-8. Otherwise you need more elaborate heuristics or a fixed fallback.
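(A rough Python sketch of that rule, not from the thread; the function name guess_decode and the Latin-1 fallback are just illustrative assumptions.)

    def guess_decode(raw: bytes) -> str:
        # Strict UTF-8 first: if the bytes parse, they are almost certainly UTF-8.
        try:
            return raw.decode("utf-8")
        except UnicodeDecodeError:
            # Otherwise apply a fixed fallback (or run smarter heuristics here).
            # Latin-1 accepts every byte value, so this decode never fails.
            return raw.decode("latin-1")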
-
FWIW, this approach is used successfully in most modern IRC clients.