Downtime due to DNS is the most unnecessary kind. Caching resolvers should serve stale records for hot names by default, and sites should have secondary DNS.
Running and defending a DNS server is not easy (hugs to @Dyn), trust me I know. But sites should not go down like this. It's frustrating.
There is no reason a SERVFAIL is a better answer than a stale cached record. Secondaries are easy. DNS is essentially a resiliency best case.
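The "stale beats SERVFAIL" idea is easy to sketch. Below is a minimal, hypothetical in-process cache (the name `StaleCache` and its shape are made up for illustration; a real implementation would live in the recursive resolver, not the application): serve a fresh answer from cache when possible, re-resolve when the TTL has expired, and fall back to the stale answer if the upstream lookup fails.

```python
import socket
import time

class StaleCache:
    """Toy resolver cache that prefers stale answers over hard failure."""

    def __init__(self, ttl=3600):
        self.ttl = ttl      # seconds an answer is considered fresh
        self.entries = {}   # name -> (ips, inserted_at)

    def resolve(self, name):
        now = time.time()
        cached = self.entries.get(name)
        if cached and now - cached[1] < self.ttl:
            return cached[0]  # still fresh: answer from cache
        try:
            ips = sorted({ai[4][0] for ai in socket.getaddrinfo(name, None)})
            self.entries[name] = (ips, now)
            return ips
        except socket.gaierror:
            if cached:
                # Upstream lookup failed: a stale answer beats SERVFAIL.
                return cached[0]
            raise
```

This is exactly the behavior RFC-style "serve stale" describes: the failure mode changes from "site unreachable" to "site reachable at a possibly outdated IP", which is what any operator would choose.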
Take this intuition. Now ask, "why doesn't my DNS resolver just remember the IPs that worked 1h ago?" NO GOOD REASON https://twitter.com/homakov/status/789508143130742784
Any website would take stale IPs over downtime. Nobody relies on fast DNS changes anyway; everything is cacheable (DNS is best-case). </rant>
We can fix this and another problem at the same time by putting a caching + DNSSEC-validating DNS proxy on the endpoint (the client).
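Unbound is one resolver that can play this role on an endpoint today. A minimal configuration sketch (the option names are real Unbound options; the file path and TTL value are illustrative):

```
server:
    # Listen only on the local machine.
    interface: 127.0.0.1

    # Validate DNSSEC against the root trust anchor.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"

    # Serve expired cached records instead of failing when
    # upstream servers are unreachable.
    serve-expired: yes
    serve-expired-ttl: 86400
```

With this on the client, upstream DNS outages degrade to stale answers instead of failures, and validation happens on the machine that actually uses the records.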