There's going to be some brief API downtime today (~20 minutes) because I don't have time for frou-frou failover to the backup server; I have to replace some hard drives and then get the hell over the Sierra Nevada in a rental Mazda full of your data before the blizzard hits.
Here's 29 TB of your precious bookmarks ceding the passenger seat to a Japanese robot toilet. pic.twitter.com/bn5N5rsZOd
API should be down for the time it takes me to replace 8 hard drives, realize I screwed them in backwards, screw them in again, plug everything back in, and then fix some Ubuntu boot loader issue that I've never seen before and will have to research in a cold sweat. Back ASAP! pic.twitter.com/jposQmbh45
Hey nerd herd, how come it's 2021 and Supermicro servers take so long to boot? The actual Linux boot time off SSD is minimal, but they seem to spend many minutes just ruminating on BIOS things at each reboot. Do we (I) have to live like this, or is there a fix?
OK, API should be back up. I gotta go skedaddle over the Donner Pass right before the snows hit—always an excellent plan. I'll check any error reports once I get over into Nevada, the one state where it never snows.
By the way, if anyone has clever ideas about how to move 80 TB of data more virtually, or wants an exciting unpaid internship at Pinboard ops, I'm all ears. Last time I did a full off-site backup I was chased out of California by fire, this time by ice.
For people wondering why I'm making mid-pandemic road trips with big boxes of disks, the problem boils down to an oddity of progress in tech. You can now store absurd amounts of data cheaply, but it's still hard to move it around in bulk (both inside and outside the computer)
So the box in the photo above has 29 TB of user data. Pinboard has a 100 Mbps connection to the outside world; if I used 100% of that capacity, I could theoretically back up about 1 TB/day to a remote undisclosed location, so it would take about a month to move this data
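The arithmetic above checks out as a back-of-the-envelope sketch (using the 100 Mbps and 29 TB figures from the thread, and assuming 100% link utilization, which real backups never achieve):

```python
# Back-of-the-envelope transfer-time estimate for the figures in the thread.
LINK_MBPS = 100   # uplink, in megabits per second
DATA_TB = 29      # user data to move, in terabytes

# Convert link speed to terabytes per day: bits -> bytes, seconds -> days.
bytes_per_day = LINK_MBPS / 8 * 1e6 * 86_400
tb_per_day = bytes_per_day / 1e12

days = DATA_TB / tb_per_day
print(f"{tb_per_day:.2f} TB/day -> {days:.0f} days")  # prints "1.08 TB/day -> 27 days"
```

So even a perfectly saturated pipe needs roughly a month, which is why the box goes in the passenger seat instead.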
Except in practice, writing to the hard drives is slower than even that slow network connection, because they have to be in a configuration where if some of them fail, the data is not lost. So the whole process is like drinking a swimming pool through a straw.
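The thread doesn't say which redundancy layout Pinboard actually uses; as an illustration of why redundant writes are slow, here's the classic small-write penalty for a dual-parity (RAID 6) array, where each random write costs six disk operations (read old data and both parities, then write new data and both parities):

```python
# Illustrative RAID 6 small-write penalty: 6 disk I/Os per logical write.
def effective_write_iops(raw_iops_per_disk: float, n_disks: int,
                         write_penalty: int = 6) -> float:
    """Aggregate random-write IOPS after the parity-update penalty."""
    return raw_iops_per_disk * n_disks / write_penalty

# 8 spinning disks at ~150 IOPS each manage far less than the 1200 raw IOPS:
print(effective_write_iops(150, 8))  # prints 200.0
```

Large sequential writes fare better (full-stripe writes skip the read step), but the general point stands: the redundancy that protects the data also shrinks the straw.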
Over the years, we've made the pool way bigger, but the straw hasn't grown much. This holds true at every level—the CPU, the storage system, the data center. So 98% of modern programming is figuring out how to get around limits on moving ginormous amounts of data quickly
Many smart people spend their careers on this. Some solutions include: being really smart about figuring out only what's changed, not the whole enchilada. Or copying stuff to multiple places. Or just paying a king's ransom for the biggest straw you can get.
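The first of those ideas, copying only what's changed, can be sketched in a few lines: hash every file and skip anything whose digest matches the previous run. (This is a toy illustration of the technique, not Pinboard's actual backup tooling; real tools like rsync go further and transfer only the changed blocks within a file.)

```python
# Minimal "only copy what's changed" sketch: compare content hashes
# against a manifest from the last backup run.
import hashlib
import pathlib

def file_digest(path: pathlib.Path) -> str:
    """SHA-256 of a file's contents, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(root: pathlib.Path, manifest: dict) -> list[pathlib.Path]:
    """Return files under root whose contents differ from the manifest,
    updating the manifest in place for the next run."""
    out = []
    for p in sorted(root.rglob("*")):
        if p.is_file():
            d = file_digest(p)
            if manifest.get(str(p)) != d:
                out.append(p)
            manifest[str(p)] = d
    return out
```

On the first run everything is "changed"; afterwards only genuinely modified files need to cross the slow link.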
Or if you're not so smart, you can rent a fancy Mazda (that detects stop signs!) and drive your backups and your Japanese robot toilets through the basin and range in the snow. Half the cars on the road are full of hard disks and Japanese toilets. Look around you.