Whee, almost lost ~20% of the data on this filesystem two days before I intended to finally upgrade my offsite backup host to be able to back it up completely. Murphy is one hell of an asshole.
Narrowly avoided disaster with the "re-create RAID array with the same parameters, assume clean, fsck and cross fingers" approach. Corrupted parity from a transient glitch had done major damage to the FS.
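The recovery approach mentioned above roughly looks like the following. This is a hedged sketch, not the exact commands used: device names, RAID level, chunk size, and layout are placeholders — the real values must be recovered from a surviving superblock, and getting the device order or parameters wrong destroys data.

```shell
# Recover the original array parameters from a surviving member's
# superblock (level, chunk size, layout, device order, data offset):
mdadm --examine /dev/sdb1

# Re-create the array with the SAME parameters. --assume-clean skips
# the initial resync, so existing parity is not rewritten — critical
# when you are not yet sure the recreation parameters are right:
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --chunk=512 --layout=left-symmetric --assume-clean \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Check the filesystem read-only (-n) before trusting or writing
# anything; only run a repairing fsck once the data looks sane:
fsck.ext4 -n /dev/md0
```

If the filesystem looks wrong at this point, stop the array and try again with a different device order or parameters — nothing has been written yet.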
I'm getting tired of RAID and traditional filesystems. Stuff doesn't scale. The backup host is going to be moving to Ceph (single-host Ceph, but hey, why not) and if that works out I might migrate my home NAS to that too.
Replying to @marcan42
Why would you run a single-host Ceph cluster? To gain experience? IMO mdraid is more reliable and easier to manage. Good luck!
Replying to @purpleidea
mdraid does not scale. Rebuilds take forever, there's no intelligence, the write-intent bitmap is nice but coarse, filesystem integrity is a bitch, there are no checksums, and no ability to mix disparate-sized disks. I have a 20TB md-raid array now and have had too many close calls.
Replying to @marcan42
It would be interesting to see your opinion after using Ceph for a year or so (have a blog?) in your single mode scenario. My bet is you'll be less happy, but good luck either way! (I'm assuming CephFS?) Here's an old "easy ceph setup" I did. HTH https://purpleidea.com/blog/2015/12/28/trying-out-ceph-with-oh-my-vagrant/
FWIW I already run two Ceph clusters in production, but it will be interesting having one or two personal setups (that I can afford downtime on).
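For context, the main knob that makes single-host Ceph workable is telling CRUSH to replicate across OSDs (disks) rather than hosts, since there is only one host. A minimal sketch of the relevant `ceph.conf` settings — the fsid and monitor address are placeholders, and this is an assumed setup, not the author's actual config:

```ini
[global]
fsid = <cluster-uuid>
mon host = 127.0.0.1

# Single host: choose replica placement at the OSD (disk) level
# instead of the default host level, or PGs can never become clean:
osd crush chooseleaf type = 0

# Replicate across two disks; allow I/O with one replica during
# a disk failure or rebuild:
osd pool default size = 2
osd pool default min size = 1
```

With this, losing a disk degrades the pools but the cluster keeps serving data, which is the property the thread is after — per-object checksums and recovery come along for free.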