Convenience and cost trumped everything, because no one understood (or believed there would be) the consequences of handing over their data and habits to "free" services. The drug-pusher model got them hooked, and now here we all are as a society...
Home Server was the future we could have had without being locked into all sorts of clouds that use us as the product. Imagine it with UWP technology today, etc. A utopian concept nowadays, sadly...
4 replies 9 retweets 34 likes
Replying to @davepermen @SwiftOnSecurity and others
Let's not forget that the product, which I loved, was affected by data corruption bugs, which convinced me to build a ZFS home server that's still running.
1 reply 0 retweets 3 likes
Replying to @_paperino @SwiftOnSecurity and others
Fixed now, thanks to the proper solution they implemented, partly inspired by it. That now works server- and server-farm-wide, and it's still awesome. It was a product of inspiration.
1 reply 0 retweets 2 likes
Replying to @davepermen @_paperino and others
The erasure coding that was at the heart of Home Server is used by the entire industry now. If you have data in the cloud, chances are very high it's protected by that bit of research, which came to market in WHS first. RAID was on life support long ago.
4 replies 6 retweets 23 likes
Replying to @CarmenCrincoli @SwiftOnSecurity and others
RAID is just a special case of erasure coding; some EC profiles in use are actually equivalent to RAID. The bigger win, IMO, is the intelligent sharding and recovery of data chunks across large clusters (or pools of disks). EC by itself is really just a fancier RAID level.
1 reply 0 retweets 2 likes
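To make the "special case" point above concrete: here's a minimal Python sketch (illustrative only, not any product's implementation) of RAID-5-style single parity expressed as the simplest possible erasure code, an (n = k + 1, k) code that survives the loss of any one chunk:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks: list[bytes]) -> bytes:
    """Compute the single parity chunk over equal-length data chunks."""
    parity = bytes(len(chunks[0]))  # all-zero start
    for chunk in chunks:
        parity = xor_bytes(parity, chunk)
    return parity

def recover(survivors: list[bytes]) -> bytes:
    """Rebuild the one missing chunk (data or parity) from the rest.
    Works because XOR is its own inverse."""
    return encode(survivors)

# Hypothetical 3-chunk stripe; lose the middle chunk and rebuild it.
chunks = [b"AAAA", b"BBBB", b"CCCC"]
parity = encode(chunks)
assert recover([chunks[0], chunks[2], parity]) == chunks[1]
```

Fancier RAID levels and EC profiles generalize this by adding more parity symbols computed over a finite field instead of a single XOR.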
Replying to @marcan42 @CarmenCrincoli and others
Really the math behind all of this is ancient; we've just figured out better ways to apply it than the old "one disk, one filesystem" model that RAID piggybacked on.
1 reply 1 retweet 4 likes
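The math really is ancient: encoding is polynomial evaluation and erasure recovery is Lagrange interpolation, which is the heart of Reed-Solomon. A hedged sketch of that idea follows; it uses the prime field GF(257) to keep the arithmetic readable (production systems typically use GF(2^8)), and the parameters are illustrative assumptions, not from the thread:

```python
P = 257  # a prime; all arithmetic below is in the field GF(257)

def encode(data: list[int], n: int) -> list[int]:
    """Treat k data symbols as coefficients of a degree-(k-1)
    polynomial and evaluate it at n distinct points. Any k of the
    n outputs determine the polynomial, so up to n - k symbols
    can be lost."""
    def poly(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(data)) % P
    return [poly(x) for x in range(n)]

def recover(points: list[tuple[int, int]], x_missing: int) -> int:
    """Lagrange-interpolate k surviving (x, y) pairs to rebuild the
    symbol at x_missing."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x_missing - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

data = [42, 7, 99]                      # k = 3 data symbols
code = encode(data, n=5)                # n = 5: tolerates any 2 erasures
survivors = [(0, code[0]), (2, code[2]), (4, code[4])]  # x = 1, 3 lost
assert recover(survivors, 1) == code[1]
assert recover(survivors, 3) == code[3]
```

What changed isn't the math but where it's applied: across chunks sharded over a whole cluster rather than blocks on a handful of local disks.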
Replying to @marcan42 @SwiftOnSecurity and others
LRC is a very special case. Having locality for recovery operations is what makes modern coding so critical to cloud ops. Reed-Solomon hit a wall years ago because it didn't solve locality, and thus speed.
1 reply 0 retweets 0 likes
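Some back-of-the-envelope numbers on the locality point (the parameters below come from the Azure LRC(12, 2, 2) design the thread links to later; the helper functions are my own illustrative sketch). With classic RS, rebuilding one lost chunk means reading k survivors over the network; LRC adds per-group local parities so a single failure is repaired from its local group alone:

```python
def rs_single_failure_reads(k: int) -> int:
    """Reed-Solomon: any reconstruction needs k surviving symbols."""
    return k

def lrc_single_failure_reads(k: int, local_groups: int) -> int:
    """LRC: a lone failure is rebuilt inside its local group, using
    the group's remaining data chunks plus its local parity."""
    return k // local_groups

# RS(k=12, m=4): one dead disk costs 12 network reads to rebuild.
print(rs_single_failure_reads(12))      # -> 12
# LRC(12, 2, 2): 12 data chunks in 2 local groups of 6, each with a
# local parity, plus 2 global parities: the same failure costs 6 reads.
print(lrc_single_failure_reads(12, 2))  # -> 6
```

That halved reconstruction I/O is the "locality, thus speed" argument: at cluster scale most failures are single failures, so the common case gets cheap while the global parities still cover the rare multi-failure case.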
Replying to @CarmenCrincoli @SwiftOnSecurity and others
Do you have a citation for this approach? I'm aware of multiple solutions for RS's issues with read amplification in failure scenarios, but other than that RS is still the baseline and good on its own (and often used as the foundation for more complex schemes).
1 reply 0 retweets 1 like
Replying to @marcan42 @SwiftOnSecurity and others
Here's a blog post. There's a full research paper from the mid-'00s that I don't have a link to on my phone; I can find it later. https://www.microsoft.com/en-us/research/blog/better-way-store-data/
1 reply 1 retweet 6 likes
I'll check out the paper once you get a chance to find it. I think you may be overestimating the prevalence of LRC, though. RS is arguably still the baseline standard, and other approaches to the problem exist (e.g. SHEC in Ceph and Google's nested coding).
Replying to @marcan42 @SwiftOnSecurity and others
I'll admit I haven't looked at this much in the last few years, so I'd believe that. I was under the impression that there are derivative alternatives that solve the same problem: rebuilding data with as little network activity as possible, because the disk sets are so distributed.
1 reply 0 retweets 0 likes
Replying to @CarmenCrincoli @marcan42 and others
Here's the best one I can find right now. Look at Cheng Huang's entire research history, though. It's basically all dedicated to this problem. https://www.microsoft.com/en-us/research/publication/rethinking-erasure-codes-cloud-file-systems-minimizing-io-recovery-degraded-reads/
0 replies 0 retweets 1 like
End of conversation