PDL ABSTRACT

DiskReduce: RAID for Data-Intensive Scalable Computing

4th Petascale Data Storage Workshop held in conjunction with Supercomputing '09, November 15, 2009, Portland, Oregon. Supersedes Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-09-112, November 2009.

Bin Fan, Wittawat Tantisiriroj, Lin Xiao, Garth Gibson

School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Data-intensive file systems, developed for Internet services and popular in cloud computing, provide high reliability and availability by replicating data, typically keeping three copies of everything. Alternatively, high performance computing, which operates at comparable scale, and smaller-scale enterprise storage systems achieve similar tolerance for multiple failures from lower-overhead erasure coding, or RAID, organizations. DiskReduce is a modification of the Hadoop distributed file system (HDFS) enabling asynchronous compression of initially triplicated data down to RAID-class redundancy overheads. In addition to increasing a cluster's storage capacity as seen by its users by up to a factor of three, DiskReduce can delay encoding long enough to deliver the performance benefits of multiple data copies.
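The capacity claim follows from simple overhead arithmetic: triplication stores three raw bytes per user byte, while a RAID-style stripe stores data plus a small amount of parity. The sketch below illustrates this with a hypothetical 8-data, 2-parity stripe; the stripe geometry is an illustrative assumption, not a parameter taken from the paper.

```python
# Illustrative capacity arithmetic for replication vs. erasure coding.
# The 8+2 stripe geometry below is a hypothetical example, not a
# configuration specified by DiskReduce itself.

def overhead_replication(copies: int) -> float:
    """Raw bytes stored per user byte under n-way replication."""
    return float(copies)

def overhead_raid(data_blocks: int, parity_blocks: int) -> float:
    """Raw bytes stored per user byte under a data+parity stripe."""
    return (data_blocks + parity_blocks) / data_blocks

triplication = overhead_replication(3)   # 3.0 raw bytes per user byte
raid_style = overhead_raid(8, 2)         # 1.25 for an 8+2 stripe

# User-visible capacity grows by the ratio of the two overheads.
improvement = triplication / raid_style  # 2.4x for this stripe size
```

As the stripe widens (more data blocks per parity block), the erasure-coding overhead approaches 1.0 and the improvement approaches the full factor of three mentioned in the abstract.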

FULL PAPER: pdf
© 2017. Last updated 15 March, 2012