diff --git a/README.MD b/README.MD
new file mode 100644
index 0000000000000000000000000000000000000000..ec67e8d761eff8ac104e9efdbfcac0d0e2075e0d
--- /dev/null
+++ b/README.MD
@@ -0,0 +1,6 @@
+# ROADMAP
+[x] RAID{0,1,6,10}
+[ ] RAID implementations in ZFS
+[ ] Cache systems in ZFS
+___
+[ ] ZFS for DB or lvm+ext4
diff --git a/ZFS-RAIDs.md b/ZFS-RAIDs.md
index fa6b8774f31004b7578c726dd7e1eb68cec500d8..e922bfcc3b2bcde219b4890405fbc5b315b74838 100644
--- a/ZFS-RAIDs.md
+++ b/ZFS-RAIDs.md
@@ -12,7 +12,7 @@
 : Replicates *logical disk* volumes onto multiple physical disks, so the same information is stored on different hard disks in real time

 ## RAID0
-
+
 RAID0 splits data across a multiple-disk array. The ideal setup uses equally-sized disks, since the total storage available in a RAID0 arrangement equals the capacity of the smallest disk times the number of disks: in an array of one 120 GB disk and one 360 GB disk, the total storage available would be 240 GB (see the capacity sketch at the end of this file).
@@ -21,8 +21,24 @@ RAID0 creates stripes of data so disk operations are n-times faster, n being the
 Besides speed, RAID0 is also a good system for building large storage units out of fewer disks: every disk in the array holds unique information, so with equally-sized units, 100% of the physical capacity is used as storage.

 ## RAID1
-
+
 RAID1 mirrors sets of data on **two or more** physical disks at a time. As in RAID0, this RAID level doesn't offer any *parity*, and the usable size is that of the smallest disk, replicated onto all the other disks. In RAID1 there is no data striping, since all data is replicated in full on every disk. Read operations can be served by any disk in the array, which helps read performance and data reliability, but write performance and total storage capacity suffer.
+
+## RAID6
+
+RAID6 uses *double parity* striped across the whole array, so it is always possible to lose two disks without losing any information. It has to compute two different parities, p and q, and store them on distinct units (sketched below); because of this, RAID6 pays roughly double the write overhead of a single-parity implementation such as RAID5. RAID6 has no read performance penalty, though.
+
+RAID6 is a very good system to use when reliability and availability of data matter more than performance.
+
+## RAID10
+
+RAID10 layers RAID0 on top of RAID1 (a stripe of mirrors), combining the performance boost of RAID0 with the data reliability of RAID1. RAID10 needs at least two 'tanks' of disks arranged in a RAID1 configuration, replicating all data on every disk of each tank. It then takes these tanks and arranges them in a RAID0 setup, so all data is striped and written segment by segment across the tanks.
+
+RAID10 boosts both read and write performance while allowing one disk in each tank to fail at the same time without any data loss. It does this by spending two physical blocks of storage for every logical block, so it will always use half of the total raw disk capacity.
+
+It's a good system when total storage matters less than performance and reliability.
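+
+As a back-of-the-envelope illustration of the RAID0 capacity rule above, here is a minimal Python sketch. The helper names are hypothetical, not taken from ZFS, mdadm, or any real tool:
+
+```python
+def raid0_capacity(disk_sizes_gb):
+    """Usable capacity = smallest disk times the number of disks."""
+    return min(disk_sizes_gb) * len(disk_sizes_gb)
+
+def raid0_locate(logical_block, n_disks):
+    """Round-robin striping: block i lands on disk i % n, at stripe i // n."""
+    return logical_block % n_disks, logical_block // n_disks
+
+print(raid0_capacity([120, 360]))  # 240, matching the 120 GB + 360 GB example
+print(raid0_locate(5, 2))          # block 5 -> (disk 1, stripe 2)
+```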
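+
+The p and q parities from the RAID6 section can be sketched per byte with the common GF(2^8) construction: p is a plain XOR, while q weights each data block by a power of a generator, so two unknowns can be solved back from the two equations. This is only an illustration of the math, not how any particular RAID6 implementation is organized:
+
+```python
+def gf_mul(a, b):
+    """Multiply in GF(2^8) modulo the polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d)."""
+    r = 0
+    while b:
+        if b & 1:
+            r ^= a
+        a <<= 1
+        if a & 0x100:
+            a ^= 0x11d
+        b >>= 1
+    return r
+
+def raid6_parity(data_bytes):
+    """Return (p, q) for one byte per data disk: p = XOR d_i, q = XOR g^i * d_i."""
+    p, q, g_i = 0, 0, 1
+    for d in data_bytes:
+        p ^= d
+        q ^= gf_mul(g_i, d)
+        g_i = gf_mul(g_i, 2)  # advance to the next power of the generator g = 2
+    return p, q
+
+# p and q live on two distinct disks; losing one data byte is repaired with p
+# alone (plain XOR), losing two needs both equations.
+print([hex(x) for x in raid6_parity([0x11, 0x22, 0x33, 0x44])])
+```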
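+
+Finally, the RAID10 trade-off described above, assuming two-disk tanks (again, hypothetical helper names):
+
+```python
+def raid10_capacity(disk_size_gb, n_disks):
+    """Equal disks paired into two-way mirror tanks, then striped: half is usable."""
+    assert n_disks >= 4 and n_disks % 2 == 0
+    return disk_size_gb * n_disks // 2
+
+def raid10_survives(failed, tanks):
+    """The array survives as long as no tank loses all of its disks."""
+    return all(not tank <= failed for tank in tanks)
+
+tanks = [{"a1", "a2"}, {"b1", "b2"}]
+print(raid10_capacity(120, 4))               # 240 GB usable out of 480 GB raw
+print(raid10_survives({"a1", "b2"}, tanks))  # True: one disk failed per tank
+print(raid10_survives({"a1", "a2"}, tanks))  # False: tank 'a' lost both disks
+```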
diff --git a/RAID0.png b/imgs/RAID0.png
similarity index 100%
rename from RAID0.png
rename to imgs/RAID0.png
diff --git a/RAID1.png b/imgs/RAID1.png
similarity index 100%
rename from RAID1.png
rename to imgs/RAID1.png
diff --git a/imgs/RAID10.png b/imgs/RAID10.png
new file mode 100644
index 0000000000000000000000000000000000000000..17b1308fa499cacffa200ff43c46b5afadcaaa0a
Binary files /dev/null and b/imgs/RAID10.png differ
diff --git a/imgs/RAID6.png b/imgs/RAID6.png
new file mode 100644
index 0000000000000000000000000000000000000000..05a1cfeca5dd965d8a8b10875d36981c55b597ea
Binary files /dev/null and b/imgs/RAID6.png differ