Do you need a file server on the cheap that is easy to set up, “rock solid” reliable, and comes with email alerting? We will show you how to use Ubuntu, software RAID, and SaMBa to accomplish just that.
Overview
Despite the recent buzz to move everything to the “almighty” cloud, sometimes you may not want your information on someone else’s server, or it may just be unfeasible to download the volumes of data that you require from the internet every time (for example, image deployment). So before you clear out a place in your budget for a storage solution, consider a configuration that is licensing-free with Linux.
With that said, going cheap/free does not mean “throwing caution to the wind”, and to that end, we will note points to be aware of, configurations that should be set in place in addition to using software RAID, to achieve the maximum price to reliability ratio.
Image by Filomena Scalise
About software RAID
As the name implies, this is a RAID (Redundant Array of Inexpensive Disks) setup that is done completely in software instead of using a dedicated hardware card. The main advantage of such a setup is cost, as the dedicated card is an added premium to the base configuration of the system. The main disadvantages are performance and some reliability, as such a card usually comes with its own RAM and CPU to perform the calculations required for the redundancy math, data caching for increased performance, and an optional backup battery that keeps unwritten operations in the cache until power has been restored in case of a power outage.
With a software RAID setup, you’re sacrificing some of the system’s CPU performance in order to reduce total system cost; however, with today’s CPUs the overhead is relatively negligible (especially if you’re going to mainly dedicate this server to be a “file server”). As far as disk performance goes, there is a penalty… however, I have never encountered a bottleneck from the server’s disk subsystem severe enough to note how profound it is. The Tom’s Hardware guide “Tom’s goes RAID5” is an oldie-but-goodie exhaustive article on the subject, which I personally use as a reference; just take the benchmarks with a grain of salt, as it covers the Windows implementation of software RAID (as with everything else, I’m sure Linux is much better :P).
Prerequisites
- Patience young one, this is a long read.
- It is assumed you know what RAID is and what it is used for.
- This guide was written using Ubuntu Server 9.10 x64, therefore it is assumed that you have a Debian-based system to work with as well.
- You will see me use VIM as the editor program; this is just because I’m used to it… you may use any other editor that you’d like.
- The Ubuntu system I used for writing this guide was installed on a disk-on-key. Doing so allowed me to use sda1 as part of the RAID array, so adjust accordingly to your setup.
- Depending on the type of RAID you want to create, you will need at least two disks on your system; in this guide we are using five drives for the array.
Choosing the disks that make the array
“The first step in avoiding a trap is knowing of its existence.” (Thufir Hawat, Dune)
Choosing the disks is a vital step that should not be taken lightly, and you would be wise to capitalize on yours truly’s experience and heed this warning:
Do NOT use “consumer grade” drives to create your array, use “server grade” drives!!!!!!
Now I know what you’re thinking: didn’t we say we are going to go on the cheap? Yes we did, but this is exactly one of the places where doing so is reckless and should be avoided. Despite their attractive price, consumer grade hard drives are not designed for 24/7 “on” use. Trust me, yours truly has tried this for you. At least four consumer grade drives in the 3 servers I have set up like this (due to budget constraints) failed after about 1.5 ~ 1.8 years from the server’s initial launch day. While there was no data loss, because the RAID did its job well and survived… moments like this shorten the life expectancy of the sysadmin, not to mention downtime for the company for the server maintenance (something which may end up costing more than the higher grade drives).
Some may say that there is no difference in failure rate between the two types. That may be true; however, despite these claims, server grade drives still have stricter S.M.A.R.T. thresholds and more QA behind them (as can be observed by the fact that they are not released to the market as soon as the consumer drives are), so I still highly recommend that you fork out the extra $$$ for the upgrade.
Choosing the RAID level
While I’m not going to go into all of the options available (this is very well documented in the RAID Wikipedia entry), I do feel that it is noteworthy to say that you should always opt for at least RAID 6 or even higher (we will be using Linux RAID10). This is because when a disk fails, there is a higher chance of a neighboring disk failing, and then you have a “two disk” failure on your hands. Moreover, if you’re going to use large drives, the chance of failure is higher, as larger disks have a higher data density on the platter’s surface. IMHO, disks of 2 TB and beyond will always fall into this category, so be aware.
Let’s get cracking
Partitioning disks
While in GNU/Linux we could use the entire block device for storage needs, we will use partitions because it makes it easier to use disk rescue tools in case the system has gone bonkers. We are using the “fdisk” program here, but if you’re going to use disks larger than 2 TB you will need a partitioning program that supports GPT partitioning, like parted.
sudo fdisk /dev/sdb
Note: I have observed that it is possible to make the array without changing the partition type, but because this is the way described all over the net, I’m going to follow suit (again, when using the entire block device this is unnecessary).
Once in fdisk the keystrokes are:
n ; for a new partition
enter
p ; for a primary partition
enter
1 ; number of partition
enter ; accept the default
enter ; accept the default
t ; to change the type
fd ; sets the type to be “Linux raid autodetect”
w ; write changes to disk and exit
Rinse and repeat for all the disks that will be part of the array.
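If you have many disks, the interactive keystrokes above get tedious, and the same layout can be scripted. The sketch below is a dry run only (it echoes the command for each disk instead of running it, and the disk names are assumptions matching this guide); it uses sfdisk, which accepts a partition description on standard input.

```shell
# Dry run: prints the partitioning command for each assumed member disk.
# Remove the leading "echo" (keeping the printf pipe) to apply for real.
for d in sdb sdc sdd sde; do
    # one primary partition spanning the disk, type fd (Linux raid autodetect)
    echo "printf 'label: dos\\n,,fd\\n' | sudo sfdisk /dev/$d"
done
```

Review the echoed commands carefully before running anything destructive against real disks.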
Creating a Linux RAID10 array
The advantage of using “Linux raid10” is that it knows how to take advantage of an odd number of disks to boost performance and resiliency even further than vanilla RAID10, in addition to the fact that the “10” array can be created in one single step.
Create the array from the disks we have prepared in the last step by issuing:
sudo mdadm --create /dev/md0 --chunk=256 --level=10 -p f2 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --verbose
Note: This is all just one line despite the fact that the representation breaks it into two.
Let’s break the parameters down:
- “--chunk=256” – The size, in kilobytes, that the RAID stripes are broken into; this size is recommended for new/large disks (the 2 TB drives used to make this guide were without a doubt in that category).
- “--level=10” – Uses the Linux raid10 driver (if a traditional RAID 10 is required, for whatever reason, you would have to create two RAID 1 arrays and join them).
- “-p f2” – Uses the “far” rotation plan (see the note below for more info), and the “2” tells it that the array will keep two copies of the data.
Note: We use the “far” plan because it causes the physical data layout on the disks to NOT be the same. This helps to overcome the situation where the hardware of one of the drives fails due to a manufacturing fault (and don’t think “this won’t happen to me”, like yours truly did). Because two disks of the same make and model have been used in the same fashion and have traditionally been keeping the data in the same physical location, the risk exists that the drive holding the copy of the data has failed too, or is close to it, and will not provide the required resiliency until a replacement disk arrives. The “far” plan puts the copy of the data in a completely different physical location on the copy drives, in addition to using disks that are not close to each other within the computer case. More information can be found here and in the links below.
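As a quick sanity check on what the parameters above should yield (assuming five 2 TB members and two copies of the data, as in this guide), the usable capacity of a Linux raid10 array is roughly the number of drives times the drive size, divided by the number of copies:

```shell
# Rough usable-capacity estimate for raid10 (ignores metadata overhead).
awk 'BEGIN { drives = 5; size_tb = 2; copies = 2
             print drives * size_tb / copies " TB usable" }'
# prints "5 TB usable"
```

This matches the roughly 3.5 TB+ of continuous space requested later in this guide, with room to spare.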
Once the array has been created, it will start its synchronization process. While you may wish to wait for tradition’s sake (as this may take a while), you can start using the array immediately.
The progress can be observed using:
watch -d cat /proc/mdstat
Create the mdadm.conf Configuration File
While it has been proven that Ubuntu simply knows how to scan and activate the array automatically on startup, for completeness’ sake and as a courtesy to the next sysadmin, we will create the file. Your system doesn’t automatically create it, and trying to remember all the components/partitions of your RAID set is a waste of the system admin’s sanity. This information can, and should, be kept in the mdadm.conf file. The formatting can be tricky, but fortunately the output of the mdadm --detail --scan --verbose command provides you with it.
Note: It has been said that: “Most distributions expect the mdadm.conf file in /etc/, not /etc/mdadm. I believe this is a “ubuntu-ism” to have it as /etc/mdadm/mdadm.conf”. Due to the fact that we are using Ubuntu here, we will just go with it.
sudo mdadm --detail --scan --verbose > /etc/mdadm/mdadm.conf
IMPORTANT! You need to remove one “0” from the newly created file, because the syntax resulting from the command above isn’t completely correct.
If you want to see the problem that this wrong configuration causes, you can issue the “scan” command at this point, before making the adjustment:
mdadm --examine --scan
To overcome this, edit the file /etc/mdadm/mdadm.conf and change:
metadata=00.90
To read:
metadata=0.90
Running the mdadm --examine --scan command now should return without an error.
Filesystem setup on the array
I used ext4 for this example because, for me, it built upon the familiarity of the ext3 filesystem that came before it, while promising better performance and features.
I suggest taking the time to investigate what filesystem better suits your needs and a good start for that is our “Which Linux File System Should You Choose?” article.
sudo mkfs.ext4 /dev/md0
Note: In this case I didn’t partition the resulting array because I simply didn’t need to at the time, as the requesting party specifically requested at least 3.5T of continuous space. With that said, had I wanted to create partitions, I would have had to use a GPT-capable partitioning utility like “parted”.
Mounting
Create the mount point:
sudo mkdir /media/raid10
Note: This can be any location, the above is only an example.
Because we are dealing with an “assembled device”, we will not use the filesystem’s UUID that is on the device for mounting (as recommended for other types of devices in our “what is the linux fstab and how does it work” guide), as the system may actually see part of the filesystem on an individual disk and try to incorrectly mount it directly. To overcome this, we want to explicitly wait for the device to be “assembled” before we try mounting it, and we will use the assembled array’s name (“md0”) within fstab to accomplish this.
Edit the fstab file:
sudo vim /etc/fstab
And add to it this line:
/dev/md0 /media/raid10/ ext4 defaults 1 2
Note: If you change the mount location or filesystem from the example, you will have to adjust the above accordingly.
Use mount with the automatic parameter (-a) to simulate a system boot, so you know that the configuration is working correctly and that the RAID device will be automatically mounted when the system restarts:
sudo mount -a
You should now be able to see the array mounted with the “mount” command with no parameters.
Email Alerts for the RAID Array
Unlike with hardware RAID arrays, with a software array there is no controller that starts beeping to let you know when something has gone wrong. Therefore, email alerts are going to be our only way of knowing if something happened to one or more disks in the array, making this the most important step.
Follow the “How To Setup Email Alerts on Linux Using Gmail or SMTP” guide and when done come back here to perform the RAID specific steps.
Confirm that mdadm can Email
The command below will tell mdadm to fire off just one email and close.
sudo mdadm --monitor --scan --test --oneshot
If successful, you should get an email detailing the array’s condition.
Set the mdadm configuration to send an Email on startup
While not an absolute must, it is nice to get an update from time to time from the machine, to let us know that the email ability is still working and of the array’s condition. You’re probably not going to be overwhelmed by emails, as this setting only affects startups (of which, on servers, there shouldn’t be many).
Edit the mdadm configuration file:
sudo vim /etc/default/mdadm
Add the --test parameter to the DAEMON_OPTIONS section so that it looks like:
DAEMON_OPTIONS='--syslog --test'
You may restart the machine just to make sure you’re “in the loop”, but it isn’t a must.
Samba Configuration
Installing SaMBa on a Linux server enables it to act like a Windows file server. So, in order to make the data we are hosting on the Linux server available to Windows clients, we will install and configure SaMBa.
It’s funny to note that the package name SaMBa is a pun on Microsoft’s file sharing protocol, SMB (Server Message Block).
In this guide the server is used for testing purposes, so we will enable access to its share without requiring a password, you may want to dig a bit more into how to setup permissions once setup is complete.
Also it is recommended that you create a non-privileged user to be the owner of the files. In this example we use the “geek” user we have created for this task. Explanations on how to create a user and manage ownership and permissions can be found in our “Create a New User on Ubuntu Server 9.10” and “The Beginner’s Guide to Managing Users and Groups in Linux” guides.
Install Samba:
sudo aptitude install samba
Edit the samba configuration file:
sudo vim /etc/samba/smb.conf
Add a share called “general” that will grant access to the mount point “/media/raid10/general” by appending the below to the file.
[general]
path = /media/raid10/general
force user = geek
force group = geek
read only = No
create mask = 0777
directory mask = 0777
guest only = Yes
guest ok = Yes
The settings above make the share addressable without a password to anyone and makes the default owner of the files the user “geek”.
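The share path itself must exist and be owned by the forced user before clients can write to it. A dry-run sketch (the path and the “geek” user are the example values from this guide; the commands are echoed so nothing changes until you drop the echo):

```shell
SHARE=/media/raid10/general
OWNER=geek
echo "sudo mkdir -p $SHARE"               # create the shared directory
echo "sudo chown $OWNER:$OWNER $SHARE"    # match the forced user/group
echo "sudo chmod 0777 $SHARE"             # in line with the permissive masks above
```

Adjust the path and owner if you changed them in your smb.conf.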
For your reference, this smb.conf file was taken from a working server.
Restart the samba service for the settings to take effect:
sudo /etc/init.d/samba restart
Once done, you can use the testparm command to see the settings applied to the samba server.
That’s it, the server should now be accessible from any Windows box using:
\\server-name\general
Troubleshooting
When you need to troubleshoot a problem or a disk has failed in an array, I suggest referring to the mdadm cheat sheet (that’s what I do…).
In general, you should remember that when a disk fails you need to “fail” and “remove” it from the array, shut down the machine, replace the failing drive with a new one, and then “add” the new drive to the array, after creating the appropriate disk layout (partitions) on it if necessary.
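As a sketch of that sequence (assuming the failed member is /dev/sdc1 in array /dev/md0; the commands are echoed as a dry run so nothing destructive happens here):

```shell
FAILED=/dev/sdc1
echo "sudo mdadm /dev/md0 --fail $FAILED"     # mark the member as failed
echo "sudo mdadm /dev/md0 --remove $FAILED"   # detach it from the array
# ...power down, swap the drive, recreate the fd-type partition, then:
echo "sudo mdadm /dev/md0 --add $FAILED"      # the rebuild starts automatically
```

Substitute your actual failed device before dropping the echoes.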
Once that’s done you may want to make sure that the array is rebuilding and watch the progress with:
watch -d cat /proc/mdstat
Good luck! :)
References:
mdadm cheat sheet
RAID levels break down
Linux RAID10 explained
mdadm command man page
mdadm configuration file man page
Partition limitations explained
Using software RAID won’t cost much… Just your VOICE ;-)
RAID (Redundant Array of Inexpensive Disks[1] or Drives, or Redundant Array of Independent Disks) is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both. This was in contrast to the previous concept of highly reliable mainframe disk drives referred to as 'single large expensive disk' (SLED).[2][3]
Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word 'RAID' followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives.
History
The term 'RAID' was invented by David Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987. In their June 1988 paper 'A Case for Redundant Arrays of Inexpensive Disks (RAID)', presented at the SIGMOD conference, they argued that the top performing mainframe disk drives of the time could be beaten on performance by an array of the inexpensive drives that had been developed for the growing personal computer market. Although failures would rise in proportion to the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of any large single drive.[4]
Although not yet using that terminology, the technologies of the five levels of RAID named in the June 1988 paper were used in various products prior to the paper's publication,[2] including the following:
- Mirroring (RAID 1) was well established in the 1970s including, for example, Tandem NonStop Systems.
- In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4.[5]
- Around 1983, DEC began shipping subsystem mirrored RA8X disk drives (now known as RAID 1) as part of its HSC50 subsystem.[6]
- In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named RAID 5.[7]
- Around 1988, the Thinking Machines’ DataVault used error correction codes (now known as RAID 2) in an array of disk drives.[8] A similar approach was used in the early 1960s on the IBM 353.[9][10]
Industry manufacturers later redefined the RAID acronym to stand for 'Redundant Array of Independent Disks'.[11][12][13][14]
Overview
Many RAID levels employ an error protection scheme called 'parity', a widely used method in information technology to provide fault tolerance in a given set of data. Most use simple XOR, but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois field or Reed–Solomon error correction.[15]
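The XOR idea can be illustrated with a couple of lines of shell arithmetic; the three one-byte “blocks” below are arbitrary toy values. The parity value is the XOR of the data blocks, so any one lost block can be rebuilt by XOR-ing the surviving blocks with the parity.

```shell
d1=65 d2=49 d3=87                  # three one-byte data blocks (toy values)
parity=$(( d1 ^ d2 ^ d3 ))         # what a dedicated parity drive would store
rebuilt=$(( d1 ^ d3 ^ parity ))    # pretend d2's drive died; XOR the survivors
echo "rebuilt=$rebuilt expected=$d2"
# prints "rebuilt=49 expected=49"
```

The same property holds byte-for-byte across whole stripes, which is exactly what single-parity levels like RAID 4 and RAID 5 rely on.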
RAID can also provide data security with solid-state drives (SSDs) without the expense of an all-SSD system. For example, a fast SSD can be mirrored with a mechanical drive. For this configuration to provide a significant speed advantage an appropriate controller is needed that uses the fast SSD for all read operations. Adaptec calls this 'hybrid RAID'.[16]
Standard levels
Storage servers with 24 hard disk drives and built-in hardware RAID controllers supporting various RAID levels
A number of standard schemes have evolved. These are called levels. Originally, there were five RAID levels, but many variations have evolved, notably several nested levels and many non-standard levels (mostly proprietary). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard:[17][18]
- RAID 0
- RAID 0 consists of striping, but no mirroring or parity. Compared to a spanned volume, the capacity of a RAID 0 volume is the same; it is the sum of the capacities of the disks in the set. But because striping distributes the contents of each file among all disks in the set, the failure of any disk causes all files, the entire RAID 0 volume, to be lost. A broken spanned volume at least preserves the files on the unfailing disks. The benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of disks because, unlike spanned volumes, reads and writes are done concurrently,[12] and the cost is complete vulnerability to drive failures. Indeed, the average failure rate is worse than that of an equivalent single non-RAID drive.
- RAID 1
- RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two drives, thereby producing a 'mirrored set' of drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its seek time and rotational latency), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning.[12]
- RAID 2
- RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive.[12] This level is of historical significance only; although it was used on some early machines (for example, the Thinking Machines CM-2),[19] as of 2014 it is not used by any commercially available system.[20]
- RAID 3
- RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive.[12] Although implementations exist,[21] RAID 3 is not commonly used in practice.
- RAID 4
- RAID 4 consists of block-level striping with dedicated parity. This level was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP.[22] The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole group of data drives, while in RAID 4 one I/O read operation does not have to spread across all data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers.[3]
- RAID 5
- RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, requiring all drives but one to be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks.[12] Like all single-parity concepts, large RAID 5 implementations are susceptible to system failures because of trends regarding array rebuild time and the chance of drive failure during rebuild (see 'Increasing rebuild time and failure probability' section, below).[23] Rebuilding an array requires reading all data from all disks, opening a chance for a second drive failure and the loss of the entire array.
- RAID 6
- RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced.[12] With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5.[24] RAID 10 also minimizes these problems.[25]
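The RAID 0 claim above, that the average failure rate is worse than that of a single drive, follows directly from independence: if each drive fails with probability p over some period, an n-drive stripe loses everything with probability 1 − (1 − p)^n. A quick sketch (the 5% per-drive figure is an arbitrary illustration, not a measured rate):

```shell
# Probability an n-drive RAID 0 loses data, given per-drive failure probability p.
raid0_fail() { awk -v p="$1" -v n="$2" 'BEGIN { printf "%.3f\n", 1 - (1 - p) ^ n }'; }
raid0_fail 0.05 1   # single drive: prints 0.050
raid0_fail 0.05 4   # four-drive stripe: prints 0.185
```

The same arithmetic, inverted, is why parity and mirroring levels can make an array far more reliable than any single member.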
Nested (hybrid) RAID
In what was originally termed hybrid RAID,[26] many storage controllers allow RAID levels to be nested. The elements of a RAID may be either individual drives or arrays themselves. Arrays are rarely nested more than one level deep.[27]
The final array is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the '+' (yielding RAID 10 and RAID 50, respectively).
- RAID 0+1: creates two stripes and mirrors them. If a single drive failure occurs then one of the stripes has failed, at this point it is running effectively as RAID 0 with no redundancy. Significantly higher risk is introduced during a rebuild than RAID 1+0 as all the data from all the drives in the remaining stripe has to be read rather than just from one drive, increasing the chance of an unrecoverable read error (URE) and significantly extending the rebuild window.[28][29][30]
- RAID 1+0: (see: RAID 10) creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses so long as no mirror loses all its drives.[31]
- JBOD RAID N+N: With JBOD (just a bunch of disks), it is possible to concatenate disks, but also volumes such as RAID sets. With larger drive capacities, write delay and rebuilding time increase dramatically (especially, as described above, with RAID 5 and RAID 6). By splitting a larger RAID N set into smaller subsets and concatenating them with linear JBOD,[clarification needed] write and rebuilding time will be reduced. If a hardware RAID controller is not capable of nesting linear JBOD with RAID N, then linear JBOD can be achieved with OS-level software RAID in combination with separate RAID N subset volumes created within one, or more, hardware RAID controller(s). Besides a drastic speed increase, this also provides a substantial advantage: the possibility to start a linear JBOD with a small set of disks and to be able to expand the total set with disks of different size, later on (in time, disks of bigger size become available on the market). There is another advantage in the form of disaster recovery (if a RAID N subset happens to fail, then the data on the other RAID N subsets is not lost, reducing restore time).[citation needed]
Non-standard levels
Many configurations other than the basic numbered RAID levels are possible, and many companies, organizations, and groups have created their own non-standard configurations, in many cases designed to meet the specialized needs of a small niche group. Such configurations include the following:
- Linux MD RAID 10 provides a general RAID driver that in its 'near' layout defaults to a standard RAID 1 with two drives, and a standard RAID 1+0 with four drives; however, it can include any number of drives, including odd numbers. With its 'far' layout, MD RAID 10 can run both striped and mirrored, even with only two drives in f2 layout; this runs mirroring with striped reads, giving the read performance of RAID 0. Regular RAID 1, as provided by Linux software RAID, does not stripe reads, but can perform reads in parallel.[31][32][33]
- Hadoop has a RAID system that generates a parity file by xor-ing a stripe of blocks in a single HDFS file.[34]
- BeeGFS, the parallel file system, has internal striping (comparable to file-based RAID0) and replication (comparable to file-based RAID10) options to aggregate throughput and capacity of multiple servers and is typically based on top of an underlying RAID to make disk failures transparent.
Implementations
The distribution of data across multiple drives can be managed either by dedicated computer hardware or by software. A software solution may be part of the operating system, part of the firmware and drivers supplied with a standard drive controller (so-called 'hardware-assisted software RAID'), or it may reside entirely within the hardware RAID controller.
Hardware-based[edit]
Configuration of hardware RAID[edit]
Hardware RAID controllers can be configured through card BIOS before an operating system is booted, and after the operating system is booted, proprietary configuration utilities are available from the manufacturer of each controller. Unlike the network interface controllers for Ethernet, which can usually be configured and serviced entirely through the common operating system paradigms like ifconfig in Unix, without a need for any third-party tools, each manufacturer of each RAID controller usually provides their own proprietary software tooling for each operating system that they deem to support, ensuring vendor lock-in and contributing to reliability issues.[35]
For example, in FreeBSD, in order to access the configuration of Adaptec RAID controllers, users are required to enable Linux compatibility layer, and use the Linux tooling from Adaptec,[36] potentially compromising the stability, reliability and security of their setup, especially when taking the long term view.[35]
Some other operating systems have implemented their own generic frameworks for interfacing with any RAID controller, and provide tools for monitoring RAID volume status, as well as facilitation of drive identification through LED blinking, alarm management and hot spare disk designations from within the operating system without having to reboot into card BIOS. For example, this was the approach taken by OpenBSD in 2005 with its bio(4) pseudo-device and the bioctl utility, which provide volume status, and allow LED/alarm/hotspare control, as well as the sensors (including the drive sensor) for health monitoring;[37] this approach has subsequently been adopted and extended by NetBSD in 2007 as well.[38]
Software-based[edit]
Software RAID implementations are provided by many modern operating systems. Software RAID can be implemented as:
- A layer that abstracts multiple devices, thereby providing a single virtual device (e.g. Linux kernel's md and OpenBSD's softraid)
- A more generic logical volume manager (provided with most server-class operating systems, e.g. Veritas or LVM)
- A component of the file system (e.g. ZFS, Spectrum Scale or Btrfs)
- A layer that sits above any file system and provides parity protection to user data (e.g. RAID-F)[39]
Some advanced file systems are designed to organize data across multiple storage devices directly, without needing the help of a third-party logical volume manager:
- ZFS supports the equivalents of RAID 0, RAID 1, RAID 5 (RAID-Z1) single-parity, RAID 6 (RAID-Z2) double-parity, and a triple-parity version (RAID-Z3), also referred to as RAID 7.[40] As it always stripes over top-level vdevs, it supports equivalents of the 1+0, 5+0, and 6+0 nested RAID levels (as well as striped triple-parity sets) but not other nested combinations. ZFS is the native file system on Solaris and illumos, and is also available on FreeBSD and Linux. Open-source ZFS implementations are actively developed under the OpenZFS umbrella project.[41][42][43][44][45]
- Spectrum Scale, initially developed by IBM for media streaming and scalable analytics, supports declustered RAID protection schemes up to n+3. A particularity is the dynamic rebuilding priority which runs with low impact in the background until a data chunk hits n+0 redundancy, in which case this chunk is quickly rebuilt to at least n+1. On top, Spectrum Scale supports metro-distance RAID 1.[46]
- Btrfs supports RAID 0, RAID 1 and RAID 10 (RAID 5 and 6 are under development).[47][48]
- XFS was originally designed to provide an integrated volume manager that supports concatenating, mirroring and striping of multiple physical storage devices.[49] However, the implementation of XFS in Linux kernel lacks the integrated volume manager.[50]
Many operating systems provide RAID implementations, including the following:
- Hewlett-Packard's OpenVMS operating system supports RAID 1. The mirrored disks, called a 'shadow set', can be in different locations to assist in disaster recovery.[51]
- Apple's macOS and macOS Server support RAID 0, RAID 1, and RAID 1+0.[52][53]
- FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all nestings via GEOM modules and ccd.[54][55][56]
- Linux's md supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and all nestings.[57] Certain reshaping/resizing/expanding operations are also supported.[58]
- Microsoft Windows supports RAID 0, RAID 1, and RAID 5 using various software implementations. Logical Disk Manager, introduced with Windows 2000, allows for the creation of RAID 0, RAID 1, and RAID 5 volumes by using dynamic disks, but this was limited only to professional and server editions of Windows until the release of Windows 8.[59][60] Windows XP can be modified to unlock support for RAID 0, 1, and 5.[61] Windows 8 and Windows Server 2012 introduced a RAID-like feature known as Storage Spaces, which also allows users to specify mirroring, parity, or no redundancy on a folder-by-folder basis. These options are similar to RAID 1 and RAID 5, but are implemented at a higher abstraction level.[62]
- NetBSD supports RAID 0, 1, 4, and 5 via its software implementation, named RAIDframe.[63]
- OpenBSD supports RAID 0, 1 and 5 via its software implementation, named softraid.[64]
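For Linux's md, arrays are administered with the mdadm utility. The sketch below only assembles an example invocation as an argument vector; the device names (/dev/md0, /dev/sd[bcd]) are hypothetical, and actually running the command requires root and destroys data on the member disks:

```python
# Sketch: build (but do not execute) the mdadm command that would create
# a RAID 5 array from three member disks. Device names are hypothetical
# examples chosen for illustration.

def mdadm_create_cmd(md_dev, level, members):
    """Return the argv for `mdadm --create` on the given member devices."""
    return ["mdadm", "--create", md_dev,
            "--level", str(level),
            "--raid-devices", str(len(members))] + members

cmd = mdadm_create_cmd("/dev/md0", 5, ["/dev/sdb", "/dev/sdc", "/dev/sdd"])
print(" ".join(cmd))
# -> mdadm --create /dev/md0 --level 5 --raid-devices 3 /dev/sdb /dev/sdc /dev/sdd
# One would then execute it with subprocess.run(cmd, check=True).
```

Building the command as a list rather than a shell string avoids quoting pitfalls when device names are substituted programmatically.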
If a boot drive fails, the system has to be sophisticated enough to be able to boot from the remaining drive or drives. For instance, consider a computer whose disk is configured as RAID 1 (mirrored drives); if the first drive in the array fails, then a first-stage boot loader might not be sophisticated enough to attempt loading the second-stage boot loader from the second drive as a fallback. The second-stage boot loader for FreeBSD is capable of loading a kernel from such an array.[65]
Firmware- and driver-based[edit]
A SATA 3.0 controller that provides RAID functionality through proprietary firmware and drivers
Software-implemented RAID is not always compatible with the system's boot process, and it is generally impractical for desktop versions of Windows. However, hardware RAID controllers are expensive and proprietary. To fill this gap, inexpensive 'RAID controllers' were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip with proprietary firmware and drivers. During early bootup, the RAID is implemented by the firmware and, once the operating system has been more completely loaded, the drivers take over control. Consequently, such controllers may not work when driver support is not available for the host operating system.[66] An example is Intel Matrix RAID, implemented on many consumer-level motherboards.[67][68]
Because some minimal hardware support is involved, this implementation is also called 'hardware-assisted software RAID',[69][70][71] 'hybrid model' RAID,[71] or even 'fake RAID'.[72] If RAID 5 is supported, the hardware may provide a hardware XOR accelerator. An advantage of this model over pure software RAID is that—if using a redundancy mode—the boot drive is protected from failure (due to the firmware) during the boot process even before the operating system's drivers take over.[71]
Integrity[edit]
Data scrubbing (referred to in some environments as patrol read) involves periodic reading and checking by the RAID controller of all the blocks in an array, including those not otherwise accessed. This detects bad blocks before use.[73] Data scrubbing checks for bad blocks on each storage device in an array, but also uses the redundancy of the array to recover bad blocks on a single drive and to reassign the recovered data to spare blocks elsewhere on the drive.[74]
Frequently, a RAID controller is configured to 'drop' a component drive (that is, to assume a component drive has failed) if the drive has been unresponsive for eight seconds or so; this might cause the array controller to drop a good drive because that drive has not been given enough time to complete its internal error recovery procedure. Consequently, using consumer-marketed drives with RAID can be risky, and so-called 'enterprise class' drives limit this error recovery time to reduce risk.[citation needed] Western Digital's desktop drives used to have a specific fix: a utility called WDTLER.exe enabled TLER (time limited error recovery), which limits the error recovery time to seven seconds. Around September 2009, Western Digital disabled this feature in their desktop drives (e.g. the Caviar Black line), making such drives unsuitable for use in RAID configurations.[75] However, Western Digital enterprise class drives are shipped from the factory with TLER enabled. Similar technologies are used by Seagate, Samsung, and Hitachi. For non-RAID usage, an enterprise class drive with a short error recovery timeout that cannot be changed is therefore less suitable than a desktop drive.[75] In late 2010, the Smartmontools program began supporting the configuration of ATA Error Recovery Control, allowing the tool to configure many desktop-class hard drives for use in RAID setups.[75]
While RAID may protect against physical drive failure, the data is still exposed to operator, software, hardware, and virus destruction. Many studies cite operator fault as the most common source of malfunction,[76] such as a server operator replacing the incorrect drive in a faulty RAID, and disabling the system (even temporarily) in the process.[77]
An array can be overwhelmed by a catastrophic failure that exceeds its recovery capacity, and the entire array is at risk of physical damage by fire, natural disaster, and human forces; however, backups can be stored off site. An array is also vulnerable to controller failure, because it is not always possible to migrate it to a new, different controller without data loss.[78]
Weaknesses[edit]
Correlated failures[edit]
In practice, the drives are often the same age (with similar wear) and subject to the same environment. Since many drive failures are due to mechanical issues (which are more likely on older drives), this violates the assumptions of independent, identical rate of failure amongst drives; failures are in fact statistically correlated.[12] In practice, the chances for a second failure before the first has been recovered (causing data loss) are higher than the chances for random failures. In a study of about 100,000 drives, the probability of two drives in the same cluster failing within one hour was four times larger than predicted by the exponential statistical distribution—which characterizes processes in which events occur continuously and independently at a constant average rate. The probability of two failures in the same 10-hour period was twice as large as predicted by an exponential distribution.[79]
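The 'predicted by the exponential distribution' baseline can be computed directly. The fleet size and MTBF below are illustrative assumptions for the sketch, not figures taken from the cited study:

```python
# Under the independence assumption, the time to the next failure among
# n_remaining drives, each with failure rate lam, is exponential with
# rate n_remaining * lam.
import math

def p_second_failure_within(hours, n_remaining, mtbf_hours):
    lam = 1.0 / mtbf_hours                       # per-drive failure rate
    return -math.expm1(-n_remaining * lam * hours)

# Example: 100-drive cluster (99 survivors), 1,000,000-hour MTBF per drive.
p = p_second_failure_within(1, 99, 1_000_000)
print(f"{p:.6f}")   # about 1e-4 within one hour under independence
```

The study's observation that the real-world rate was roughly four times this prediction is precisely the correlation effect: shared age, batch, and environment make the independence assumption optimistic.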
Unrecoverable read errors during rebuild[edit]
Unrecoverable read errors (URE) present as sector read failures, also known as latent sector errors (LSE). The associated media assessment measure, unrecoverable bit error (UBE) rate, is typically guaranteed to be less than one bit in 10^15 for enterprise-class drives (SCSI, FC, SAS or SATA), and less than one bit in 10^14 for desktop-class drives (IDE/ATA/PATA or SATA). Increasing drive capacities and large RAID 5 instances have led to the maximum error rates being insufficient to guarantee a successful recovery, due to the high likelihood of such an error occurring on one or more remaining drives during a RAID set rebuild.[12][80] When rebuilding, parity-based schemes such as RAID 5 are particularly prone to the effects of UREs as they affect not only the sector where they occur, but also reconstructed blocks using that sector for parity computation.[81]
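The scale of the problem can be checked with a back-of-the-envelope calculation. The drive size and URE rate below are illustrative assumptions (a 12 TB desktop-class drive at the commonly quoted one-in-10^14-bits rate):

```python
# Probability of hitting at least one URE while reading an entire drive
# during a rebuild, assuming independent bit errors at a fixed rate.
import math

def p_ure_during_full_read(capacity_bytes, ure_rate_per_bit):
    bits = capacity_bytes * 8
    # 1 - (1 - p)^n, computed via log1p/expm1 for numerical stability
    return -math.expm1(bits * math.log1p(-ure_rate_per_bit))

p = p_ure_during_full_read(12 * 10**12, 1e-14)  # 12 TB drive, 1-in-10^14 URE
print(f"{p:.1%}")  # roughly a 62% chance of at least one URE per full read
```

Under these assumptions a single-parity rebuild, which must read every surviving drive in full, is more likely than not to encounter at least one URE, which is the arithmetic behind the 'RAID 5 stops working' argument cited above.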
Double-protection parity-based schemes, such as RAID 6, attempt to address this issue by providing redundancy that allows double-drive failures; as a downside, such schemes suffer from elevated write penalty—the number of times the storage medium must be accessed during a single write operation.[82] Schemes that duplicate (mirror) data in a drive-to-drive manner, such as RAID 1 and RAID 10, have a lower risk from UREs than those using parity computation or mirroring between striped sets.[25][83] Data scrubbing, as a background process, can be used to detect and recover from UREs, effectively reducing the risk of them happening during RAID rebuilds and causing double-drive failures. The recovery of UREs involves remapping of affected underlying disk sectors, utilizing the drive's sector remapping pool; in case of UREs detected during background scrubbing, data redundancy provided by a fully operational RAID set allows the missing data to be reconstructed and rewritten to a remapped sector.[84][85]
Increasing rebuild time and failure probability[edit]
Drive capacity has grown at a much faster rate than transfer speed, and error rates have only fallen a little in comparison. Therefore, larger-capacity drives may take hours if not days to rebuild, during which time other drives may fail or yet-undetected read errors may surface. Rebuilding is also slowed if the entire array is still in operation at reduced capacity.[86] Given an array with only one redundant drive (which applies to RAID levels 3, 4 and 5, and to 'classic' two-drive RAID 1), a second drive failure would cause complete failure of the array. Even though individual drives' mean time between failures (MTBF) has increased over time, this increase has not kept pace with the increased storage capacity of the drives. The time to rebuild the array after a single drive failure, as well as the chance of a second failure during a rebuild, have increased over time.[23]
Some commentators have declared that RAID 6 is only a 'band aid' in this respect, because it only kicks the problem a little further down the road.[23] However, according to the 2006 NetApp study of Berriman et al., the chance of failure decreases by a factor of about 3,800 (relative to RAID 5) for a proper implementation of RAID 6, even when using commodity drives.[87] Nevertheless, if the currently observed technology trends remain unchanged, in 2019 a RAID 6 array will have the same chance of failure as its RAID 5 counterpart had in 2010.[80][87]
Mirroring schemes such as RAID 10 have a bounded recovery time as they require the copy of a single failed drive, compared with parity schemes such as RAID 6, which require the copy of all blocks of the drives in an array set. Triple parity schemes, or triple mirroring, have been suggested as one approach to improve resilience to an additional drive failure during this large rebuild time.[87]
Atomicity: including parity inconsistency due to system crashes[edit]
A system crash or other interruption of a write operation can result in states where the parity is inconsistent with the data due to non-atomicity of the write process, such that the parity cannot be used for recovery in the case of a disk failure (the so-called RAID 5 write hole).[12] The RAID write hole is a known data corruption issue in older and low-end RAIDs, caused by interrupted destaging of writes to disk.[88] The write hole can be addressed with write-ahead logging; mdadm recently addressed it by introducing a dedicated journaling device for that purpose (to avoid the performance penalty, SSDs or other fast non-volatile devices are typically preferred).[89][90]
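The write hole can be illustrated with a toy model: a stripe update writes the new data block, 'crashes' before updating parity, and the stale parity then reconstructs the wrong data. This is a deliberately simplified sketch, not the behaviour of any particular implementation:

```python
# Toy model of the RAID 5 write hole: a crash between the data write and
# the parity write leaves parity inconsistent with the stripe.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Two data blocks and their parity.
d0, d1 = b"\x10", b"\x20"
parity = xor(d0, d1)

# Interrupted update: d0 is rewritten, but the crash happens before the
# matching parity update is destaged.
d0 = b"\x55"
# parity = xor(d0, d1)   <- never happens

# Later, d1's drive fails; reconstruction from d0 and the stale parity
# yields the wrong contents, silently.
reconstructed_d1 = xor(d0, parity)
print(reconstructed_d1 == b"\x20")  # -> False: data corrupted, no error raised
```

A write-ahead journal closes the hole by persisting the intent before either write, so an interrupted stripe update can be replayed or discarded as a unit on recovery.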
This is a little understood and rarely mentioned failure mode for redundant storage systems that do not utilize transactional features. Database researcher Jim Gray wrote 'Update in Place is a Poison Apple' during the early days of relational database commercialization.[91]
Write-cache reliability[edit]
There are concerns about write-cache reliability, specifically regarding devices equipped with a write-back cache, which is a caching system that reports the data as written as soon as it is written to cache, as opposed to when it is written to the non-volatile medium. If the system experiences a power loss or other major failure, the data may be irrevocably lost from the cache before reaching the non-volatile storage. For this reason good write-back cache implementations include mechanisms, such as redundant battery power, to preserve cache contents across system failures (including power failures) and to flush the cache at system restart time.[92]
See also[edit]
- Network-attached storage (NAS)
References[edit]
- ^'RAID'. Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005.(Subscription or UK public library membership required.)
- ^ abRandy H. Katz (October 2010). 'RAID: A Personal Recollection of How Storage Became a System'(PDF). eecs.umich.edu. IEEE Computer Society. Retrieved 2015-01-18.
We were not the first to think of the idea of replacing what Patterson described as a slow large expensive disk (SLED) with an array of inexpensive disks. For example, the concept of disk mirroring, pioneered by Tandem, was well known, and some storage products had already been constructed around arrays of small disks.
- ^ abPatterson, David; Gibson, Garth A.; Katz, Randy (1988). A Case for Redundant Arrays of Inexpensive Disks (RAID)(PDF). SIGMOD Conferences. Retrieved 2006-12-31.
- ^Frank Hayes (November 17, 2003). 'The Story So Far'. Computerworld. Retrieved November 18, 2016.
Patterson recalled the beginnings of his RAID project in 1987. […] 1988: David A. Patterson leads a team that defines RAID standards for improved performance, reliability and scalability.
- ^US patent 4092732, Norman Ken Ouchi, 'System for Recovering Data Stored in Failed Memory Unit', issued 1978-05-30
- ^'HSC50/70 Hardware Technical Manual'(PDF). DEC. July 1986. pp. 29, 32. Retrieved 2014-01-03.
- ^US patent 4761785, Brian E. Clark, et al., 'Parity Spreading to Enhance Storage Access', issued 1988-08-02
- ^US patent 4899342, David Potter et. al., 'Method and Apparatus for Operating Multi-Unit Array of Memories', issued 1990-02-06 See also The Connection Machine (1988)
- ^'IBM 7030 Data Processing System: Reference Manual'(PDF). bitsavers.trailing-edge.com. IBM. 1960. p. 157. Retrieved 2015-01-17.
Since a large number of bits are handled in parallel, it is practical to use error checking and correction (ECC) bits, and each 39 bit byte is composed of 32 data bits and seven ECC bits. The ECC bits accompany all data transferred to or from the high-speed disks, and, on reading, are used to correct a single bit error in a byte and detect double and most multiple errors in a byte.
- ^'IBM Stretch (aka IBM 7030 Data Processing System)'. brouhaha.com. 2009-06-18. Retrieved 2015-01-17.
A typical IBM 7030 Data Processing System might have been comprised of the following units: [.] IBM 353 Disk Storage Unit – similar to IBM 1301 Disk File, but much faster. 2,097,152 (2^21) 72-bit words (64 data bits and 8 ECC bits), 125,000 words per second
- ^'Originally referred to as Redundant Array of Inexpensive Disks, the concept of RAID was first developed in the late 1980s by Patterson, Gibson, and Katz of the University of California at Berkeley. (The RAID Advisory Board has since substituted the term Inexpensive with Independent.)' Storage Area Network Fundamentals; Meeta Gupta; Cisco Press; ISBN978-1-58705-065-7; Appendix A.
- ^ abcdefghijChen, Peter; Lee, Edward; Gibson, Garth; Katz, Randy; Patterson, David (1994). 'RAID: High-Performance, Reliable Secondary Storage'. ACM Computing Surveys. 26 (2): 145–185. CiteSeerX10.1.1.41.3889. doi:10.1145/176979.176981.
- ^Donald, L. (2003). MCSA/MCSE 2006 JumpStart Computer and Network Basics (2nd ed.). Glasgow: SYBEX.
- ^Howe, Denis (ed.). Redundant Arrays of Independent Disks from FOLDOC. Free On-line Dictionary of Computing. Imperial College Department of Computing. Retrieved 2011-11-10.
- ^Dawkins, Bill and Jones, Arnold. 'Common RAID Disk Data Format Specification'Archived 2009-08-24 at the Wayback Machine[Storage Networking Industry Association] Colorado Springs, 28 July 2006. Retrieved on 22 February 2011.
- ^'Adaptec Hybrid RAID Solutions'(PDF). Adaptec.com. Adaptec. 2012. Retrieved 2013-09-07.
- ^'Common RAID Disk Drive Format (DDF) standard'. SNIA.org. SNIA. Retrieved 2012-08-26.
- ^'SNIA Dictionary'. SNIA.org. SNIA. Retrieved 2010-08-24.
- ^Andrew S. Tanenbaum. Structured Computer Organization 6th ed. p. 95.
- ^Hennessy, John; Patterson, David (2006). Computer Architecture: A Quantitative Approach, 4th ed. p. 362. ISBN978-0123704900.
- ^'FreeBSD Handbook, Chapter 20.5 GEOM: Modular Disk Transformation Framework'. Retrieved 2012-12-20.
- ^White, Jay; Lueth, Chris (May 2010). 'RAID-DP:NetApp Implementation of Double Parity RAID for Data Protection. NetApp Technical Report TR-3298'. Retrieved 2013-03-02.
- ^ abcNewman, Henry (2009-09-17). 'RAID's Days May Be Numbered'. EnterpriseStorageForum. Retrieved 2010-09-07.
- ^'Why RAID 6 stops working in 2019'. ZDNet. 22 February 2010.
- ^ abScott Lowe (2009-11-16). 'How to protect yourself from RAID-related Unrecoverable Read Errors (UREs). Techrepublic'. Retrieved 2012-12-01.
- ^Vijayan, S.; Selvamani, S.; Vijayan, S (1995). 'Dual-Crosshatch Disk Array: A Highly Reliable Hybrid-RAID Architecture'. Proceedings of the 1995 International Conference on Parallel Processing: Volume 1. CRC Press. pp. I–146ff. ISBN978-0-8493-2615-8.
- ^'What is RAID (Redundant Array of Inexpensive Disks)?'. StoneFly.com. Retrieved 2014-11-20.
- ^'Why is RAID 1+0 better than RAID 0+1?'. aput.net. Retrieved 2016-05-23.
- ^'RAID 10 Vs RAID 01 (RAID 1+0 Vs RAID 0+1) Explained with Diagram'. www.thegeekstuff.com. Retrieved 2016-05-23.
- ^'Comparing RAID 10 and RAID 01 | SMB IT Journal'. www.smbitjournal.com. Retrieved 2016-05-23.
- ^ abJeffrey B. Layton: 'Intro to Nested-RAID: RAID-01 and RAID-10', Linux Magazine, January 6, 2011
- ^'Performance, Tools & General Bone-Headed Questions'. tldp.org. Retrieved 2013-12-25.
- ^'Main Page – Linux-raid'. osdl.org. 2010-08-20. Archived from the original on 2008-07-05. Retrieved 2010-08-24.
- ^'Hdfs Raid'. Hadoopblog.blogspot.com. 2009-08-28. Retrieved 2010-08-24.
- ^ ab'3.8: 'Hackers of the Lost RAID''. OpenBSD Release Songs. OpenBSD. 2005-11-01. Retrieved 2019-03-23.
- ^Scott Long; Adaptec, Inc (2000). 'aac(4) — Adaptec AdvancedRAID Controller driver'. BSD Cross Reference. FreeBSD. Lay summary.
- ^Theo de Raadt (2005-09-09). 'RAID management support coming in OpenBSD 3.8'. misc@ (Mailing list). OpenBSD.
- ^Constantine A. Murenin (2010-05-21). '1.1. Motivation; 4. Sensor Drivers; 7.1. NetBSD envsys / sysmon'. OpenBSD Hardware Sensors — Environmental Monitoring and Fan Control (MMath thesis). University of Waterloo: UWSpace. hdl:10012/5234. Document ID: ab71498b6b1a60ff817b29d56997a418.
- ^'RAID over File System'. Retrieved 2014-07-22.
- ^'ZFS Raidz Performance, Capacity and Integrity'. calomel.org. Retrieved 26 June 2017.
- ^'ZFS -illumos'. illumos.org. 2014-09-15. Retrieved 2016-05-23.
- ^'Creating and Destroying ZFS Storage Pools – Oracle Solaris ZFS Administration Guide'. Oracle Corporation. 2012-04-01. Retrieved 2014-07-27.
- ^'20.2. The Z File System (ZFS)'. freebsd.org. Archived from the original on 2014-07-03. Retrieved 2014-07-27.
- ^'Double Parity RAID-Z (raidz2) (Solaris ZFS Administration Guide)'. Oracle Corporation. Retrieved 2014-07-27.
- ^'Triple Parity RAIDZ (raidz3) (Solaris ZFS Administration Guide)'. Oracle Corporation. Retrieved 2014-07-27.
- ^Deenadhayalan, Veera (2011). 'General Parallel File System (GPFS) Native RAID'(PDF). UseNix.org. IBM. Retrieved 2014-09-28.
- ^'Btrfs Wiki: Feature List'. 2012-11-07. Retrieved 2012-11-16.
- ^'Btrfs Wiki: Changelog'. 2012-10-01. Retrieved 2012-11-14.
- ^Philip Trautman; Jim Mostek. 'Scalability and Performance in Modern File Systems'. linux-xfs.sgi.com. Retrieved 2015-08-17.
- ^'Linux RAID Setup – XFS'. kernel.org. 2013-10-05. Retrieved 2015-08-17.
- ^Enterprise, Hewlett Packard. 'HPE Support document - HPE Support Center'. support.hpe.com.
- ^'Mac OS X: How to combine RAID sets in Disk Utility'. Retrieved 2010-01-04.
- ^'Apple Mac OS X Server File Systems'. Retrieved 2008-04-23.
- ^'FreeBSD System Manager's Manual page for GEOM(8)'. Retrieved 2009-03-19.
- ^'freebsd-geom mailing list – new class / geom_raid5'. Retrieved 2009-03-19.
- ^'FreeBSD Kernel Interfaces Manual for CCD(4)'. Retrieved 2009-03-19.
- ^'The Software-RAID HowTo'. Retrieved 2008-11-10.
- ^'mdadm(8) – Linux man page'. Linux.Die.net. Retrieved 2014-11-20.
- ^'Windows Vista support for large-sector hard disk drives'. Microsoft. 2007-05-29. Archived from the original on 2007-07-03. Retrieved 2007-10-08.
- ^'You cannot select or format a hard disk partition when you try to install Windows Vista, Windows 7 or Windows Server 2008 R2'. Microsoft. 14 September 2011. Archived from the original on 3 March 2011. Retrieved 17 December 2009.
- ^'Using Windows XP to Make RAID 5 Happen'. Tom's Hardware. Retrieved 24 August 2010.
- ^Sinofsky, Steven. 'Virtualizing storage for scale, resiliency, and efficiency'. Microsoft.
- ^Metzger, Perry (1999-05-12). 'NetBSD 1.4 Release Announcement'. NetBSD.org. The NetBSD Foundation. Retrieved 2013-01-30.
- ^'OpenBSD softraid man page'. OpenBSD.org. Retrieved 2018-02-03.
- ^'FreeBSD Handbook'. Chapter 19 GEOM: Modular Disk Transformation Framework. Retrieved 2009-03-19.
- ^'SATA RAID FAQ'. Ata.wiki.kernel.org. 2011-04-08. Retrieved 2012-08-26.
- ^'Red Hat Enterprise Linux – Storage Administrator Guide – RAID Types'. redhat.com.
- ^Charlie Russel; Sharon Crawford; Andrew Edney (2011). Working with Windows Small Business Server 2011 Essentials. O'Reilly Media, Inc. p. 90. ISBN978-0-7356-5670-3.
- ^Warren Block. '19.5. Software RAID Devices'. freebsd.org. Retrieved 2014-07-27.
- ^Ronald L. Krutz; James Conley (2007). Wiley Pathways Network Security Fundamentals. John Wiley & Sons. p. 422. ISBN978-0-470-10192-6.
- ^ abc'Hardware RAID vs. Software RAID: Which Implementation is Best for my Application? Adaptec Whitepaper'(PDF). adaptec.com.
- ^Gregory Smith (2010). PostgreSQL 9.0: High Performance. Packt Publishing Ltd. p. 31. ISBN978-1-84951-031-8.
- ^Ulf Troppens, Wolfgang Mueller-Friedt, Rainer Erkens, Rainer Wolafka, Nils Haustein. Storage Networks Explained: Basics and Application of Fibre Channel SAN, NAS, ISCSI, InfiniBand and FCoE. John Wiley and Sons, 2009. p.39
- ^Dell Computers, Background Patrol Read for Dell PowerEdge RAID Controllers, By Drew Habas and John Sieber, Reprinted from Dell Power Solutions, February 2006 http://www.dell.com/downloads/global/power/ps1q06-20050212-Habas.pdf
- ^ abc'Error Recovery Control with Smartmontools'. 2009. Archived from the original on September 28, 2011. Retrieved September 29, 2017.
- ^These studies are: Gray, J (1990), Murphy and Gent (1995), Kuhn (1997), and Enriquez P. (2003).
- ^Patterson, D., Hennessy, J. (2009), 574.
- ^'The RAID Migration Adventure'. Retrieved 2010-03-10.
- ^Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You? Bianca Schroeder and Garth A. Gibson
- ^ abHarris, Robin (2010-02-27). 'Does RAID 6 stop working in 2019?'. StorageMojo.com. TechnoQWAN. Retrieved 2013-12-17.
- ^J.L. Hafner, V. Dheenadhayalan, K. Rao, and J.A. Tomlin. 'Matrix methods for lost data reconstruction in erasure codes. USENIX Conference on File and Storage Technologies, Dec. 13–16, 2005.
- ^Miller, Scott Alan (2016-01-05). 'Understanding RAID Performance at Various Levels'. Recovery Zone. StorageCraft. Retrieved 2016-07-22.
- ^Art S. Kagel (March 2, 2011). 'RAID 5 versus RAID 10 (or even RAID 3, or RAID 4)'. miracleas.com. Archived from the original on November 3, 2014. Retrieved October 30, 2014.
- ^M.Baker, M.Shah, D.S.H. Rosenthal, M.Roussopoulos, P.Maniatis, T.Giuli, and P.Bungale. 'A fresh look at the reliability of long-term digital storage.' EuroSys2006, Apr. 2006.
- ^'L.N. Bairavasundaram, GR Goodson, S. Pasupathy, J.Schindler. 'An analysis of latent sector errors in disk drives'. Proceedings of SIGMETRICS'07, June 12–16,2007'(PDF).
- ^Patterson, D., Hennessy, J. (2009). Computer Organization and Design. New York: Morgan Kaufmann Publishers. pp 604–605.
- ^ abcLeventhal, Adam (2009-12-01). 'Triple-Parity RAID and Beyond. ACM Queue, Association of Computing Machinery'. Retrieved 2012-11-30.
- ^''Write hole' in RAID5, RAID6, RAID1, and other arrays'. ZAR team. Retrieved 15 February 2012.
- ^'ANNOUNCE: mdadm 3.4 - A tool for managing md Soft RAID under Linux [LWN.net]'. lwn.net.
- ^'A journal for MD/RAID5 [LWN.net]'. lwn.net.
- ^Jim Gray: The Transaction Concept: Virtues and LimitationsArchived 2008-06-11 at the Wayback Machine (Invited Paper) VLDB 1981: 144–154
- ^'Definition of write-back cache at SNIA dictionary'.
External links[edit]
- 'Empirical Measurements of Disk Failure Rates and Error Rates', by Jim Gray and Catharine van Ingen, December 2005
- The Mathematics of RAID-6, by H. Peter Anvin
- Does Fake RAID Offer Any Advantage Over Software RAID? – Discussion on superuser.com
- Comparing RAID Implementation Methods – Dell.com
- BAARF: Battle Against Any Raid Five (RAID 3, 4 and 5 versus RAID 10)