<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE sect1 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
  "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
  <!ENTITY % general-entities SYSTEM "../../general.ent">
  %general-entities;
]>

<sect1 id="raid">
  <?dbhtml filename="raid.html"?>


  <title>About RAID</title>

  <para>
    The storage technology known as RAID (Redundant Array of
    Independent Disks) combines multiple physical disks into one logical
    unit. The drives can be combined to provide data redundancy, to
    extend the size of logical units beyond the capacity of a single
    physical disk, or both. The technology also allows hardware
    maintenance to be performed without powering down the system.
  </para>

  <para>
    The types of RAID organization are described in the <ulink
    url="https://raid.wiki.kernel.org/index.php/Overview#The_RAID_levels">
    RAID Wiki</ulink>.
  </para>

  <para>
    Note that while RAID provides protection against disk
    failures, it is not a substitute for backups. A deleted file
    is deleted on all the disks of a RAID array. Modern backups are
    generally made via <xref linkend='rsync'/>.
  </para>

  <para>
    There are three major types of RAID implementation:
    Hardware RAID, BIOS-based RAID, and Software RAID.
  </para>

  <sect2 id="hwraid">
    <title>Hardware RAID</title>
    <para>
      Hardware-based RAID provides its capability through proprietary
      hardware and data layouts. Control and configuration are generally
      done via firmware in conjunction with executable programs made
      available by the device manufacturer. The capability is
      generally supplied via a PCI card, although there are some
      instances of RAID components integrated into the motherboard.
      Hardware RAID may also be available in a stand-alone enclosure.
    </para>

    <para>
      One advantage of hardware-based RAID is that the drives
      are presented to the operating system as a single logical drive,
      so no operating system dependent configuration is needed.
    </para>

    <para>
      Disadvantages include difficulties in transferring drives
      from one system to another, updating firmware, and replacing
      failed RAID hardware.
    </para>

  </sect2>

  <sect2 id="biosraid">
    <title>BIOS-based RAID</title>

    <para>
      Some computers offer a hardware-like RAID implementation in the
      system BIOS. Sometimes this is referred to as 'fake' RAID, as the
      capability is generally incorporated into firmware without any
      hardware acceleration.
    </para>

    <para>
      The advantages and disadvantages of BIOS-based RAID are generally
      the same as those of hardware RAID, with the additional disadvantage
      that there is no hardware acceleration.
    </para>

    <para>
      In some cases, BIOS-based RAID firmware is enabled by default (e.g.
      some Dell systems). If software RAID is desired, this option must be
      explicitly disabled in the BIOS.
    </para>

  </sect2>

  <sect2 id="swraid">
    <title>Software RAID</title>
    <para>
      Software-based RAID is the most flexible form of RAID. It is
      easy to install and update, and it provides full capability on all
      or part of any drives available to the system. In BLFS, the RAID
      software is found in <xref linkend='mdadm'/>.
    </para>

    <para>
      Configuring a RAID device is straightforward using
      <application>mdadm</application>. Generally devices are created in the
      <filename class='directory'>/dev</filename> directory as
      <filename>/dev/mdx</filename> where <emphasis>x</emphasis> is an
      integer.
    </para>

    <para>
      The first step in creating a RAID array is to use partitioning
      software such as <userinput>fdisk</userinput> or
      <xref linkend='parted'/> to define the partitions needed for the
      array. Usually, there will be one partition on each drive
      participating in the RAID array, but that is not strictly necessary.
      For this example, there will be four disk drives:
      <filename>/dev/sda</filename>,
      <filename>/dev/sdb</filename>,
      <filename>/dev/sdc</filename>, and
      <filename>/dev/sdd</filename>. They will be partitioned as follows:
    </para>

<screen><literal>Partition Size   Type                Use
sda1:     100 MB fd Linux raid auto  /boot    (RAID 1) /dev/md0
sda2:      10 GB fd Linux raid auto  /        (RAID 1) /dev/md1
sda3:       2 GB 82 Linux swap       swap
sda4:     300 GB fd Linux raid auto  /home    (RAID 5) /dev/md2

sdb1:     100 MB fd Linux raid auto  /boot    (RAID 1) /dev/md0
sdb2:      10 GB fd Linux raid auto  /        (RAID 1) /dev/md1
sdb3:       2 GB 82 Linux swap       swap
sdb4:     300 GB fd Linux raid auto  /home    (RAID 5) /dev/md2

sdc1:      12 GB fd Linux raid auto  /usr/src (RAID 0) /dev/md3
sdc2:     300 GB fd Linux raid auto  /home    (RAID 5) /dev/md2

sdd1:      12 GB fd Linux raid auto  /usr/src (RAID 0) /dev/md3
sdd2:     300 GB fd Linux raid auto  /home    (RAID 5) /dev/md2</literal></screen>

    <para>
      In this arrangement, a separate boot partition is created as the
      first small RAID array and a root filesystem as the second RAID
      array, both mirrored (RAID 1). The third array is a large (about
      900 GB usable) RAID 5 array for the
      <filename class='directory'>/home</filename> directory. This provides
      the ability to stripe data across multiple devices, improving speed
      for both reading and writing large files. Finally, a fourth array is
      created that stripes two partitions (RAID 0) into a larger, faster
      device for <filename class='directory'>/usr/src</filename>.
    </para>

    <note>
      <para>
        All <application>mdadm</application> commands must be run
        as the <systemitem class="username">root</systemitem> user.
      </para>
    </note>

    <para>
      To create these RAID arrays the commands are:
    </para>

<screen><userinput>/sbin/mdadm -Cv /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
/sbin/mdadm -Cv /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
/sbin/mdadm -Cv /dev/md3 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
/sbin/mdadm -Cv /dev/md2 --level=5 --raid-devices=4 \
    /dev/sda4 /dev/sdb4 /dev/sdc2 /dev/sdd2</userinput></screen>

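    <para>
      Creating an array, particularly a RAID 5 array, takes some time
      while the initial synchronization runs (the array is usable during
      this period). Progress can be monitored by reading
      <filename>/proc/mdstat</filename>. The output below is only an
      illustration; the device names, sizes, and speeds on a real system
      will differ:
    </para>

<screen><literal>Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdd2[4] sdc2[2] sdb4[1] sda4[0]
      879442944 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [==>..................]  recovery = 12.6% (37043392/293147648) finish=42.3min speed=100864K/sec</literal></screen>
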
    <para>
      The devices created can be examined individually. For example,
      to see the details of <filename>/dev/md1</filename>, use
      <userinput>/sbin/mdadm --detail /dev/md1</userinput>:
    </para>

<screen><literal>        Version : 1.2
  Creation Time : Tue Feb  7 17:08:45 2012
     Raid Level : raid1
     Array Size : 10484664 (10.00 GiB 10.74 GB)
  Used Dev Size : 10484664 (10.00 GiB 10.74 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Feb  7 23:11:53 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : core2-blfs:1  (local to host core2-blfs)
           UUID : fcb944a4:9054aeb2:d987d8fe:a89121f8
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2</literal></screen>

    <para>
      From this point, the partitions can be formatted with the filesystem
      of choice (e.g. ext3, ext4, <xref linkend='xfsprogs'/>, etc.) and
      then mounted. The <filename>/etc/fstab</filename> file can use the
      devices created for mounting at boot time, and the
      <literal>linux</literal> command line in
      <filename>/boot/grub/grub.cfg</filename> can specify
      <option>root=/dev/md1</option>.
    </para>

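    <para>
      As an illustration only (assuming ext4 is chosen for every array and
      the example layout above is used), the arrays might be formatted
      with commands such as:
    </para>

<screen><userinput>mkfs -v -t ext4 /dev/md0
mkfs -v -t ext4 /dev/md1
mkfs -v -t ext4 /dev/md2
mkfs -v -t ext4 /dev/md3</userinput></screen>

    <para>
      and the corresponding <filename>/etc/fstab</filename> entries would
      look something like:
    </para>

<screen><literal>/dev/md1   /          ext4   defaults   1   1
/dev/md0   /boot      ext4   defaults   0   2
/dev/md2   /home      ext4   defaults   0   2
/dev/md3   /usr/src   ext4   defaults   0   2</literal></screen>
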
    <note>
      <para>
        The swap devices should be specified in the
        <filename>/etc/fstab</filename> file as normal. They should not
        be made part of a RAID array, because the kernel can itself
        stripe swap data across multiple swap devices of equal priority.
      </para>
    </note>

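    <para>
      For the example layout above, the two swap partitions could be
      listed with equal priority (the <option>pri</option> mount option)
      so that the kernel interleaves swap activity between them. This
      fragment is illustrative, not a required configuration:
    </para>

<screen><literal>/dev/sda3   swap   swap   pri=1   0   0
/dev/sdb3   swap   swap   pri=1   0   0</literal></screen>
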
    <para>
      For further options and management details of RAID devices, refer to
      <userinput>man mdadm</userinput>.
    </para>

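    <para>
      One common management step is to record the assembled arrays in
      <filename>/etc/mdadm.conf</filename> so that
      <application>mdadm</application> can reassemble them consistently
      at boot. A typical way to generate the entries (the file location
      may vary between systems) is:
    </para>

<screen><userinput>/sbin/mdadm --detail --scan &gt;&gt; /etc/mdadm.conf</userinput></screen>
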
    <para>
      Additional details for monitoring RAID arrays and dealing with
      problems can be found at the <ulink
      url="https://raid.wiki.kernel.org/index.php/Linux_Raid">Linux RAID
      Wiki</ulink>.
    </para>

  </sect2>

</sect1>