source: postlfs/filesystems/aboutraid.xml@ 45ab6c7

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE sect1 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
  "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
  <!ENTITY % general-entities SYSTEM "../../general.ent">
  %general-entities;
]>

<sect1 id="raid">
  <?dbhtml filename="raid.html"?>

  <sect1info>
    <date>$Date$</date>
  </sect1info>

  <title>About RAID</title>

  <para>
    The storage technology known as RAID (Redundant Array of
    Independent Disks) combines multiple physical disks into a logical
    unit. The drives can be combined to provide data redundancy, to
    extend the size of logical units beyond the capacity of any single
    physical disk, or both. The technology also allows hardware
    maintenance to be performed without powering down the system.
  </para>

  <para>
    The types of RAID organization are described in the <ulink
    url="https://raid.wiki.kernel.org/index.php/Overview#The_RAID_levels">
    RAID Wiki</ulink>.
  </para>

  <para>
    Note that while RAID provides protection against disk failures,
    it is not a substitute for backups. A deleted file is still
    deleted on every disk of a RAID array. Modern backups are
    generally done via <xref linkend='rsync'/>.
  </para>

  <para>
    There are three major types of RAID implementation:
    Hardware RAID, BIOS-based RAID, and Software RAID.
  </para>

  <sect2 id="hwraid">
    <title>Hardware RAID</title>
    <para>
      Hardware-based RAID provides its capability through proprietary
      hardware and data layouts. Control and configuration are generally
      done via firmware in conjunction with executable programs made
      available by the device manufacturer. The capabilities are
      generally supplied via a PCI card, although there are some
      instances of RAID components integrated into the motherboard.
      Hardware RAID may also be available in a stand-alone enclosure.
    </para>

    <para>
      One advantage of hardware-based RAID is that the drives are
      presented to the operating system as a single logical drive and no
      operating system dependent configuration is needed.
    </para>

    <para>
      Disadvantages include difficulties in transferring drives
      from one system to another, updating firmware, or replacing
      failed RAID hardware.
    </para>

  </sect2>

  <sect2 id="biosraid">
    <title>BIOS-based RAID</title>

    <para>
      Some computers offer a hardware-like RAID implementation in the
      system BIOS. Sometimes this is referred to as 'fake' RAID, as the
      capabilities are generally incorporated into firmware without any
      hardware acceleration.
    </para>

    <para>
      The advantages and disadvantages of BIOS-based RAID are generally
      the same as those of hardware RAID, with the additional
      disadvantage that there is no hardware acceleration.
    </para>

    <para>
      In some cases, BIOS-based RAID firmware is enabled by default
      (e.g. some DELL systems). If software RAID is desired, this
      option must be explicitly disabled in the BIOS.
    </para>

  </sect2>

  <sect2 id="swraid">
    <title>Software RAID</title>
    <para>
      Software-based RAID is the most flexible form of RAID. It is easy
      to install and update, and provides full capability on all or part
      of any drives available to the system. In BLFS, the RAID software
      is found in <xref linkend='mdadm'/>.
    </para>

    <para>
      Configuring a RAID device is straightforward using
      <application>mdadm</application>. Generally devices are created in
      the <filename class='directory'>/dev</filename> directory as
      <filename>/dev/mdx</filename>, where <emphasis>x</emphasis> is an
      integer.
    </para>

    <para>
      The first step in creating a RAID array is to use partitioning
      software such as <userinput>fdisk</userinput> or <xref
      linkend='parted'/> to define the partitions needed for the array.
      Usually, there will be one partition on each drive participating
      in the RAID array, but that is not strictly necessary. For this
      example, there will be four disk drives:
      <filename>/dev/sda</filename>,
      <filename>/dev/sdb</filename>,
      <filename>/dev/sdc</filename>, and
      <filename>/dev/sdd</filename>. They will be partitioned as follows:
    </para>

<screen><literal>Partition  Size    Type                Use
sda1:      100 MB  fd Linux raid auto  /boot     (RAID 1) /dev/md0
sda2:       10 GB  fd Linux raid auto  /         (RAID 1) /dev/md1
sda3:        2 GB  83 Linux swap       swap
sda4:      300 GB  fd Linux raid auto  /home     (RAID 5) /dev/md2

sdb1:      100 MB  fd Linux raid auto  /boot     (RAID 1) /dev/md0
sdb2:       10 GB  fd Linux raid auto  /         (RAID 1) /dev/md1
sdb3:        2 GB  83 Linux swap       swap
sdb4:      300 GB  fd Linux raid auto  /home     (RAID 5) /dev/md2

sdc1:       12 GB  fd Linux raid auto  /usr/src  (RAID 0) /dev/md3
sdc2:      300 GB  fd Linux raid auto  /home     (RAID 5) /dev/md2

sdd1:       12 GB  fd Linux raid auto  /usr/src  (RAID 0) /dev/md3
sdd2:      300 GB  fd Linux raid auto  /home     (RAID 5) /dev/md2</literal></screen>

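    <para>
      As an illustration only (this sketch is not part of the procedure
      above; it assumes an msdos disk label and uses the sizes from the
      table, and parted's <option>raid</option> flag is expected to tag
      the partitions as <quote>Linux raid auto</quote>), the first drive
      could be partitioned non-interactively with <xref
      linkend='parted'/> roughly as follows:
    </para>

<screen><userinput>parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary 1MiB 101MiB
parted -s /dev/sda set 1 raid on
parted -s /dev/sda mkpart primary 101MiB 10341MiB
parted -s /dev/sda set 2 raid on
parted -s /dev/sda mkpart primary linux-swap 10341MiB 12389MiB
parted -s /dev/sda mkpart primary 12389MiB 100%
parted -s /dev/sda set 4 raid on</userinput></screen>
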
    <para>
      In this arrangement, a separate boot partition is created as the
      first small RAID array and the root filesystem as the second RAID
      array, both mirrored. The third array, for the <filename
      class='directory'>/home</filename> directory, is a large RAID 5
      array built from four 300 GB partitions (about 900 GB of usable
      space after parity). This provides the ability to stripe data
      across multiple devices, improving speed for both reading and
      writing large files. Finally, a fourth array stripes two
      partitions together into a larger device (RAID 0).
    </para>

    <note>
      <para>
        All <application>mdadm</application> commands must be run
        as the <systemitem class="username">root</systemitem> user.
      </para>
    </note>

    <para>
      To create these RAID arrays, the commands are:
    </para>

<screen><userinput>/sbin/mdadm -Cv /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
/sbin/mdadm -Cv /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
/sbin/mdadm -Cv /dev/md3 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
/sbin/mdadm -Cv /dev/md2 --level=5 --raid-devices=4 \
    /dev/sda4 /dev/sdb4 /dev/sdc2 /dev/sdd2</userinput></screen>

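    <para>
      Optionally (this step is not required by the procedure above, and
      <filename>/etc/mdadm.conf</filename> is only the conventional
      location), the arrays can be recorded in a configuration file so
      that they are assembled under the same names on later boots:
    </para>

<screen><userinput>/sbin/mdadm --detail --scan &gt;&gt; /etc/mdadm.conf</userinput></screen>
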
    <para>
      The newly created devices can be examined individually. For
      example, to see the details of <filename>/dev/md1</filename>, use
      <userinput>/sbin/mdadm --detail /dev/md1</userinput>:
    </para>

<screen><literal>        Version : 1.2
  Creation Time : Tue Feb  7 17:08:45 2012
     Raid Level : raid1
     Array Size : 10484664 (10.00 GiB 10.74 GB)
  Used Dev Size : 10484664 (10.00 GiB 10.74 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Feb  7 23:11:53 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : core2-blfs:0  (local to host core2-blfs)
           UUID : fcb944a4:9054aeb2:d987d8fe:a89121f8
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1</literal></screen>

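    <para>
      A quick summary of all active arrays and their synchronization
      state (this check is an addition to the text above, not a
      required step) is also available from the kernel:
    </para>

<screen><userinput>cat /proc/mdstat</userinput></screen>
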
    <para>
      From this point, the partitions can be formatted with the
      filesystem of choice (ext3, ext4, <xref linkend='xfsprogs'/>,
      <xref linkend='reiserfs'/>, etc.). The formatted partitions can
      then be mounted. The <filename>/etc/fstab</filename> file can use
      the devices created for mounting at boot time, and the kernel
      command line in <filename>/boot/grub/grub.cfg</filename> can
      specify <option>root=/dev/md1</option>.
    </para>

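    <para>
      As a sketch only (the filesystem type, mount options, and fsck
      ordering here are illustrative choices, not prescribed by the
      text above), the two mirrored arrays could be formatted and then
      listed in <filename>/etc/fstab</filename> like this:
    </para>

<screen><userinput>mkfs -v -t ext4 /dev/md0
mkfs -v -t ext4 /dev/md1</userinput></screen>

<screen><literal>/dev/md0  /boot  ext4  defaults  0  2
/dev/md1  /      ext4  defaults  0  1</literal></screen>
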
    <note>
      <para>
        The swap devices should be specified in the
        <filename>/etc/fstab</filename> file as normal. The kernel can
        stripe swap data across multiple swap partitions itself (when
        they are given equal priority), so swap partitions do not need
        to be part of a RAID array.
      </para>
    </note>

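    <para>
      For example (an illustrative sketch using the swap partitions
      from the table above; assigning equal priorities is an assumption
      made here, not a requirement), the two swap partitions could be
      given the same priority in <filename>/etc/fstab</filename> so the
      kernel uses them in parallel:
    </para>

<screen><literal>/dev/sda3  swap  swap  pri=1  0  0
/dev/sdb3  swap  swap  pri=1  0  0</literal></screen>
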
    <para>
      For further options and management details of RAID devices, refer
      to <userinput>man mdadm</userinput>.
    </para>

    <para>
      Additional details for monitoring RAID arrays and dealing with
      problems can be found at the <ulink
      url="https://raid.wiki.kernel.org/index.php/Linux_Raid">Linux RAID
      Wiki</ulink>.
    </para>

  </sect2>

</sect1>