Timestamp:
03/25/2020 09:46:27 PM (4 years ago)
Author:
Pierre Labastie <pieere@…>
Branches:
10.0, 10.1, 11.0, 11.1, 11.2, 11.3, 12.0, 12.1, kea, ken/TL2024, ken/inkscape-core-mods, ken/tuningfonts, lazarus, lxqt, plabs/newcss, plabs/python-mods, python3.11, qt5new, rahul/power-profiles-daemon, renodr/vulkan-addition, trunk, upgradedb, xry111/intltool, xry111/llvm18, xry111/soup3, xry111/test-20220226, xry111/xf86-video-removal
Children:
f716ef4
Parents:
9bd10279
Message:

format filesystems chapter

git-svn-id: svn://svn.linuxfromscratch.org/BLFS/trunk/BOOK@22895 af4574ff-66df-0310-9fd7-8a98e5e911e0

File:
1 edited

  • postlfs/filesystems/aboutraid.xml

    r9bd10279 r29244b7  
    1616  <title>About RAID</title>
    1717
    18   <para>The storage technology known as RAID (Redundant Array of
    19   Independent Disks) combines multiple physical disks into a logical
    20   unit.  The drives can generally be combined to provide data
    21   redundancy or to extend the size of logical units beyond the
    22   capability of the physical disks or both.  The technology
    23   also allows for providing hardware maintenance without powering
    24   down the system.</para>
    25 
    26   <para>The types of RAID organization are described in the <ulink
    27   url="https://raid.wiki.kernel.org/index.php/Overview#The_RAID_levels">
    28   RAID Wiki</ulink>.</para>
    29 
    30   <para>Note that while RAID provides protection against disk
    31   failures, it is not a substitute for backups.  A file deleted
    32   is still deleted on all the disks of a RAID array.  Modern backups
    33   are generally done via <xref linkend='rsync'/>.</para>
    34 
    35   <para>There are three major types of RAID implementation:
    36   Hardware RAID, BIOS-based RAID, and Software RAID.</para>
     18  <para>
     19    The storage technology known as RAID (Redundant Array of
     20    Independent Disks) combines multiple physical disks into a logical
     21    unit.  The drives can generally be combined to provide data
     22    redundancy or to extend the size of logical units beyond the
     23    capability of the physical disks or both.  The technology
     24    also allows for providing hardware maintenance without powering
     25    down the system.
     26  </para>
     27
     28  <para>
     29    The types of RAID organization are described in the <ulink
     30    url="https://raid.wiki.kernel.org/index.php/Overview#The_RAID_levels">
     31    RAID Wiki</ulink>.
     32  </para>
     33
     34  <para>
     35    Note that while RAID provides protection against disk
     36    failures, it is not a substitute for backups.  A file deleted
     37    is still deleted on all the disks of a RAID array.  Modern backups
     38    are generally done via <xref linkend='rsync'/>.
     39  </para>
     40
     41  <para>
     42    There are three major types of RAID implementation:
     43    Hardware RAID, BIOS-based RAID, and Software RAID.
     44  </para>
    3745
    3846  <sect2 id="hwraid">
    3947    <title>Hardware RAID</title>
    40     <para>Hardware-based RAID provides capability through proprietary
    41     hardware and data layouts.  The control and configuration are generally
    42     done via firmware in conjunction with executable programs made
    43     available by the device manufacturer.  The capabilities are
    44     generally supplied via a PCI card, although there are some
    45     instances of RAID components integrated into the motherboard.
    46     Hardware RAID may also be available in a stand-alone enclosure.</para>
    47 
    48     <para>One advantage of hardware-based RAID is that the drives
    49     are offered to the operating system as a logical drive and no
    50     operating system dependent configuration is needed.</para>
    51 
    52     <para>Disadvantages include difficulties in transferring drives
    53     from one system to another, updating firmware, or replacing
    54     failed RAID hardware.</para>
     48    <para>
      49      Hardware-based RAID provides capability through proprietary
      50      hardware and data layouts.  The control and configuration are generally
     51      done via firmware in conjunction with executable programs made
     52      available by the device manufacturer.  The capabilities are
     53      generally supplied via a PCI card, although there are some
      54      instances of RAID components integrated into the motherboard.
     55      Hardware RAID may also be available in a stand-alone enclosure.
     56    </para>
     57
     58    <para>
     59      One advantage of hardware-based RAID is that the drives
     60      are offered to the operating system as a logical drive and no
     61      operating system dependent configuration is needed.
     62    </para>
     63
     64    <para>
     65      Disadvantages include difficulties in transferring drives
     66      from one system to another, updating firmware, or replacing
     67      failed RAID hardware.
     68    </para>
    5569
    5670  </sect2>
     
    5973    <title>BIOS-based RAID</title>
    6074
    61     <para>Some computers offer a hardware-like RAID implementation in the
    62     system BIOS.  Sometimes this is referred to as 'fake' RAID as the
    63     capabilities are generally incorporated into firmware without any hardware
    64     acceleration.</para>
    65 
    66     <para>The advantages and disadvantages of BIOS-based RAID are generally
    67     the same as hardware RAID with the additional disadvantage that there
    68     is no hardware acceleration.</para>
    69 
    70     <para>In some cases, BIOS-based RAID firmware is enabled by default (e.g.
    71     some Dell systems).  If software RAID is desired, this option must be
    72     explicitly disabled in the BIOS.</para>
     75    <para>
      76      Some computers offer a hardware-like RAID implementation in the
      77      system BIOS.  Sometimes this is referred to as 'fake' RAID as the
      78      capabilities are generally incorporated into firmware without any hardware
     79      acceleration.
     80    </para>
     81
     82    <para>
     83      The advantages and disadvantages of BIOS-based RAID are generally
     84      the same as hardware RAID with the additional disadvantage that there
     85      is no hardware acceleration.
     86    </para>
     87
     88    <para>
     89      In some cases, BIOS-based RAID firmware is enabled by default (e.g.
     90      some DELL systems).  If software RAID is desired, this option must be
     91      explicitly disabled in the BIOS.
     92    </para>
    7393
    7494  </sect2>
     
    7696  <sect2 id="swraid">
    7797  <title>Software RAID</title>
    78     <para>Software based RAID is the most flexible form of RAID.  It is
    79     easy to install and update and provides full capability on all or
    80     part of any drives available to the system.  In BLFS, the RAID software
    81     is found in <xref linkend='mdadm'/>.</para>
    82 
    83     <para>Configuring a RAID device is straightforward using
    84     <application>mdadm</application>.  Generally devices are created in the
    85     <filename class='directory'>/dev</filename> directory as
    86     <filename>/dev/mdx</filename> where <emphasis>x</emphasis> is an integer.
    87     </para>
    88 
    89     <para>The first step in creating a RAID array is to use partitioning software
    90     such as <userinput>fdisk</userinput> or <xref linkend='parted'/> to define the
    91     partitions needed for the array.  Usually, there will be one partition on
    92     each drive participating in the RAID array, but that is not strictly necessary.
    93     For this example, there will be four disk drives:
    94     <filename>/dev/sda</filename>,
    95     <filename>/dev/sdb</filename>,
    96     <filename>/dev/sdc</filename>, and
    97     <filename>/dev/sdd</filename>.  They will be partitioned as follows:</para>
     98    <para>
     99      Software based RAID is the most flexible form of RAID.  It is
     100      easy to install and update and provides full capability on all or
     101      part of any drives available to the system.  In BLFS, the RAID software
     102      is found in <xref linkend='mdadm'/>.
     103    </para>
     104
     105    <para>
      106      Configuring a RAID device is straightforward using
     107      <application>mdadm</application>.  Generally devices are created in the
     108      <filename class='directory'>/dev</filename> directory as
     109      <filename>/dev/mdx</filename> where <emphasis>x</emphasis> is an integer.
     110    </para>
     111
     112    <para>
     113      The first step in creating a RAID array is to use partitioning software
     114      such as <userinput>fdisk</userinput> or <xref linkend='parted'/> to
     115      define the partitions needed for the array.  Usually, there will be
     116      one partition on each drive participating in the RAID array, but that
     117      is not strictly necessary.  For this example, there will be four disk
     118      drives:
     119      <filename>/dev/sda</filename>,
     120      <filename>/dev/sdb</filename>,
     121      <filename>/dev/sdc</filename>, and
     122      <filename>/dev/sdd</filename>.  They will be partitioned as follows:
     123    </para>
    98124
    99125<screen><literal>Partition Size     Type                Use
     
    114140sdd2:     300 GB   fd Linux raid auto  /home    (RAID 5) /dev/md2 </literal></screen>
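    <para>
      For illustration only, one way to create a partition of the
      'Linux raid auto' type on a single drive is with
      <application>parted</application>; the start and end offsets below are
      placeholders and should be adjusted to match the layout above:
    </para>

<screen><userinput>parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary 1MiB 100MiB
parted /dev/sdb set 1 raid on</userinput></screen>

    <para>
      With <userinput>fdisk</userinput> the same result is obtained by
      creating the partition and then setting its type to
      <literal>fd</literal> (Linux raid autodetect).
    </para>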
    115141
    116     <para>In this arrangement, a separate boot partition is created as the
    117     first small RAID array and a root filesystem as the second RAID array,
    118     both mirrored.  The third partition is a large (about 1TB) array for the
    119     <filename class='directory'>/home</filename> directory.  This provides
    120     an ability to stripe data across multiple devices, improving speed for
    121     both reading and writing large files.  Finally, a fourth array is created
    122     that concatenates two partitions into a larger device.</para>
    123 
    124     <note><para>All <application>mdadm</application> commands must be run
    125     as the <systemitem class="username">root</systemitem> user.</para></note>
    126 
    127     <para>To create these RAID arrays the commands are:</para>
     142    <para>
     143      In this arrangement, a separate boot partition is created as the
      144      first small RAID array and a root filesystem as the second RAID array,
     145      both mirrored.  The third partition is a large (about 1TB) array for the
     146      <filename class='directory'>/home</filename> directory.  This provides
     147      an ability to stripe data across multiple devices, improving speed for
     148      both reading and writing large files.  Finally, a fourth array is created
     149      that concatenates two partitions into a larger device.
     150    </para>
     151
     152    <note>
     153      <para>
     154        All <application>mdadm</application> commands must be run
     155        as the <systemitem class="username">root</systemitem> user.
     156      </para>
     157    </note>
     158
     159    <para>
     160      To create these RAID arrays the commands are:
     161    </para>
    128162
    129163<screen><userinput>/sbin/mdadm -Cv /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
     
    133167        /dev/sda4 /dev/sdb4 /dev/sdc2 /dev/sdd2 </userinput></screen>
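    <para>
      While the arrays are being built, the synchronization progress can be
      checked at any time with:
    </para>

<screen><userinput>cat /proc/mdstat</userinput></screen>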
    134168
    135     <para>Each device created can be examined individually.  For example,
    136     to see the details of <filename>/dev/md1</filename>, use
    137     <userinput>/sbin/mdadm --detail /dev/md1</userinput>:  </para>
     169    <para>
      170      Each device created can be examined individually.  For example,
     171      to see the details of <filename>/dev/md1</filename>, use
     172      <userinput>/sbin/mdadm --detail /dev/md1</userinput>:
     173    </para>
    138174
    139175<screen><literal>        Version : 1.2
     
    161197       1       8       17        1      active sync   /dev/sdb1</literal></screen>
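    <para>
      Optionally, the arrays can also be recorded in a configuration file so
      that they can be assembled by name later.  The location of that file is
      a local choice; <filename>/etc/mdadm.conf</filename> is common, but some
      systems use <filename>/etc/mdadm/mdadm.conf</filename> instead:
    </para>

<screen><userinput>/sbin/mdadm --detail --scan >> /etc/mdadm.conf</userinput></screen>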
    162198
    163    <para>From this point, the partitions can be formatted with the filesystem of
    164    choice (e.g. ext3, ext4, <xref linkend='xfsprogs'/>, <xref linkend='reiserfs'/>,
    165    etc).  The formatted partitions can then be mounted.  The
    166    <filename>/etc/fstab</filename> file can use the devices created for mounting at
    167    boot time and the linux command line in
    168    <filename>/boot/grub/grub.cfg</filename> can specify
    169    <option>root=/dev/md1</option>.</para>
    170 
    171    <note><para>The swap devices should be specified in the <filename>/etc/fstab</filename>
    172    file as normal.  The kernel normally stripes swap data across multiple swap
    173    devices, so swap should not be made part of a RAID array.</para></note>
    174 
    175    <para>For further options and management details of RAID devices, refer to
    176    <userinput>man mdadm</userinput>.</para>
    177 
    178    <para>Additional details for monitoring RAID arrays and dealing with
    179    problems can be found at the <ulink
    180    url="https://raid.wiki.kernel.org/index.php/Linux_Raid">Linux RAID
    181    Wiki</ulink>.</para>
     199    <para>
      200      From this point, the partitions can be formatted with the filesystem of
     201      choice (e.g. ext3, ext4, <xref linkend='xfsprogs'/>, <xref
     202      linkend='reiserfs'/>, etc).  The formatted partitions can then be
     203      mounted.  The <filename>/etc/fstab</filename> file can use the devices
     204      created for mounting at boot time and the linux command line in
     205      <filename>/boot/grub/grub.cfg</filename> can specify
     206      <option>root=/dev/md1</option>.
     207    </para>
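    <para>
      As an example of that step, assuming ext4 is chosen for all three
      arrays and the mount points match the layout above, the formatting
      commands and <filename>/etc/fstab</filename> entries could look like
      the following (adjust the devices and mount points to the actual
      configuration):
    </para>

<screen><userinput>mkfs -v -t ext4 /dev/md0
mkfs -v -t ext4 /dev/md1
mkfs -v -t ext4 /dev/md2</userinput></screen>

<screen><literal>/dev/md0  /boot  ext4  defaults  0 2
/dev/md1  /      ext4  defaults  0 1
/dev/md2  /home  ext4  defaults  0 2</literal></screen>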
     208
     209    <note>
     210      <para>
     211        The swap devices should be specified in the
     212        <filename>/etc/fstab</filename> file as normal.  The kernel normally
      213        stripes swap data across multiple swap devices, so swap should not be
      214        made part of a RAID array.
     215      </para>
     216    </note>
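    <para>
      For example, two hypothetical swap partitions given the same priority
      in <filename>/etc/fstab</filename> will have their data striped across
      both by the kernel, with no RAID involvement:
    </para>

<screen><literal>/dev/sda3  swap  swap  pri=1  0 0
/dev/sdb3  swap  swap  pri=1  0 0</literal></screen>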
     217
     218    <para>
     219      For further options and management details of RAID devices, refer to
     220      <userinput>man mdadm</userinput>.
     221    </para>
     222
     223    <para>
     224      Additional details for monitoring RAID arrays and dealing with
     225      problems can be found at the <ulink
     226      url="https://raid.wiki.kernel.org/index.php/Linux_Raid">Linux RAID
     227      Wiki</ulink>.
     228    </para>
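    <para>
      As one example of ongoing monitoring, <application>mdadm</application>
      can itself watch every array and report problems by mail; the address
      below is only a placeholder:
    </para>

<screen><userinput>/sbin/mdadm --monitor --scan --daemonise --mail=root</userinput></screen>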
    182229
    183230  </sect2>