[root@szm test]# mdadm --help-options
Any parameter that does not start with '-' is treated as a device name
or, for --examine-bitmap, a file name.
The first such name is often the name of an md device. Subsequent
names are often names of component devices.
Some common options are:
--help -h : General help message or, after above option,
mode specific help message
--help-options : This help message
--version -V : Print version information for mdadm
--verbose -v : Be more verbose about what is happening
--quiet -q : Don't print un-necessary messages
--brief -b : Be less verbose, more brief
--export -Y : With --detail, use key=value format for easy
import into environment
--force -f : Override normal checks and be more forceful
--assemble -A : Assemble an array----assemble a pre-existing array
--build -B : Build an array without metadata
--create -C : Create a new array
--detail -D : Display details of an array
--examine -E : Examine superblock on an array component
--examine-bitmap -X: Display the detail of a bitmap file
--monitor -F : monitor (follow) some arrays
--grow -G : resize/reshape an array---adjust the array's size or shape
--incremental -I : add/remove a single device to/from an array as appropriate
--query -Q : Display general information about how a
device relates to the md driver
--auto-detect : Start arrays auto-detected by the kernel
--offroot : Set first character of argv[0] to @ to indicate the
application was launched from initrd/initramfs and
should not be shutdown by systemd as part of the
regular shutdown process.
-f : force an operation
-s : scan for extended information about active arrays
-x : specify the number of hot-spare disks
-l : specify the RAID level of the array
-a : when creating an array, create the device file according to the user's answer (e.g. -a yes)
-y : log all events through the syslog service
Configuration file /etc/mdadm.conf, in brief (see man mdadm):
DEVICE: disks or partitions that may become array members, e.g. /dev/sdb1
ARRAY: identifies an array to be managed and the member disks that belong to it
spare-group: assigns the array's hot spares to a shared spare group
PROGRAM: the program run for each event generated while mdadm --monitor is watching the arrays
CREATE: default values applied when arrays are created, e.g. "-a yes" to auto-create the array device file
HOMEHOST: same function as the --homehost option
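A minimal sketch of how these keywords fit together in one file; the device pattern, the spare-group name, and the handler path are illustrative assumptions, while the UUIDs are the ones created in this session:

```
# /etc/mdadm.conf -- illustrative sketch, not this host's real file
DEVICE /dev/sdb*                        # partitions that may hold array members
ARRAY /dev/md1 UUID=e1678f4c:da866bb9:d31def1c:6d065256 spare-group=grp1
ARRAY /dev/md2 UUID=20d48ed4:df7e95f5:8910c556:b6a1eaa5 spare-group=grp1
PROGRAM /usr/sbin/md-event-handler      # hypothetical handler run by mdadm --monitor
CREATE auto=yes                         # same effect as passing "-a yes" at creation time
MAILADDR root                           # recipient of failure mail
```

Arrays that share a spare-group let mdadm --monitor move a hot spare from one array to another when a member fails.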
[root@szm test]# mdadm -C /dev/md0 -a yes -l0 -n2 /dev/sdb1 /dev/sdb2
----(RAID 0)
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@szm test]# watch -n1 'cat /proc/mdstat'    # refreshes /proc/mdstat every second, showing RAID status in real time so the RAID device's creation can be monitored
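The `[UU]`/`[UU_]` status strings in /proc/mdstat can also be checked non-interactively. A small sketch, using sample text rather than the live file, that flags arrays with a missing member (the awk one-liner is my illustration, not part of mdadm):

```shell
# Sample /proc/mdstat content with one degraded array
mdstat='md2 : active raid5 sdb10[1] sdb9[0]
      31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
md127 : active raid1 sdb7[1] sdb6[0]
      16000 blocks super 1.2 [2/2] [UU]'
# Remember each md device name; an underscore in the trailing [..] field means a member is missing
echo "$mdstat" | awk '/^md/ { name = $1 } /\[[U_]+\]$/ { if ($NF ~ /_/) print name " is degraded" }'
# prints: md2 is degraded
```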
[root@szm pub]# mdadm -D /dev/md0
Creation Time : Mon Mar 11 20:31:05 2013
Array Size : 320512 (313.05 MiB 328.20 MB) Persistence : Superblock is persistent
Update Time : Mon Mar 11 20:31:05 2013
Name : szm:0 (local to host szm)
UUID : 40dbde6c:938f4cce:4bf157ef:2d475d90
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
[root@szm Desktop]# mdadm -C /dev/md1 -a yes -l1 -n2 -x1 /dev/sdb6 /dev/sdb7 /dev/sdb8
----(RAID 1, with /dev/sdb8 added as a hot spare)
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@szm Desktop]# watch -n1 'cat /proc/mdstat'
Every 1.0s: cat /proc/mdstat Mon Mar 11 20:56:11 2013
Personalities : [raid0] [raid1]
md1 : active raid1 sdb8[2](S) sdb7[1] sdb6[0]
      16000 blocks super 1.2 [2/2] [UU]    -------(S) marks sdb8 as the hot spare
md127 : active raid0 sdb2[1] sdb1[0]
320512 blocks super 1.2 512k chunks
[root@szm Desktop]# mdadm -D /dev/md1
Creation Time : Mon Mar 11 20:54:16 2013
Array Size : 16000 (15.63 MiB 16.38 MB) Used Dev Size : 16000 (15.63 MiB 16.38 MB)
Persistence : Superblock is persistent
Update Time : Mon Mar 11 20:54:19 2013
Name : szm:1 (local to host szm)
UUID : e1678f4c:da866bb9:d31def1c:6d065256
Number Major Minor RaidDevice State
0 8 22 0 active sync /dev/sdb6
1 8 23 1 active sync /dev/sdb7
[root@szm Desktop]# mdadm -C /dev/md2 -a yes -l5 -n3 /dev/sdb9 /dev/sdb10 /dev/sdb11
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@szm Desktop]# mdadm -D /dev/md2
Creation Time : Mon Mar 11 21:07:18 2013
Array Size : 31744 (31.01 MiB 32.51 MB) Used Dev Size : 15872 (15.50 MiB 16.25 MB)
Persistence : Superblock is persistent
Update Time : Mon Mar 11 21:07:21 2013
Name : szm:2 (local to host szm)
UUID : 20d48ed4:df7e95f5:8910c556:b6a1eaa5
Number Major Minor RaidDevice State
0 8 25 0 active sync /dev/sdb9
1 8 26 1 active sync /dev/sdb10
3 8 27 2 active sync /dev/sdb11
[root@szm Desktop]# watch -n1 'cat /proc/mdstat'
Every 1.0s: cat /proc/mdstat Mon Mar 11 21:09:03 2013
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdb11[3] sdb10[1] sdb9[0]
31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md126 : active raid0 sdb1[0] sdb2[1]
320512 blocks super 1.2 512k chunks
md127 : active (auto-read-only) raid1 sdb7[1] sdb8[2](S) sdb6[0]
16000 blocks super 1.2 [2/2] [UU]
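The array sizes reported above follow the usual md capacity rules, which can be checked with shell arithmetic (block counts in 1 KiB units, taken from the output in this session):

```shell
member=160256                            # per-member size of the RAID 0 (320512 / 2)
echo "raid0: $(( 2 * member ))"          # n * member size        -> raid0: 320512
echo "raid1: 16000"                      # one member's size; the (S) spare adds no capacity
echo "raid5: $(( (3 - 1) * 15872 ))"     # (n - 1) * member size  -> raid5: 31744
```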
[root@szm mnt]# mkfs.ext3 /dev/md126
[root@szm mnt]# mkfs.ext3 /dev/md127
[root@szm mnt]# mkfs.ext3 /dev/md2
[root@szm mnt]# mount /dev/md126 /mnt/md0/
[root@szm mnt]# mount /dev/md127 /mnt/md1
[root@szm mnt]# mount /dev/md2 /mnt/md2
[root@szm mnt]# mount | grep -i md
/dev/md126 on /mnt/md0 type ext3 (rw)
/dev/md127 on /mnt/md1 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
[root@szm test]# ll RAIDtest -h
-rw-r--r--. 1 root root 10M Mar 11 21:16 RAIDtest
[root@szm test]# cp RAIDtest /mnt/md0
[root@szm test]# cp RAIDtest /mnt/md1
[root@szm test]# cp RAIDtest /mnt/md2
[root@szm test]# df -h | grep md
/dev/md126 304M 21M 268M 7% /mnt/md0
/dev/md127 16M 12M 3.2M 78% /mnt/md1
/dev/md2 31M 12M 18M 41% /mnt/md2
1. Generate the array configuration file; 2. set the array failure-notification mail address to root:
[root@szm test]# mdadm --examine --scan > /etc/mdadm.conf
[root@szm test]# echo "MAILADDR root" >> /etc/mdadm.conf
[root@szm test]# cat /etc/mdadm.conf
ARRAY /dev/md/0 metadata=1.2 UUID=40dbde6c:938f4cce:4bf157ef:2d475d90 name=szm:0
ARRAY /dev/md/1 metadata=1.2 UUID=e1678f4c:da866bb9:d31def1c:6d065256 name=szm:1
ARRAY /dev/md/2 metadata=1.2 UUID=20d48ed4:df7e95f5:8910c556:b6a1eaa5 name=szm:2
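Entries in this file can be queried mechanically, for example to fetch an array's UUID for explicit assembly with `mdadm -A --uuid=...`. A sketch over the lines shown above (the awk command is my illustration):

```shell
conf='ARRAY /dev/md/1 metadata=1.2 UUID=e1678f4c:da866bb9:d31def1c:6d065256 name=szm:1
ARRAY /dev/md/2 metadata=1.2 UUID=20d48ed4:df7e95f5:8910c556:b6a1eaa5 name=szm:2'
# Field 2 is the array device; field 4 is UUID=<value>
uuid=$(echo "$conf" | awk -v dev=/dev/md/1 '$2 == dev { sub(/^UUID=/, "", $4); print $4 }')
echo "$uuid"    # prints: e1678f4c:da866bb9:d31def1c:6d065256
```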
[root@szm test]# mdadm /dev/md127 -f /dev/sdb8 -r /dev/sdb8
mdadm: set /dev/sdb8 faulty in /dev/md127
mdadm: hot removed /dev/sdb8 from /dev/md127
[root@szm test]# watch -n1 'cat /proc/mdstat'
Every 1.0s: cat /proc/mdstat Mon Mar 11 21:27:37 2013
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdb11[3] sdb10[1] sdb9[0]
31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md126 : active raid0 sdb1[0] sdb2[1]
320512 blocks super 1.2 512k chunks
md127 : active raid1 sdb7[1] sdb6[0]
16000 blocks super 1.2 [2/2]
[UU]    -------------the array reports healthy again
[root@szm test]# mdadm -D /dev/md127
Creation Time : Mon Mar 11 20:54:16 2013
Array Size : 16000 (15.63 MiB 16.38 MB)
Used Dev Size : 16000 (15.63 MiB 16.38 MB)
Persistence : Superblock is persistent
Update Time : Mon Mar 11 21:25:32 2013
Name : szm:1 (local to host szm)
UUID : e1678f4c:da866bb9:d31def1c:6d065256
Number Major Minor RaidDevice State
0 8 22 0 active sync /dev/sdb6
1 8 23 1 active sync /dev/sdb7
[root@szm test]# mdadm /dev/md2 -f /dev/sdb11 -r /dev/sdb11
mdadm: set /dev/sdb11 faulty in /dev/md2
mdadm: hot removed /dev/sdb11 from /dev/md2
[root@szm test]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdb10[1] sdb9[0]
31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
md126 : active raid0 sdb1[0] sdb2[1]
320512 blocks super 1.2 512k chunks
md127 : active raid1 sdb7[1] sdb6[0]
16000 blocks super 1.2 [2/2] [UU]
[root@szm test]# mdadm -D /dev/md2
Creation Time : Mon Mar 11 21:07:18 2013
Array Size : 31744 (31.01 MiB 32.51 MB)
Used Dev Size : 15872 (15.50 MiB 16.25 MB)
Persistence : Superblock is persistent
Update Time : Mon Mar 11 21:29:52 2013
Name : szm:2 (local to host szm)
UUID : 20d48ed4:df7e95f5:8910c556:b6a1eaa5
Number Major Minor RaidDevice State
0 8 25 0 active sync /dev/sdb9
1 8 26 1 active sync /dev/sdb10
[root@szm test]# mdadm /dev/md2 -a /dev/sdb11
[root@szm test]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdb11[3] sdb10[1] sdb9[0]
      31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md126 : active raid0 sdb1[0] sdb2[1]
320512 blocks super 1.2 512k chunks
md127 : active raid1 sdb7[1] sdb6[0]
16000 blocks super 1.2 [2/2] [UU]
[root@szm test]# mdadm -D /dev/md2
Creation Time : Mon Mar 11 21:07:18 2013
Array Size : 31744 (31.01 MiB 32.51 MB)
Used Dev Size : 15872 (15.50 MiB 16.25 MB)
Persistence : Superblock is persistent
Update Time : Mon Mar 11 21:32:18 2013
Name : szm:2 (local to host szm)
UUID : 20d48ed4:df7e95f5:8910c556:b6a1eaa5
Number Major Minor RaidDevice State
0 8 25 0 active sync /dev/sdb9
1 8 26 1 active sync /dev/sdb10
3 8 27 2 active sync /dev/sdb11
[root@szm test]# /etc/init.d/mdmonitor restart
Killing mdmonitor: [ OK ]
Starting mdmon: [ OK ]
Starting mdmonitor: [ OK ]
N 29 mdadm monitoring Mon Mar 11 21:40 35/1106 "FailSpare event on /dev/md2:szm"
From root@szm.localdomain Mon Mar 11 21:40:34 2013
Return-Path: <root@szm.localdomain>
Delivered-To: root@szm.localdomain
From: mdadm monitoring <root@szm.localdomain>
Subject: FailSpare event on /dev/md2:szm
Date: Mon, 11 Mar 2013 21:40:34 +0800 (CST)
This is an automatically generated mail message from mdadm
A FailSpare event had been detected on md device /dev/md2.
It could be related to component device /dev/sdb11.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdb10[1] sdb9[0]
31744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
md126 : active raid0 sdb1[0] sdb2[1]
320512 blocks super 1.2 512k chunks
md127 : active raid1 sdb7[1] sdb6[0]
16000 blocks super 1.2 [2/2] [UU]
[root@szm ~]# umount /mnt/md0
[root@szm ~]# mdadm -S /dev/md126
[root@szm ~]# cat /etc/mdadm.conf    ------------then delete the corresponding ARRAY entry from it
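The cleanup of the stale ARRAY line can be scripted instead of edited by hand. A sketch that filters on the stopped array's UUID, working on a temporary copy rather than the live /etc/mdadm.conf:

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
ARRAY /dev/md/0 metadata=1.2 UUID=40dbde6c:938f4cce:4bf157ef:2d475d90 name=szm:0
ARRAY /dev/md/1 metadata=1.2 UUID=e1678f4c:da866bb9:d31def1c:6d065256 name=szm:1
MAILADDR root
EOF
# Keep every line except the one naming the stopped array's UUID
out=$(grep -v 'UUID=40dbde6c' "$conf")
echo "$out"     # the md/0 entry is gone; md/1 and MAILADDR remain
rm -f "$conf"
```

If the freed partitions will be reused, `mdadm --zero-superblock /dev/sdb1 /dev/sdb2` wipes the old metadata so the array is not auto-assembled again.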