Can't rebuild software RAID on FC16

asked 2012-04-12 11:31:19 -0500 by sudoyang, updated 2013-06-04 03:16:00 -0500

I have no idea how the system got into this state, but I cannot rebuild my software RAID 1 array on FC16. Every time I try to add the disk back, mdadm says the device is busy.

# cat /proc/mdstat
Personalities : [raid1] 
md127 : active raid1 sdb1[3]
      1953510841 blocks super 1.2 [2/1] [U_]

unused devices: <none>
# dmsetup ls
vg_fonghome-lv_swap (253, 0)
vg_fonghome-lv_root (253, 1)
35000c50032f567e8p1 (253, 3)
vg_fonghome-lv_home (253, 4)
35000c50032f567e8   (253, 2)
# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Fri May 13 16:16:53 2011
     Raid Level : raid1
     Array Size : 1953510841 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953510841 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Apr 12 09:26:22 2012
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : fong-home-main:0
           UUID : bf787d84:9ef0f184:dd803853:6323dba1
         Events : 309954

    Number   Major   Minor   RaidDevice State
       3       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
# df -h
Filesystem                       Size  Used Avail Use% Mounted on
rootfs                            50G   18G   31G  36% /
devtmpfs                         3.9G  4.0K  3.9G   1% /dev
tmpfs                            3.9G  564K  3.9G   1% /dev/shm
/dev/mapper/vg_fonghome-lv_root   50G   18G   31G  36% /
tmpfs                            3.9G   49M  3.9G   2% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                            3.9G     0  3.9G   0% /media
/dev/md127                       1.9T  727G  1.1T  42% /data2
# mdadm  --manage --add /dev/md127 /dev/sdc1
mdadm: Cannot open /dev/sdc1: Device or resource busy

My research shows that there tends to be a conflict between LVM and software RAID in the init scripts, and the usual advice is to remove software RAID. Obviously, I can't do that because I'm using both.
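
For what it's worth, here is a rough way to check what might actually be claiming /dev/sdc1 (I am only guessing at the device-mapper/multipath angle; the map names below are simply the ones from my dmsetup output above):

# lsblk /dev/sdc
# dmsetup ls --tree

If one of those device-mapper maps turns out to own the whole disk, something along these lines should release it (substituting whatever map names actually show up):

# dmsetup remove 35000c50032f567e8p1
# dmsetup remove 35000c50032f567e8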

I tried taking the 2nd drive (no longer part of the RAID 1 array) to another server and wiping it clean there by filling it with zeros, but I'm still getting the same error.
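
The wipe itself was nothing special, just a plain zero fill with dd, roughly like this (the device name is a placeholder for whatever the disk was called on the other server):

# dd if=/dev/zero of=/dev/sdX bs=1M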

I've gone through many reboots already. It was working fine for the last few months, and I don't know why it's doing this now.

Thanks for any help you can provide.

1 Answer


answered 2012-06-01 14:54:59 -0500 by Nwildner

I would try the following:

1 - Mark the disk as faulty, and remove it

mdadm --manage /dev/md127 --fail /dev/sdc1
mdadm --manage /dev/md127 --remove /dev/sdc1

2 - Zero the problematic disk

mdadm --zero-superblock /dev/sdc1

3 - Try adding the partition back to the RAID 1 array

mdadm -a /dev/md127 /dev/sdc1
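
If the add goes through, you can watch the resync progress with either of these:

watch cat /proc/mdstat
mdadm -D /dev/md127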

4 - If that works, rescan the configuration. Be careful to erase the old ARRAY lines at the end of mdadm.conf before proceeding with this "scan-and-append-config" procedure (note that on Fedora the file is usually /etc/mdadm.conf rather than /etc/mdadm/mdadm.conf):

mdadm --examine --scan >> /etc/mdadm/mdadm.conf
mdadm --auto-detect

This last step will only work if you have set the partition type to "Linux raid autodetect".
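
One way to check that, assuming an MBR-partitioned disk, is to look at the partition type with fdisk (it should be type fd, "Linux raid autodetect"; it can be changed from within fdisk with the t command):

fdisk -l /dev/sdc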

Hope it works :)
