I'm running into problems setting up a software RAID 5 array with the mdadm tool and 3 SCSI drives. Two of the drives are marked as failed almost immediately after the array is created (mdadm --create etc.). The array is then unusable (since RAID 5 requires a minimum of 3 devices); I can't even format it with mke2fs fast enough before this happens.
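For reference, the creation was along these lines (the device names here are placeholders, not necessarily my exact ones):

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    cat /proc/mdstat    # within a minute or two, two of the three members show up as failed (F)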
I *think* I've traced the problem down to bad sectors on those drives. I used dd to try to zero out the drives and got I/O errors after a couple of minutes on both of the drives that kept getting dropped from the array. I got no I/O errors on the drive that was NOT kicked out - until the device filled up a couple of hours later.
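The zeroing was roughly this (again, placeholder device names):

    dd if=/dev/zero of=/dev/sdb bs=1M    # dies with an I/O error after a couple of minutes (one of the problem drives)
    dd if=/dev/zero of=/dev/sdc bs=1M    # same story on the other problem drive
    dd if=/dev/zero of=/dev/sda bs=1M    # no errors; just runs until the device is full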
So, I've been reading about the "badblocks" utility - but this tool seems relevant only for ext2-formatted partitions and individual drives. It wouldn't make sense to run it on a raid device...
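(For what it's worth, on a plain partition I gather the invocation would be something like

    badblocks -sv /dev/sdb1

for a read-only scan - but I don't know whether that tells me anything useful about a partition that's destined to be a RAID member.)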
Is there a utility to check for bad sectors/blocks on a "Linux raid autodetect" partition, which the RAID member partitions have to be?
And what can I do if I am able to check those devices and actually find bad sectors? If they were standalone ext2-formatted drives, I know mke2fs can use badblocks to "mask" them out. But these need to be in a RAID...
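On a standalone drive I would just do something like

    mke2fs -c /dev/sdb1

where -c runs a badblocks scan and adds any bad blocks to the filesystem's bad-block list - but I don't see how to do the equivalent on the member partitions before mdadm assembles them into the array.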
Thanks for any help,