super poor FS performance



capo
29th December 2006, 02:22 AM
OK, so I have a pretty powerful computer:

Core 2 E6400 @ 3.5GHz (under water)
7900GS
Gigabyte DS3
2GB of G.Skill HZ 876MHz 4-4-4-12

and three 160GB Seagate SATA HDs.

So I tried to install FC6, but the partitioner would not work for me (it kept changing partition orders etc.), so I used the recovery mode to manually set up partitions with fdisk. I wanted to use mdadm to build some raids at that point, but the raid data seems to be stored somewhere in the local FS (naturally) and so couldn't be set up 'externally', because the installer would just see the HDs as separate anyway.
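For reference, what I was trying from the recovery shell was roughly this (just a sketch, assuming the third partition on each disk was already created and typed as "Linux raid autodetect" (fd) in fdisk):

# build a 3-disk RAID 5 by hand, 256K chunks
mdadm --create /dev/md2 --level=5 --raid-devices=3 --chunk=256 /dev/hda3 /dev/hdc3 /dev/sda3
# check assembly/build progress
cat /proc/mdstat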

So I decided to just use the installer's GUI partitioner to set up the raid. I assume this is just a front end to mdadm anyway... BIG MISTAKE.
First things first: even though I selected RAID 5 for md2, it ends up as a RAID 0. Here is my mdadm -D:

/dev/md2:
Version : 00.90.03
Creation Time : Wed Dec 27 09:17:46 2006
Raid Level : raid0
Array Size : 466117632 (444.52 GiB 477.30 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Wed Dec 27 09:17:46 2006
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Chunk Size : 256K

UUID : d4c5e94b:97cc994a:ca51965b:13257f9b
Events : 0.1

    Number   Major   Minor   RaidDevice   State
       0       3       3        0         active sync   /dev/hda3
       1      22       3        1         active sync   /dev/hdc3
       2       8       3        2         active sync   /dev/sda3

My other raids are my /boot, which is RAID 1:

/dev/md0:
Version : 00.90.03
Creation Time : Wed Dec 27 09:38:42 2006
Raid Level : raid1
Array Size : 104320 (101.89 MiB 106.82 MB)
Device Size : 104320 (101.89 MiB 106.82 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Thu Dec 28 10:11:53 2006
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

UUID : 97948e23:166345c7:8fb381da:9401cfec
Events : 0.4

    Number   Major   Minor   RaidDevice   State
       0       3       1        0         active sync   /dev/hda1
       1      22       1        1         active sync   /dev/hdc1
       2       8       1        2         active sync   /dev/sda1


and my swap space, which is meant to be RAID 0:

/dev/md1:
Version : 00.90.03
Creation Time : Wed Dec 27 09:17:46 2006
Raid Level : raid0
Array Size : 2433024 (2.32 GiB 2.49 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Wed Dec 27 09:17:46 2006
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Chunk Size : 256K

UUID : 3f687292:445d6a02:26d54114:5f78b63c
Events : 0.1

    Number   Major   Minor   RaidDevice   State
       0       3       2        0         active sync   /dev/hda2
       1      22       2        1         active sync   /dev/hdc2
       2       8       2        2         active sync   /dev/sda2

Now md1 could be working, or not, I don't know, because my swap has yet to actually be used anyway with 2GB of RAM, but I can guarantee that md2 is not functioning properly, simply because FS performance is horrible! My computer is performing so badly that even when typing a forum post the text appears 2 seconds after the keystroke. The problem also appears to be much worse when the HDs are under load, e.g. from a file scan (which is what led me to believe this is an FS problem).
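For what it's worth, a quick way to check both points would be something like:

# is the swap array actually registered as swap?
swapon -s
# are all three arrays assembled and clean?
cat /proc/mdstat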

So my questions are:

A. Why is what should be very fast, a RAID 0, running VERY slowly?
B. Is there any way (using mdadm) to change the raid level to 5? Because this was never meant to be a RAID 0, and having RAID 0 for my data seems very dangerous.
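From what I've read, the mdadm of this era can't reshape a RAID 0 into a RAID 5 in place, so the only route I can see is to back the data up and recreate the array, something like:

# WARNING: this destroys everything on the array
mdadm --stop /dev/md2
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/hda3 /dev/hdc3 /dev/sda3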

Chris.

capo
29th December 2006, 04:03 AM
OK, there is definitely something wrong with my FS:

One app, for example, normally uses about 50% CPU load on one core when executing.
When running it while outputting all syscalls to a log file, my system comes to a complete halt: both CPU cores are loaded 100% and nothing gets done anyway; it's been churning for about 20 minutes. Something must be screwed up in my raid, causing it to perform millions of parity calculations or something?
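For the record, I was dumping the syscalls with strace, roughly like this ("theapp" stands in for the actual binary):

# log every syscall, following child processes, to a file
strace -f -o /tmp/theapp.log ./theapp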

capo
29th December 2006, 08:23 AM
OK, well, I tried reformatting and am currently installing... It has been installing for about 4 hours, when it used to take about 30 minutes, so something is obviously wrong. But when I used Seagate's disk tools they told me that all of my HDs are perfectly fine, so it's not a failed disk.

capo
29th December 2006, 11:09 AM
OK, so after reinstalling, md2 has turned up as a RAID 5 this time, but it came up degraded, with the third disk appearing as a spare rather than a member, and is now rebuilding. While rebuilding, the computer is running really REALLY slowly. Is rebuilding normally bad enough to make the system virtually unusable? Also, how long should this take for three 160GB disks? It's been going for hours now and is only 8% done...
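In case anyone asks, I'm watching the rebuild like this, and from what I've read the kernel throttles resync speed through these sysctls (the value below is just a sketch):

# show rebuild progress and the estimated finish time
watch cat /proc/mdstat
# per-device resync floor/ceiling in KB/s; raising the floor can speed up a rebuild
cat /proc/sys/dev/raid/speed_limit_min
echo 10000 > /proc/sys/dev/raid/speed_limit_min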

leigh123linux
29th December 2006, 12:05 PM
If you are running overclocked, try locking the PCI bus speed and any other bus speed you can find.
And those memory settings look high; perhaps running memtest overnight might be a good idea.

capo
29th December 2006, 02:26 PM
Buses are locked, and I have already memtested and Prime95'd overnight... and I am back at stock now, because I already thought it could be some bus issue.

capo
31st December 2006, 06:31 AM
OK, so I finished the rebuild and it took about 20-something hours... It should not take that long. In any case, my mdadm -D /dev/md2 looks like this now:

[root@localhost ~]# mdadm -D /dev/md2
/dev/md2:
Version : 00.90.03
Creation Time : Thu Dec 28 14:38:20 2006
Raid Level : raid5
Array Size : 310745088 (296.35 GiB 318.20 GB)
Device Size : 155372544 (148.17 GiB 159.10 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Sat Dec 30 04:15:13 2006
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 256K

UUID : 3bf27faa:f704f06e:7d10bc69:4bd48df6
Events : 0.15952

    Number   Major   Minor   RaidDevice   State
       0       3       3        0         active sync   /dev/hda3
       1      22       3        1         active sync   /dev/hdc3
       2       8       3        2         active sync   /dev/sda3


My performance is worse than ever! This seems to be running slower than Red Hat 5 did on my P1 200MHz!

Can anyone suggest a hard disk benchmark tool, and post their own results as a comparison? Or does anyone have any idea why my FS would be so damn slow when the raid appears to be functioning fine?

capo
1st January 2007, 11:51 AM
This issue is making my computer totally unusable, and I cannot see any reason for such poor performance. Is anyone else using Fedora Core on a raid?

Enigma 2100
2nd January 2007, 12:58 AM
To start, you should install onto a single drive first, and not overclocked in any way, until you get it running right. Once you get it running right you can create a RAID 0 or 5 or whatever and copy the data across. I have heard there are issues with either Linux in general or Fedora installing to RAID.

I have Fedora 6 installed to a SCSI U160. I have a RAID 0 with XP Pro on it, split into 3 partitions. It was a job getting access to one of the NTFS partitions after I had FC6 running. I can access the partitions on my RAID now and performance is fine, but I only have it set to read-only due to the NTFS issues.

capo
2nd January 2007, 03:54 AM
But how can I set up a raid after installing on one of the disks? I can use the other two in a raid, but how do I add in the third disk, which I currently have mounted?
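The only approach I can think of is building the new array degraded, with a deliberately missing member, copying everything across, and then handing the old disk over afterwards, roughly:

# create a 3-slot RAID 5 with one slot left empty ("missing")
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/hda3 /dev/hdc3 missing
# ...copy the data off the old disk onto the array...
# then add the freed-up partition and let the array rebuild onto it
mdadm --add /dev/md2 /dev/sda3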

capo
2nd January 2007, 10:11 AM
OK, so I found an FS benchmarking tool: tiotest

It can be downloaded here:
http://sourceforge.net/project/showfiles.php?group_id=3857
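The numbers below came from a run with 4 threads and the default file size (40 MB written in total), i.e.:

tiotest -t 4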

Can someone else, with or without a raid, PLEASE download it and post their results. Mine are:

Tiotest results for 4 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write 40 MBs          | 12.2 s   | 3.272 MB/s   | 0.3 %    | 4.5 %   |
| Random Write 16 MBs   | 9.3 s    | 1.675 MB/s   | 0.0 %    | 4.9 %   |
| Read 40 MBs           | 0.0 s    | 2397.507 MB/s| 65.9 %   | 191.8 % |
| Random Read 16 MBs    | 0.0 s    | 1822.583 MB/s| 0.0 %    | 151.6 % |
`----------------------------------------------------------------------'
Tiotest latency results:
,-------------------------------------------------------------------------.
| Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
+--------------+-----------------+-----------------+----------+-----------+
| Write        | 0.142 ms        | 1256.631 ms     | 0.00000  | 0.00000   |
| Random Write | 0.012 ms        | 4.924 ms        | 0.00000  | 0.00000   |
| Read         | 0.002 ms        | 0.569 ms        | 0.00000  | 0.00000   |
| Random Read  | 0.002 ms        | 0.013 ms        | 0.00000  | 0.00000   |
|--------------+-----------------+-----------------+----------+-----------|
| Total        | 0.054 ms        | 1256.631 ms     | 0.00000  | 0.00000   |
`--------------+-----------------+-----------------+----------+-----------'


These seem odd: the read speeds are way too fast, the write speeds way too slow, and the CPU usage while reading is HUGE.
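Presumably the reads are being served from the page cache, since the whole 40 MB test file set fits easily in 2GB of RAM. Rerunning with more data than RAM should show the real disk read speed; a sketch, assuming tiotest's -f option (file size per thread, in MB):

# 4 threads x 1024 MB = 4GB of test data, too big for a 2GB page cache
tiotest -t 4 -f 1024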

Enigma 2100
3rd January 2007, 02:20 AM
I can only test my SCSI U160, as my RAID is NTFS and read-only. I have a standard IDE drive, but it is also NTFS read-only.
[enigma@localhost tiobench-0.3.3]$ tiotest -t 4
Tiotest results for 4 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write 40 MBs          | 3.6 s    | 11.072 MB/s  | 0.4 %    | 13.4 %  |
| Random Write 16 MBs   | 3.9 s    | 3.957 MB/s   | 0.1 %    | 2.6 %   |
| Read 40 MBs           | 0.0 s    | 1069.490 MB/s| 32.1 %   | 101.6 % |
| Random Read 16 MBs    | 0.0 s    | 954.607 MB/s | 12.2 %   | 110.0 % |
`----------------------------------------------------------------------'
Tiotest latency results:
,-------------------------------------------------------------------------.
| Item         | Average latency | Maximum latency | % >2 sec | % >10 sec |
+--------------+-----------------+-----------------+----------+-----------+
| Write        | 0.044 ms        | 11.659 ms       | 0.00000  | 0.00000   |
| Random Write | 0.014 ms        | 4.860 ms        | 0.00000  | 0.00000   |
| Read         | 0.006 ms        | 0.040 ms        | 0.00000  | 0.00000   |
| Random Read  | 0.006 ms        | 0.073 ms        | 0.00000  | 0.00000   |
|--------------+-----------------+-----------------+----------+-----------|
| Total        | 0.021 ms        | 11.659 ms       | 0.00000  | 0.00000   |
`--------------+-----------------+-----------------+----------+-----------'