I have a number of Dell 1950 servers updated to the latest firmware, fitted with Caviar Black 64 MB cache hard disks. Each disk should benchmark at about 150 MB/s minimum on its own, but in the Dell server I get a maximum of 128 MB/s from two drives in RAID 0, when the figure should be more like 300 MB/s. Can anyone help? The OS I am using is CentOS 5.5. Are there special drivers I need to load for the RAID cards that would speed this up, or is it just my hardware?

Also, can someone point me to a good benchmarking tool so I can check the drives out from CentOS, rather than booting into Windows 7 to get an accurate benchmark?
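For benchmarking straight from CentOS, the stock tools `hdparm` and `dd` give a quick sequential read/write figure without rebooting into Windows. A minimal sketch, assuming the PERC virtual disk shows up as `/dev/sda` and that `/tmp` lives on the array you want to test; adjust both paths to your setup:

```shell
#!/bin/sh
# Quick sequential-throughput check from the Linux command line.
# DEV and TESTFILE are assumptions; point them at the array under test.
DEV=/dev/sda
TESTFILE=/tmp/ddtest.bin

# Buffered sequential read from the raw device (needs root):
#   hdparm -t "$DEV"

# Sequential write: 256 MB of zeros, with fdatasync so the figure
# includes flushing the controller/drive caches to disk.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1

# Sequential read of the file just written, bypassing the page
# cache so we measure the disk, not RAM:
dd if="$TESTFILE" of=/dev/null bs=1M iflag=direct 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

Both `dd` lines print a MB/s figure on their last status line. If you can install extra packages, `fio` gives much more detailed numbers (queue depths, random vs. sequential), but `dd` is enough to compare against the HD Tune results.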
The Dell PERC controllers may be tuned for Dell firmware on the drives. You may want to try a pair of Dell drives that are specifically qualified for PowerEdge servers (or PowerVault systems).
"2 drives in raid 0 when this figure should be more like 300mb/s"
In reality, combining two drives into one RAID 0 array does not mean you will come anywhere near doubling the throughput. 128 MB/s sounds reasonable, maybe a little low for those drives, and 150 MB/s for a single drive is on the high side. If drives really doubled their throughput in RAID, no one would sell a workstation or desktop without it.
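The point above can be put in numbers with a toy model: striping scales with drive count only up to a controller ceiling, and never at 100% efficiency. The per-drive speed, controller limit and efficiency factor below are illustrative assumptions, not measured values from this thread:

```python
# Illustrative only: a toy model of RAID 0 sequential throughput.
# per_drive_mbs, controller_limit_mbs and efficiency are assumed
# example figures, not measurements.

def raid0_throughput(per_drive_mbs, n_drives, controller_limit_mbs,
                     efficiency=0.85):
    """Best-case striped throughput, capped by the controller."""
    raw = per_drive_mbs * n_drives * efficiency
    return min(raw, controller_limit_mbs)

# Two ~130 MB/s drives behind a controller that tops out around
# 300 MB/s still land well short of a naive 2x figure:
print(raid0_throughput(130, 2, 300))  # 221.0, not 260
```

Even this simple model shows why "two drives = double speed" rarely holds: stripe overhead eats some percentage, and the controller or its bus can cap the total regardless of how many drives hang off it.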
Check out this link of RAID benchmarks; the site is Dutch, but the benchmarks themselves are in English.
Thanks for the replies. I will be doing some testing tonight, but I think I may have found the problem.
I understand that SATA drives use a single-channel link, but the backplane uses dual-channel SAS, and this mismatch causes a slowdown in drive reads and writes. By adding the interposer PN/939 you can effectively make the SAS backplane and card communicate with the SATA drive in dual-channel mode, which in turn should increase performance.
I have been reading many posts and studying this hard, and this seems to be the only factor. Many people do not notice the speed decrease unless the servers are used for virtual machines or similar workloads, where many users hit the disk at once and the poor controller's queue fills up with outstanding requests, slowing it down.
I will post my results here as soon as I have completed my tests. (I really am hoping this is the main cause.)
OK, here are the results for a single drive on the 1950 III server with a PERC 5/i RAID card with 256 MB of cache RAM. Settings: write-back and read-ahead enabled. Benchmarked under Windows 7 with HD Tune.
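For anyone wanting to check the same cache settings from CentOS instead of the BIOS utility: the PERC 5/i is an LSI MegaRAID part, so LSI's MegaCli tool can inspect and change the write-back and read-ahead policy. A sketch, assuming MegaCli is installed in its usual `/opt/MegaRAID` location; adjust the binary name to whatever version you have installed:

```shell
# Show the current cache policy (write-back, read-ahead) for all
# logical drives on all adapters:
/opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -Cache -LAll -aAll

# Enable write-back and adaptive read-ahead to match the settings
# used in the HD Tune runs above:
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WB -LAll -aAll
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp ADRA -LAll -aAll
```

Note that write-back without a battery-backed cache risks data loss on power failure, which is why some controllers silently fall back to write-through when the battery is absent or discharged; that fallback alone can explain a big gap in write benchmarks.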
Caviar Black drive used: 1 TB, 64 MB cache, 7200 rpm, SATA 3.
I then tested a Samsung 1 TB 7200 rpm drive (HD103SJ): 32 MB cache, SATA 2.
As you can see, the SATA 3 drive's performance seems to improve, but the SATA 2 drive is unaffected.
Seagate Cheetah 10K 300 GB SAS drive as a baseline.
Also, looking at these benchmarks, the SAS drive now seems slow in comparison to the SATA 2 and SATA 3 drives.
I will be testing the same setup again in RAID 0. But so far it looks as though the only benefit from the interposer comes with a SATA 3 drive — these are just my tests, though.
OK, here are the RAID 0 results for the same drives as above, with all the same settings.
Raid 0 without interposer
OK, here are the same settings and drives, this time with interposers fitted.
As you can see, performance improves a little with the SATA 2 drives but increases a lot with the SATA 3 drives.
The thing is, these figures are still very low compared with a standard desktop PC with a Q9660 processor, a standard motherboard, 6 GB of RAM and onboard RAID. Here are the results I would expect the server to at least equal, given that it has a hardware RAID card.
RAID 0 using Caviar Black drives under Windows 7, 128 KB stripe.
[HD Tune screenshot: access time (ms)]
As you can see, the desktop PC outperforms the server by nearly a factor of two, and that is using onboard RAID.
I think there is definitely something wrong with the PERC 5/i and SAS 5/i RAID cards; they just cannot perform the way they should. If anyone can help me out, that would be great.