PowerVault MD1000 SATA RAID-5 Read Performance

  • Hi

    I have recently purchased an MD1000 loaded with 15x 500GB SATA-II disks with NCQ. It's connected to a PERC5/E in a PE2950 (2x Xeon 5160; 16GB RAM).

    Currently it is configured as one RAID-5 array (with global hot spare).

    The application is a popular HTTP file server, and it is almost exclusively reads. We're maxing out at around 20-24MB/s read performance from the RAID-5. The files we're serving range from 1KB text files up to large 500-600MB ISO images. The small files are served quickly while the large files start out ok but slow down, eventually to a crawl.

    I'm wondering what I can do to try and boost this performance.

    So far, I've used the default stripe size (64kB?).

    The filesystem on top is XFS (blocksize is 4kB) and it is mounted -o noatime,nodiratime. This filesystem is on top of an LVM logical volume, from which I understand the performance hit should be negligible.

    Here's what I have thought of trying so far:

    - Increasing the XFS block size to 64kB to match the stripe size (see the sketch after this list)
    - Increasing the XFS block size and stripe size to 128kB
    - Not using LVM
    - Trying different cache policies
    - Looking to see if there are any settings relating to SATA-II NCQ that can be tweaked
    - Looking into direct i/o under Linux[1]
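
    To make the stripe/block-size items above concrete, here is the kind of thing I have in mind; a rough sketch only, with made-up device and mount names, and sw assuming this 14-disk RAID-5 has 13 data disks at the default 64kB stripe element:

    # declare the RAID geometry to XFS: su = stripe element, sw = number of data disks
    mkfs.xfs -d su=64k,sw=13 /dev/vg_md1000/httpfiles
    mount -o noatime,nodiratime /dev/vg_md1000/httpfiles /srv/www
    # raise block-device readahead for the large sequential reads (value is in 512-byte sectors)
    blockdev --setra 8192 /dev/sdb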

    If anyone more experienced with storage can offer some insight it would be greatly appreciated.

    Cheers

    [1] I'm not sure this is even appropriate here, but I read an article (http://developers.sun.com/solaris/articles/tuning_for_streaming.html) on Solaris UFS where Sun recommended mounting with the 'directio' option when dealing with large files.
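
    If I do go down the direct I/O route, the simplest sanity check I can think of is comparing a buffered and a direct read of one of the big ISOs with dd (paths are hypothetical, and iflag=direct needs a reasonably recent coreutils):

    dd if=/srv/www/pub/big.iso of=/dev/null bs=1M                # through the page cache
    dd if=/srv/www/pub/big.iso of=/dev/null bs=1M iflag=direct   # bypasses the page cache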
  • Hi
     
    Did you manage to resolve this performance issue? We are looking at buying an MD1000 in a very similar configuration.
     
    Has anyone else measured the performance of the MD1000 with a large SATA array?
     
    Thanks
     
  • The biggest thing you can do to improve the speed is to either use GPT disks (2003 SP1 or later) or use diskpart to align your partition. We found a huge increase in throughput when the partition was aligned properly on Basic MBR disks. For 64k and 128k IOs, there was a doubling of IO/s for the same RAID array just by aligning.
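
    For anyone who hasn't done the alignment before, from a 2003 SP1 command prompt it is roughly the following (a sketch; the disk number and drive letter are examples, and align=64 matches a 64k stripe):

    diskpart
    DISKPART> select disk 2
    DISKPART> create partition primary align=64
    DISKPART> assign letter=E
    DISKPART> exit

    You can then confirm the starting offset with "wmic partition get Name, StartingOffset".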
     
    We ran some SQLIO tests against the MD1000 in single (unified) mode with two 7-drive RAID 5 arrays. I've heard comments before about larger RAID 5 sets taking too long to rebuild if a drive fails, so we went with two arrays rather than a single 14-disk array.
     
    Here are the details. Interestingly, the RAID 5 was faster than the RAID 10 for writes even though there were half as many spindles! After seeing that, we've decided to test all of the combinations, since our assumption that RAID 10 would be faster was wrong.
     
    Sorry about the formatting.
     
    RAID10 was a 14 drive RAID 10 array
    RAID5 was a 7 drive RAID5 array.
    Both were aligned with diskpart

                        RAID10   RAID5   Ratio
    64K  Write IO/s       3726    4460    1.20
    64K  Write MB/s        232     278    1.20
    128K Write IO/s       1852    2163    1.17
    128K Write MB/s        231     270    1.17
    256K Write IO/s        926    1029    1.11
    256K Write MB/s        231     257    1.11
    64K  Read  IO/s       6483    5250    0.81
    64K  Read  MB/s        405     328    0.81
    128K Read  IO/s       3256    2624    0.81
    128K Read  MB/s        407     328    0.81
    256K Read  IO/s       1916    1317    0.69
    256K Read  MB/s        479     329    0.69
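
    If it helps anyone reproduce these numbers, the SQLIO invocations were along these lines (a sketch from memory; the test file, duration and thread count here are examples rather than our exact parameters):

    sqlio -kR -s60 -fsequential -t4 -o8 -b64 -LS E:\testfile.dat
    sqlio -kW -s60 -fsequential -t4 -o8 -b64 -LS E:\testfile.dat

    -b sets the I/O size in KB (64/128/256 for the rows above), -o the outstanding I/Os per thread, and -LS captures latency statistics.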
  • kuypah,
     
    what size/type of disks did you carry out these tests with? 250/500GB SATA?
     
    Thanks
     
    500GB SATA. MD1000 was in Unified mode. Disks were GPT, formatted NTFS with the default allocation size. We used GPT to get above the 2TB level. RAID stripe was the default size (64k I think). No read-ahead, write-back cache.
     
     
    I've got a very similar config that I'm troubleshooting now. I'm only getting 30MB/sec writes and 20MB/sec reads from the array and I can't seem to figure out why.
     
    500GB SATAs on a unified MD1000 to PERC5/E in a PE2950 w/ 2 x Dual Core Processors and 4GB RAM running Windows 2003 SP1
    1 GPT Partition formatted NTFS 64K w/ 128KB Stripe Size, Adaptive Read Ahead, Write Back
     
    I've got 3 identical configurations and all three have the same issue. I've used several tools to test and all have given me the same poor performance. I've got a case open w/ Dell tech support now. The only thing I can come up with is that I didn't configure the controller or the volume correctly. I've tried several different configurations, stripe sizes, read-ahead off, etc., and still no dice. I've got to be doing something wrong but I don't know what it is.
  • I would turn off Read-Ahead. I didn't get any performance gains turning it on. If you can blow it away, try creating different RAID types (1,5,10) with different stripe sizes to see if there's any difference. You may also want to check to make sure the array isn't initializing in the background. There's definitely something weird going on if you're only getting 30MB/s. You should get more than that with a single drive.
     
    What are you going to use the server for?
     
    D
    The server is going to be backup to disk, so performance is imperative. I'm still implementing, so blowing it away is an option. I copied some data to the array, then tried to stream it to LTO3 tapes (Fibre Channel) and that was slow, in the 20-30MB/sec range. Then I started further troubleshooting and found I couldn't do anything much faster than that. I'm running HP's disk analysis tools, NetBackup's performance testing tools, IOMeter, Robocopy, BM_Disk - you name it. Some give me slightly better performance but they are all in the same 20-30MB/sec range, with slightly faster writes than reads.
     
    Quick question: which slot is your PERC5/E in? I've got a QLogic HBA in the single PCIe slot, and my PERC5/E is in the riser slot. It shouldn't matter, but at this point I'm "thinking outside the box" because it's just too slow.
     
    -Jonathan
    I'm using an identical configuration to yours. MD1000 with 15 500GB disks and a QLogic card in one of the slots (the top one if I remember correctly). We are getting 230-280MB/s write per 7-disk array. We created two 7-drive RAID 5 arrays, leaving a hot spare on the end. We tried RAID 10 and it was slower for writes. We used two arrays to reduce the rebuild time if a drive dies.
     
    D
  • It looks like my issue is more 2950-related than MD1000-related. I ran a local copy operation on the MD1000 and got the following results.
     
    Version Number:   Windows NT 5.2 (Build 3790)
    Exit Time:        6:30 pm, Wednesday, January 10 2007
    Elapsed Time:     0:09:57.156
    Process Time:     0:00:56.062
    System Calls:     7353606
    Context Switches: 3669395
    Page Faults:      12234412
    Bytes Read:       2111274965
    Bytes Written:    376288125
    Bytes Other:      1806942685
     
    The MD1000 was actually READING a lot more data than was being copied, and the server was producing page faults. I tried some of my read/write testing on the PERC5/i w/ locally attached 146GB 15K SAS drives and got the same performance numbers. I've escalated the ticket to the server group, and I'll let everyone know what I find.
  • Any update on this? Thinking of buying / attaching an MD1000 w/ 15 500GB drives to our PowerEdge 2950, so this is important info to know.

  • We love our setups. We have 10 or 11 2950's with MD1000's. Most are SQL boxes but one of them is a backup box with 15x500GB drives. If you want added redundancy, look at the new MD3000. I haven't used one yet, but it solves the problem of a lack of redundancy in the MD1000.
     
    D
     
  • So far I've tested numbers as high as the following (sustained)
     
    100MB/sec WRITE
    120MB/sec READ
     
    Configuration:
     
    Dell PE2950
    2 x Dual Core Xeon Processor 5060, 2x2MB Cache, 3.20GHz, 1066MHz FSB
    4GB 667MHz (4 x 1GB), Dual Ranked Fully Buffered DIMMs
    PV MD1000 15 x 500GB SATAII Drives
     
    Disk Config:
    14 Disk RAID5 w/ 64KB Stripe Size, Adaptive Read Ahead, Write Back
    Windows 2003 SP1, Disk Type: GPT - Formatted NTFS w/ 64KB Cluster Size
     
    Application Config:
    Symantec Netbackup 6.0 MP4
    DISK_BUFFERS_NUM 64
     

    The MD3000 only uses SAS drives, which are out of our budget. For redundancy we will use our current 2 PV220S enclosures, tape, or the other half of the unit (7 drives).

  • Anyone ever try running the MD1000 on CentOS 4 with LVM? We're getting some pretty disappointing performance too.
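
    As a quick baseline we've been checking sequential reads straight off the block device and then back through LVM and the filesystem, something like this (the device and file names are placeholders, not our actual layout):

    # raw sequential read off the PERC virtual disk
    hdparm -t /dev/sdb
    # read a large file back through LVM + the filesystem
    dd if=/mnt/md1000/bigfile.iso of=/dev/null bs=1M

    If the raw number looks fine but the filesystem number doesn't, that at least points at the LVM/filesystem layer rather than the PERC or the enclosure.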