I have a SQL database running on a Dell PowerEdge R910. I use a PCIe SSD drive to host the database files. I'm running out of space and considered several options, including the 3.2TB OCZ Z-Drive R4 and the 2.4TB Fusion-io ioDrive2 Duo, but neither will work with this server (it doesn't accept full-length cards and doesn't supply more than 25W to a slot).
Currently I'm thinking about buying a Dell PowerVault MD1220, filling it with SATA or SAS SSDs and connecting it to something like the H810 (which, I understand, uses the latest LSI 2208 chip, like the LSI MegaRAID SAS 9285-8e).
Has anyone used the MD1220 with SSDs other than those sold and supported by Dell? What is your experience with reliability and performance?
The drives that are Dell certified are enterprise drives that have different firmware loaded on them to work with the MD series. If you use a 3rd-party drive in the MD, it will spin up and look for that firmware to ensure it is the correct type of drive; if it doesn't find it, the drive is powered back off. Sometimes you can get lucky and get one to work, but if the drive is not on the supported list it will not work in the MD.
Dell | Social Outreach Services - Enterprise
For the MD1220, SAS SSDs will be faster than the SATA drives. With SQL database files running on them you should see better performance. It also depends on which RAID level you set to get the best speed. Let us know if you have any other questions.
Also, if you're going to get the drives from a 3rd party, you need to make sure they are Dell-certified drives that are supported by the MD series. Here is a link to the drive matrix; on pg. 7 is the list of all supported drives for the MD series.
Let us know if you have any other questions.
You say that SAS SSDs will be faster than SATA SSDs - how come? When it comes to "normal" drives, the "near-line" 7.2k SAS drives are usually the same drive as the SATA version, just with a different board attached. In the case of SSDs it is more complicated, as I don't know of any drive that is sold both as SAS and SATA, perhaps with the exception of the Deneva / Talos series from OCZ - they both use the same SandForce controller, and according to their specifications the SATA version is actually faster (80k IOPS on 4k writes vs 35k for SAS). Anyway - neither of them is supported according to the link you've sent me.
By the way - is Dell actively preventing unsupported drives from working in that enclosure or is it just not supported as in Dell can't guarantee that they will work well or at all?
The SAS interface is faster than the SATA interface. Where SATA can only communicate on one port, a SAS drive will communicate on two ports.
Also, the Dell SSD provided for the MD1220 is about 3x faster than a "regular" SSD.
You are never really going to do 4k writes. If you look at 8k writes, the numbers drop drastically.
Do you know how big your writes are?
Also, have you looked into CacheCade SSD for the R910? It effectively enlarges the cache of the RAID controller.
First of all - the SQL Server my R910 is running is most likely doing 99% reads and 1% writes, so I don't care about writes at all. It is random reads that I'm worried about, and that is why I need SSDs. Writes can be queued and cached, which makes them much less random. You can't do much about reads.
Actually, according to Task Manager it is 65/35, but most of these writes happen either when I add more data to the system (which is not when users access it, and it is fast enough for me) or during maintenance (backups).
I get that the theoretical linear speed of SAS is higher, but you only achieve this with linear, large-block transfers.
I need SSDs to do small-block random reads, so I will never get anywhere near the maximum bandwidth of the interface. Actually - that's not true. I do - during backups - they move 600GB in 40 minutes, and the only reason they are not faster is that they go to "spindles" rather than SSDs.
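For what it's worth, that backup figure works out to a sustained rate in the 250 MB/s range - a quick back-of-the-envelope check (numbers taken from the post above, not re-measured):

```python
# Rough sustained throughput of the backup quoted above:
# 600 GB moved in 40 minutes.
size_mb = 600 * 1024        # 600 GB expressed in MiB
seconds = 40 * 60           # 40 minutes
throughput = size_mb / seconds
print(f"{throughput:.0f} MB/s")  # 256 MB/s
```

That is well within what a handful of spindles can stream sequentially, which fits the point that the backup is bandwidth-bound on the "spindles", not on the interface.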
I thought about CacheCade, but my problem is that instead of one big database with small "hot spots", I have several smaller databases that are in constant use, and they add up to more than 512GB.
But you are right - I should first check what my block size is. It is probably at least 64k, because it is MS SQL, and perhaps even more, because it is running on 2008 R2, which likes to read 1-2 MB chunks. Do you know any better tool for that than the built-in Performance Counters (Avg. Disk Bytes/Read)?
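That counter is just a ratio of two raw numbers - total bytes read divided by the number of read operations over the sampling interval. SQL Server exposes the same raw numbers per database file (num_of_reads and num_of_bytes_read in sys.dm_io_virtual_file_stats), so you can compute it yourself; a minimal sketch, with made-up sample values:

```python
# Avg. Disk Bytes/Read is a ratio of two raw counters sampled over
# an interval: (bytes read) / (read operations).
# The sample numbers below are invented for illustration.
def avg_read_size(bytes_read_delta, reads_delta):
    """Average bytes per read over a sampling interval."""
    return bytes_read_delta / reads_delta if reads_delta else 0.0

# e.g. 512 MiB read in 8,192 read ops -> 64 KiB per read
print(avg_read_size(512 * 1024 * 1024, 8192))  # 65536.0
```

Sampling the deltas per database file rather than per disk would also tell you which of the databases is generating the large reads.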
How big are your databases? You say you back up 600GB in 40 minutes. Is that the total size? If so, why not just go for internal SSDs?
Dell has a tool called DPACK that you can run on your system; send the iokit file it generates to your Dell rep and they can give you a detailed report of your disks' performance.
The total size of my MDF is about 600-700GB, but it is growing fast. My internal PCIe SSD is 1.2TB and I can't find anything bigger that would fit in the R910. I'm sorry to say it, but this server really sucks. It is only good at holding lots of RAM - it can go up to 1TB - but it doesn't accept full-length PCIe cards and it doesn't supply more than 25W of power to a PCIe slot. I wish the R720 had been available when I was buying, as it has 4 full-length slots, so I could have had 6.4TB mirrored in 4x 3.2TB Z-Drive R4s.
I still have 8 internal slots free, so if I want to mirror them, I could make an array of 4 drives. The problem is, apart from the Talos 2 drives, which apparently top out at 800GB, nobody else sells more than 400GB per drive, and that gives 1.6TB, which is not really enough. Alternatively, I could buy an enclosure, put my internal drives into it and have 16 SSDs in it. But I have the H700, which uses the old, slower LSI chip, not the H710P with the amazing LSISAS2208 that does 500k IOPS when running FastPath. But then, I will probably not get anywhere near 200k IOPS anyway...
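The capacity math behind the mirrored options being weighed here is simple - usable space in a striped mirror (RAID 10) is half the raw total. A quick sketch, using the drive sizes mentioned above:

```python
# Usable capacity of a RAID 10 (striped mirrors) set: half the raw total.
def raid10_usable_tb(drive_count, drive_gb):
    return drive_count * drive_gb / 2 / 1000  # decimal TB

print(raid10_usable_tb(8, 400))  # 1.6 -> eight 400GB drives as 4 mirrored pairs
print(raid10_usable_tb(4, 800))  # 1.6 -> four 800GB Talos 2 drives
```

Either layout lands at 1.6TB usable, which is why neither quite covers a 600-700GB database that is "growing fast" with comfortable headroom for long.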
If I remember correctly, Fusion-io has 5.4TB PCIe SSD cards.
That should give you approx. 1 million IOPS per card...
Yes, I know - the Octal. It is about 50k, but the real problem is that it is full length - it won't fit into the R910.
I was considering their 2.4TB ioDrive2 duo, but it draws 55W and R910 supports only 25W per slot :(
But the R910 has a power tap to provide external PCIe power to the 2.4TB Duo, and the Duo comes with the power cable to use this tap. I know all this because I own an R810 and a 2.4TB Duo, and I am not a happy camper. Not happy, because I found out after the fact that the R910 is the only R-series server that has the power tap - so you can use the 2.4TB Duo at full performance. I cannot :(
Thank you very much for this!
It seems Dell people in the UK don't know their products. I was told only the 1.2TB card was supported in the R910.
This PowerPoint is a bit outdated and doesn't mention the ioDrive2 Duo, only the ioDrive Duo, so the max capacity is 1.2TB anyway. I've tried to contact one of the people who, according to the presentation, is responsible for the UK, to ask about the ioDrive2 Duo, but the email bounced - I need to try again, maybe with some US people.
This connector is not well documented. But here is a picture from the R910 Technical Guide. You can see it in an unrelated photo on page 27: http://www.dell.com/downloads/global/products/pedge/en/poweredge-r910-technical-guide.pdf
You will need a special cable or adapter other than the one included with the FusionIO card to tap this power socket.