We have just purchased a new R710 server with a PERC H700 RAID controller. The server has four 50GB SSDs, which are re-badged Samsung drives; Dell Pro Support confirmed them to be this model: http://www.samsung.com/global/business/semiconductor/products/SSD/Products_Enterprise_SSD.html
As you can see from the specs on that page, the drives' performance is quite poor:
read 230 MB/s
write 180 MB/s
Why has Dell gone for such poor drives for their servers? The performance of these drives is barely better than 15k SAS drives. Dell says their enterprise-class drives use SLC flash rather than MLC flash because SLC lasts longer.
Yet if you look at the MTBF of the Samsung drive above, it is rated at 2 million hours. Compare that to an OCZ Vertex 3 drive, which uses MLC flash: the MTBF is exactly the same.
How, then, is it better to use an SLC drive from Dell than to just go out and buy a much cheaper OCZ drive? The H700 supports SAS 2.0 and SATA 3 at 6Gb/s. Does anyone here have experience connecting OCZ drives (or any other SATA 3 SSD, for that matter) to an H700 controller?
MLC memory cannot handle as many writes as SLC memory. Most MLC SSDs are built for home use, which usually involves far fewer writes than enterprise servers see.
So, just playing with some numbers I found in some of the articles linked at the bottom:
SLC memory can do 100,000 writes per cell
MLC memory can do 10,000 writes per cell
Say an average home user writes to each cell twice a day (on a 120GB drive that would mean writing 240GB to the drive each day, which isn't going to happen). A cell would then 'die' after 5,000 days. (Note: this doesn't take wear leveling into account, which spreads writes so you don't necessarily hit the same cell each time; I'm just estimating how often you actually write to the same cell.)
Now say you have a high-IO database that writes 50 times to each cell per day (why bother with SSDs if you have a low-IO database?).
With MLC memory, a single cell would have an expected lifespan of ~200 days. SSDs have spare cells, but at 200 days per cell you will burn through the spares within a year.
SLC memory would last about 2,000 days in this case, which is roughly 5.5 years. If you then take the spare cells into account, you can expect a realistic lifespan of 7+ years. Enterprise equipment usually has a functional lifespan of 5 years, so 7+ years for the drive is acceptable.
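The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. This is just the post's own numbers plugged into a formula (rated writes per cell divided by writes per cell per day); the endurance figures and daily write rates are the illustrative assumptions from the text, and it ignores wear leveling and spare cells exactly as the text does.

```python
# Illustrative endurance math from the post; the write-cycle ratings and
# per-day write rates are assumptions, not measured figures.
SLC_WRITES_PER_CELL = 100_000  # assumed SLC rating
MLC_WRITES_PER_CELL = 10_000   # assumed MLC rating

def cell_lifespan_days(rated_writes: int, writes_per_cell_per_day: float) -> float:
    """Days until a cell reaches its rated write count (no wear leveling)."""
    return rated_writes / writes_per_cell_per_day

# Home user: ~2 writes to each cell per day on MLC
print(cell_lifespan_days(MLC_WRITES_PER_CELL, 2))    # 5000.0 days

# High-IO database: ~50 writes to each cell per day
print(cell_lifespan_days(MLC_WRITES_PER_CELL, 50))   # 200.0 days
print(cell_lifespan_days(SLC_WRITES_PER_CELL, 50))   # 2000.0 days (~5.5 years)
```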
So far, I think there's only one 6Gbit/s controller on the market that supports SLC memory (controllers have to specifically support SLC): SandForce's SF-2500/2600.
Sources (found via Google):
I don't have experience with enterprise-class SSDs (although I am an OCZ evangelist for desktop SSDs), but I think you would find even an OCZ "desktop" drive falling far short of its potential, as it is not an enterprise-class drive and is not certified for the H700. Besides poor performance, you may see drives go offline or hit other issues, because the drive will not know how to respond to the many custom commands sent by the controller. If you want details on exactly what causes poor performance with enterprise controllers and/or desktop-class drives, head over to Experts-Exchange and ask your question (there's a 30-day trial if you are not already a member). Maybe someone here has the experience and knowledge to give you the answer you are looking for, but there is an Expert there who works with drives, controllers, and their firmware who could go into as much detail as you need.