I have been doing a lot of reading (never good) and have become very nervous about two issues. 1) Is it true that RAID 5 performance will be <ADMIN NOTE: Profanity removed as per TOU> because the S100 is a software RAID? Just FYI, this is a small business (photography studio) network that does not get heavy use. It is not in use 24/7, and the data is almost entirely photos. I plan to use four 2TB 7,200 RPM drives. As a point of reference, the server currently has two 1TB drives in a RAID 1 array giving me 50 MB/s reads on average. I plan to replace that with the new RAID 5 array. Any thoughts on whether or not I will see a performance gain over the current 50 MB/s?
Also, I could use some advice on setting up the new array. I've read that booting in BIOS mode (as opposed to UEFI) may cause complications, with the maximum size of the boot partition being limited to 2TB. Any thoughts?
I plan to make an image backup of the current disk array and then restore it to the new RAID 5 array. Will this preserve the current partitions? Or will I need to arrange the new partitions in the RAID software? I very much want to avoid any situation where I cannot utilize the full amount of space on the new array.
The current array is set up to boot in BIOS mode. Is there any great advantage or other reason why I would want to change that arrangement and boot via UEFI?
Need some help please.
RAID 5 is inherently slow on writes because of the parity data that must be generated whenever data is written to the drives. A hardware RAID controller takes all of that overhead upon itself; with a software RAID solution, the host's CPU and memory are used instead, which will cause performance degradation, especially on a moderately used system. RAID 1 or RAID 10 would be preferable.
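To make the parity overhead concrete, here is a toy sketch of how RAID 5 parity works in principle: the parity strip is the bytewise XOR of the data strips, so it must be recomputed on every write, and any one lost strip can be rebuilt from the survivors plus the parity. This is a simplified illustration, not the S100's actual implementation.

```python
def xor_parity(strips: list) -> bytes:
    """XOR corresponding bytes of equal-length data strips."""
    parity = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            parity[i] ^= b
    return bytes(parity)

def rebuild_strip(surviving: list, parity: bytes) -> bytes:
    """A lost strip is the XOR of the parity and all surviving strips."""
    return xor_parity(surviving + [parity])

# Three data strips plus one parity strip (a 4-drive RAID 5 stripe):
data = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(data)

# Simulate losing the middle drive and rebuilding its strip:
assert rebuild_strip([data[0], data[2]], p) == data[1]
```

On a software RAID, that XOR work (plus the extra reads it forces) runs on the host CPU for every write, which is the degradation described above.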
Windows cannot use a partition larger than 2TB unless the "disk" is converted to GPT, and Windows cannot boot from a GPT disk unless it is a 64-bit version of a Vista/2008-family OS (or later) installed on a UEFI-enabled system. So, with 2TB drives, you will either need to create two RAID 1 arrays or install a 64-bit OS on a GPT disk in UEFI mode. Higher-end controllers can "slice" the disks, creating multiple arrays across the same set of disks: one can be smaller and left as MBR, the other can be converted to GPT and made larger.
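The 2TB figure is not arbitrary: the MBR partition table stores sector counts in 32-bit fields, and sectors are traditionally 512 bytes. A quick back-of-envelope check:

```python
# Why MBR tops out around 2TB: 32-bit LBA fields x 512-byte sectors.
SECTOR_BYTES = 512
MAX_SECTORS = 2**32          # largest value a 32-bit MBR field can hold

mbr_limit_bytes = MAX_SECTORS * SECTOR_BYTES

print(mbr_limit_bytes)              # 2,199,023,255,552 bytes
print(mbr_limit_bytes / 10**12)     # ~2.2 decimal ("drive-maker") TB
print(mbr_limit_bytes / 2**40)      # exactly 2 TiB, shown by Windows as 2TB
```

GPT uses 64-bit LBAs, which is why converting the disk removes the limit entirely.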
If you leave it in BIOS mode, your only option will be to configure two separate RAID 1 arrays of 2TB each - assuming you do want to use as much disk space as possible.
The current disk does not have any partitions larger than 2TB, so if I make an image backup of that arrangement, it should restore to the new RAID 5 array with no problem, right? Then can I simply partition out the remaining space into 2TB chunks?
Also, could you be more specific when you say RAID 5 is slow? The current two-disk RAID 1 array gives me 50 MB/s reads on average. Do you think a four-disk RAID 5 array will be slower than that?
There should be no issues restoring a partition backup to a partition - the problem comes when you start doing partition backups to entire disks, or disk backups to partitions.
As long as you have your partitions sorted out and converted to GPT, it should just be a matter of dumping the contents of one partition to another.
Just know that if you configure a RAID 5 with all four disks in BIOS mode, Windows, at best, will recognize ONLY 2TB. If you configure a RAID 5 with all four disks in UEFI mode, then convert the "disk" (by "disk", I mean the Virtual Disk or array that is given to the OS to use, not partitions) to GPT, Windows will then be able to install and boot to that "disk" and partition it how you like.
Writes on a RAID 5 are slow, making it a bad idea for database work, compiling, encoding, etc.; reads are fast. I don't have any benchmark information for you, but reads "should" be faster than 50 MB/s whether you go RAID 5 or RAID 10 - and RAID 10 writes are much faster than RAID 5 writes.
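The write gap between the two levels can be sketched with the classic write-penalty rule of thumb (generic RAID behavior, not anything S100-specific): a small random write costs 4 disk I/Os on RAID 5 (read old data, read old parity, write new data, write new parity) but only 2 on RAID 1/10 (write both mirror copies). The per-drive IOPS figure below is an assumption, just a rough number for a 7,200 RPM drive.

```python
# Back-of-envelope random-write throughput under the RAID write penalty.
def effective_write_iops(disks: int, iops_per_disk: int, penalty: int) -> float:
    """Total array IOPS divided by the I/Os each logical write costs."""
    return disks * iops_per_disk / penalty

DISK_IOPS = 80  # assumed rough figure for one 7,200 RPM SATA drive

raid5  = effective_write_iops(disks=4, iops_per_disk=DISK_IOPS, penalty=4)
raid10 = effective_write_iops(disks=4, iops_per_disk=DISK_IOPS, penalty=2)

print(raid5, raid10)   # RAID 10 gets roughly twice the random-write IOPS
```

Large sequential writes fare better than this worst case, but the 4-vs-2 penalty is why RAID 5 is a poor fit for write-heavy workloads.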
Finally somebody with genuine knowledge! I like you very much!
Now you are getting to the real heart of my issue. Here's my question:
If the backup image was made from a system currently configured in the MBR environment (as opposed to GPT), will that backup restore to a GPT array?
Whether you can restore your backup to a GPT partition depends on the type of backup you have and the software used to restore. I would suggest asking the creator of your backup software for specifics.
Ughh! I just came across a worse problem. Apparently, when RAID is enabled in the BIOS, UEFI is disabled. I just read it in the Dell T110 manual. Why would that be?
So, that takes me back to having to deal with BIOS and MBR. Will the BIOS or OS see the 8TB array and allow me to partition it into a number of 2TB or smaller partitions? I'm OK with that, but I have a big problem if the storage space is not seen by the BIOS or the OS. FYI, I'm using Windows Server 2008 R2.
I've not heard that - and I can only speculate that the S100 (Dell variation of Intel's chipset-integrated RAID) is not compatible with UEFI.
Remembering that four 2TB drives in a RAID 5 will net you under 6TB of array space for your Windows "disk":
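The "under 6TB" figure follows from RAID 5's (n - 1) usable-capacity rule plus the decimal-vs-binary terabyte mismatch: drives are sold in decimal TB, but Windows reports binary TiB, so the 6TB array shows up smaller in Disk Management. A quick check of the arithmetic:

```python
# Usable space from four 2TB drives in RAID 5: one drive's worth
# of capacity is consumed by parity, so usable = (n - 1) x size.
TB = 10**12                       # decimal terabyte, as drive makers count
drives, drive_size = 4, 2 * TB

usable_bytes = (drives - 1) * drive_size   # 6,000,000,000,000 bytes
as_windows_tb = usable_bytes / 2**40       # Windows reports binary TiB

print(as_windows_tb)   # ~5.46 "TB" in Disk Management
```

That matches the point above: the array nets you "under 6TB" before you even carve out an OS virtual disk.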
If you leave it in BIOS mode, you can't convert your disk to GPT, because Windows can't boot to it. If it is not GPT, then Windows will only be able to see 2TB of it.
In looking at the documentation for the PERC S100, it appears that you can create multiple Virtual Disks (arrays) across the same set of disks, so here are your options:
1. Configure two VDs (Virtual Disks): one large enough for your OS to be installed on, maybe 100GB, and one with the remaining space. The first will be your MBR disk so you can boot without turning on UEFI mode; the second VD (which will show as a second "disk" in Windows) you will need to convert to GPT so you can use the remaining space (roughly 6TB, less the 100GB OS slice and formatting overhead) as a secondary disk.
2. Get a better PERC controller (it kind of irks me that they call the S100 and S300 "PERC" controllers; historically that name has been reserved for real/hardware RAID controllers), like the 6/i or H700.
Great information! Is it possible to simply enable UEFI first, and then enable RAID from within UEFI? Maybe it's just a procedural issue. I know this system can be booted via UEFI. It would be really weird if RAID could not be enabled from within UEFI. I'm going to check with Dell on the compatibility issue and get back to you.
You can try it, but I would suspect something like that would be documented ... if they went to the lengths to say it would be disabled. The embedded Intel RAID may simply be a BIOS-enabled feature and not compatible with UEFI, but check.
I checked and the tech indicated that RAID can be enabled once in the UEFI mode. Not sure it made any difference, but I emphasized that I needed accurate information. He assured me I could enter system setup through UEFI and enable RAID. Apparently there is something about enabling it in the BIOS mode that disables UEFI.
I will know whether it was accurate information in a few days when I attempt it. I will use your virtual disk information as plan B.
I WILL POST THE RESULTS.
That's a new one, but since it is an integrated device, I guess I can see that. Good luck.
OK, just a follow-up. I successfully deployed the RAID 5 array. It was TRUE that RAID is totally unavailable from within UEFI mode, but it was easy to create two RAID 5 virtual disks from within the BIOS. I created one at 100GB for the OS, because an MBR-type disk can't be larger than 2.2TB. Then I created the second disk with all of the remaining space. Once I deployed the OS, I went into Disk Management and converted the large 5.6TB disk to a GPT dynamic disk, which can be very large.
Concerning the backup, I simply did a file backup rather than an image backup, then copied it onto the large D drive after OS deployment. It went very well and is now fully operational.
However, I'm only getting 50-60 MB/s on both reads and writes. For writes that seems decent, but for reads it seems slow. Are my expectations too high, or is there possibly something configured wrong?