I’ve had some 5000/5500 units in production for many years and have been quite happy with them. The one thing we never did was “stack” them into combined storage pools. I’m wondering if people have had experience with this and whether there were significant performance gains.
Related to the 6000/6100 series, I was hoping to get a definitive answer from someone here regarding the use of non-Dell drives. I’ve had zero issues with my 5000/5500 units, but there seems to be a great deal of conflicting info online regarding this.
Yes, combining members into a single pool can increase performance, especially if you use HIT/ME, HIT/LE, or, with the proper ESX license, MEM. This optimizes MPIO, especially for multi-member pools. The data for each volume is striped across the available members, which means more cache, more CM resources, and more drives are working for all the volumes in that pool. Also, with current firmware, if one member is busier than the others, they will swap out pages to bring the latencies back into parity.
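To make the striping idea concrete, here is a minimal sketch. This is purely illustrative and not EqualLogic firmware behavior; the function name and round-robin placement are assumptions, but they show why a volume spread across members gets every member's drives and cache working for it:

```python
# Illustrative sketch only: assign a volume's pages to pool members
# round-robin, so I/O to the volume is spread across all members.
# Real arrays use internal balance plans, not this simple scheme.
def stripe_pages(num_pages, members):
    """Return a mapping of member name -> list of page numbers it holds."""
    layout = {m: [] for m in members}
    for page in range(num_pages):
        layout[members[page % len(members)]].append(page)
    return layout

layout = stripe_pages(12, ["member-A", "member-B", "member-C"])
for member, pages in layout.items():
    print(member, pages)  # each member serves an equal share of the pages
```

With three members, each one ends up serving a third of the volume's pages, which is the source of the near-linear scaling mentioned below.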
No, you can't use non-Dell drives in the 6000/6100 series arrays. If you have been using non-Dell drives with your 5000/5500, you are very lucky, especially regarding longevity: recent firmware from the drive vendors has addressed a number of issues with premature drive failure. The firmware update utility will not work on non-Dell drives.
Social Media and Community Professional #IWork4Dell — Get Support on Twitter: @dellcarespro
Is there some sense as to what kind of performance improvement can be seen on SATA 5000 arrays? I know that's a subjective question. I guess I'm wondering if, in general, it's just something you should do in most cases. Is it worth going through with, given that it can't be backed out of without resetting the arrays?
It's very hard to quantify, but you will definitely see improvement in IOs per second and lower latency. Especially when you can use HIT or MEM, performance scales nearly linearly as you add members.
FYI: If you have enough space, you can back it out and go back to single member pools. This might require a loaner array to accomplish.
Reasons NOT to merge members into a single pool:
1.) You can't tolerate the possibility of all volumes going offline if a single member should fail. Members are designed with as much redundancy as possible, but failure is still a possibility. (Since the data is striped across members, all members holding data must be online.) By default, a volume will only stripe across three members in a pool; if there are more than three, the array alternates which three members hold which volumes.
2.) You are near the max connection count in each pool already. The connection limit is per pool, so if you were at 600 connections in each pool right now, merging them would put you over the 1024 limit.
3.) You want to leverage Synchronous Replication which requires two pools.
The nice thing is that the merge logically happens nearly instantly. The array then creates balance plans to restripe the data across the new members. This is done in the background as a lower-priority operation versus new I/O requests from a server.
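The "lower priority" part can be sketched with a simple priority queue. This is only an analogy for the scheduling idea, not the array's actual firmware logic; the operation strings and priority values are invented:

```python
# Illustrative only: background restripe work yields to host I/O.
# Python's heapq pops the smallest tuple first, so lower value = served first.
import heapq

HOST_IO, REBALANCE = 0, 1  # host requests outrank background page moves

queue = []
heapq.heappush(queue, (REBALANCE, "move page 7 to member-B"))
heapq.heappush(queue, (HOST_IO, "read vol1 block 42"))
heapq.heappush(queue, (HOST_IO, "write vol2 block 9"))

served = []
while queue:
    priority, op = heapq.heappop(queue)
    served.append(op)

print(served)  # host I/O is served first; the rebalance move runs last
```

The point is that server I/O never waits behind the restripe, which is why merging doesn't cause a noticeable outage even though data movement continues in the background.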