This discussion focuses on combining Internet SCSI (iSCSI) and virtualization technologies to help boost data center functionality and administrative flexibility.

Featured guest speakers:

  • Brian Sitton, Dell Systems Consultant, Virtualization Solutions Engineering
  • Puneet Dhawan, Dell Systems Engineer, Virtualization Solutions Engineering

Technical Community - Background Reading

Chat Transcript

Dell-KongY Thursday's 3 PM chat will be the main course.
ceri Thursday is part 2?
Dell-KongY @ceri, yes part 2 is iSCSI Security and Best Practices. We have confirmed three big-time players in Dell iSCSI strategy for part 2
ceri Oh, cool. Will we discuss IPsec?
Dell-KongY You'll have a face-to-face chat with a technical director, storage strategist, and CTO technologist at Thursday's chat for sure :)
ceri Ace.
Dell-KongY Today is just virtualization and iSCSI in general. With focus on Dell's business ready configurations
ceri I've read that paper, looked good
Dell-KongY Awesome
Dell-KongY Speak of the man…gents, let me introduce Brian Sitton. He's one of the lead engineers from Dell PG who designed the business-ready configuration
Dell-KongY Welcome Brian
Dell-BrianS Thanks
ceri Hello Brian, was saying I'd read that paper on the weekend. Was a good read
Dell-KongY Brian was also instrumental in putting together the networking paper as well
Dell-BrianS Glad it was useful
DELL-ScottH Hello Brian!
Dell-KongY Welcome T_moore! welcome Agldave!
AGLDave Howdy
Dell-KongY Please right-click and open in new window; otherwise, it will kick you from the chat
Dell-KongY Welcome Sthersh
sthersh Hi
Dell-KongY You can also select Action, Recent Room History to get recent chat dialogue
erson Hi all
Dell-KongY Welcome Erson!
Dell-KongY Alright, I guess we can get started. Welcome everyone. Today's chat is on iSCSI and virtualization with a focus on Dell's business-ready configurations. I have with me the regular TechCenter SMEs, Jeff and Scott
erson Hi Jeff
Dell-KongY In addition, we have Brian Sitton, systems engineer from Dell's Virtualization Solution Engineering team. Brian was one of the lead engineers who designed and delivered Dell's business-ready configuration for ESX 3.5 U4. He's also working on the vSphere 4 bundle
ceri Could Brian maybe explain how this would change, if at all, for vSphere?
Dell-BrianS vSphere has some interesting new features. Jumbo Frames for one
Dell-JeffS Hey Erson
Dell-BrianS It also has a new storage architecture that allows multiple VMkernel network connections and round-robin load balancing. Practically, that means you no longer have to use Etherchannel for iSCSI load balancing
Dell-KongY Before I forget, this chat is part one of a two-part series focused around iSCSI
Dell-KongY Welcome D_glynn!
ceri Etherchannel was to avoid the single session per target?
DELL-ScottH Hello puneet !
Dell-KongY Welcome Puneet Dhawan!
erson Brians, is that MPIO?
Dell-BrianS Yes, MPIO
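The MPIO setup Brian describes can be sketched with the vSphere 4-era service console CLI. This is an illustrative fragment only: the VMkernel port names (vmk1, vmk2), adapter name (vmhba33), and device ID are placeholders, not values from the chat.

```shell
# Bind two VMkernel ports to the software iSCSI adapter (vSphere 4 esxcli)
# so the storage stack can use multiple paths; names below are examples.
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Set the path selection policy for a LUN to round-robin
esxcli nmp device setpolicy --device naa.6090a038... --psp VMW_PSP_RR
```

With both VMkernel ports bound, round-robin spreads iSCSI sessions across the links, which is what removes the need for Etherchannel-based load balancing.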
Dell_PuneetDhawan Thanks guys. Good to be back on TechCenter chat
Dell-d_glynn Hey Kong, I was catching up on the history
Dell-KongY Puneet is a systems engineer from the Dell VSE team, and he is leading another effort on the Dell business-ready configurations
erson So Hyper-V had at least one feature first then... ;)
ceri Ok, thanks Brian. Very useful
Dell-d_glynn :D
Dell-KongY He also was kind enough to share his design on PowerVault MD3000i and vSphere:
ceri Shall I go again? ;-)
Dell-KongY Again this is the first part of a two-part series
Dell-KongY Please do, Ceri
Dell-KongY The Second part will be a discussion on iSCSI security and best practices led by Jeff S
ceri Okay, the business ready configurations obviously use a lot of I/O modules; if 10GbE were involved, I assume that most of those could go away and VLAN tagging could be used more heavily instead. Any problems with that as a principle?
Dell-KongY Featured will be Dell's EqualLogic Director, Dell's EqualLogic storage strategist, and a Dell technologist
Dell-BrianS No, that sounds like the correct approach for adding 10GbE to that configuration. We are currently talking about it, but I don't know yet if we will be doing lab work with that configuration
ceri That's exactly what I wanted to hear (since it's what I have in my design documentation!) Thanks
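The VLAN-tagging approach ceri and Brian agree on can be sketched with the classic ESX commands for tagging a port group on a vSwitch. The vSwitch, uplink, port-group names, and VLAN ID here are assumed examples, not part of any Dell configuration.

```shell
# Example only: carry iSCSI on a tagged VLAN over a consolidated uplink.
esxcfg-vswitch -a vSwitch1                 # create the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1          # link the (10GbE) uplink NIC
esxcfg-vswitch -A iSCSI vSwitch1           # add an iSCSI port group
esxcfg-vswitch -v 20 -p iSCSI vSwitch1     # tag that port group with VLAN 20
```

Repeating the last two lines with different port-group names and VLAN IDs is how several dedicated I/O modules could collapse onto fewer 10GbE uplinks.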
Dell-KongY @ceri, please do tell about your design doc ;)
Dell-KongY So how many folks here are using iSCSI and virtualization in their current environments?
ceri I'm at the beginning of spec'ing the next two years worth of ESX infrastructure for my organization; we use FC for storage at the moment, but I want to be able to add in either NFS or iSCSI at a later date
Dell-KongY Welcome, rogerlund!
rogerlund Me
erson Roger lund, you got to be from Sweden?
erson /me holds up both hands to Kongy’s question
ceri So I am going 10GbE because it a) is cheaper with many networks and b) lets me be lazy
rogerlund No, but maybe some distant relative
erson But I should need some recommendations for switches for my SAN network
ceri Just wanted to check that I would have the future flexibility I thought I was going to get ;-)
rogerlund Cisco 3750s. Gig switches of course
Dell-BrianS I really like the idea of 10GbE
erson Rogerlund, only PowerConnect at this shop I'm afraid
Dell-BrianS I have been happy with our 6248 switches. PowerConnect that is
erson But the upgrade path for 10GbE could be a problem with 6224/48
Dell-BrianS For our bundle testing we used 6220 with 10GbE modules in our client chassis connected to the Cisco modules in our bundle chassis
rogerlund Cisco Catalyst 4900m
erson Wasn't there some chatter a while ago about the 6220 not being suitable for iSCSI because of small buffers or something along those lines?
Dell-BrianS Not sure about the iSCSI, I have not heard that chatter
Dell-KongY @erson, how many 10GbE ports do you need?
erson Chatter was on Dell TechCenter discussions
Dell-BrianS Maybe that will add some urgency to my request to do some lab testing with them ;=)
rogerlund What's the best IOPS/MBps with something like Iometer and the new PS Series with 15,000 RPM SAS drives? What about 1GbE versus 10GbE?
Dell-KongY Welcome Matt_carpenter
erson Kongy, none today, but I'm keeping fabric c open for 10GbE expansion
ceri So what kind of aggregation options do I have with Dell and 10GbE? No desire to buy a 10GbE switch port per blade, so would need some kind of uplink option
Dell-KongY @erson, understood
Dell-KongY Welcome Tonygras
erson I'm guessing there's no 10GbE option on an EqualLogic PS6000E, so that will probably not be an available upgrade
tonyhgras Hi everybody
Dell-BrianS Back to the 6220 and iSCSI. I think maybe the early firmware had some issues, but the latest firmware resolved them. I believe the 6220 is now certified by Dell EqualLogic
erson Ceri, 6220 is your only option. M8024 is 10GbE only
Dell-BrianS The 6220 can have two add-in modules
erson Yes
Dell-JeffS Hey Tony
Dell-BrianS The first module is either stacking, or 10GbE
ceri Okay, note to self: do not open new tab in this window
tonyhgras Hey jeff
Dell-BrianS The second module is 10GbE only
erson Ceri, as long as we're talking PowerConnect, I'm sure there are Cisco options available as well
Dell-KongY @rogerlund, do you mean PS series enclosures?
ceri Erson, I am talking whatever gets the job done ;-)
Dell-BrianS For our client chassis we use a stack of 4 6220s, with up to eight 10GbE uplinks
tonyhgras Do you have any recommendation against the use of 6224/48 switches for iSCSI?
rogerlund Yes. On a 10GbE switch, what's the best I/O and continuous transfer rate with a PS array with 15,000 RPM drives?
Dell-BrianS Yes
tonyhgras It's a fact that you need stackable switches for iSCSI, right?
Dell-BrianS The Cisco modules have similar 10GbE uplink options
tonyhgras I mean for EqualLogic
Dell-BrianS Stackable switches for iSCSI is no longer an absolute requirement for optimal connectivity; no longer required for VMware 4.0, that is. I guess I'm getting a little ahead
ceri Basically, I can get away with a pair of 10GbE uplinks from the blade chassis, I think. Preferably one each from a separate I/O module
tonyhgras Well, what change in that particular?
Dell-BrianS There is a requirement from EqualLogic for connectivity between switches
Dell-KongY @rogerlund, each GbE port on an EqualLogic PS6000 can do 100MBps+ for sequential operations
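Kong's per-port sequential figure (100 MB/s or so per GbE port) is easy to sanity-check from line rate. The overhead fraction below is an assumed ballpark for TCP/IP plus iSCSI framing, not a measured Dell number.

```python
# Rough sanity check of per-port iSCSI sequential throughput on GbE.
GBE_LINE_RATE_MBIT = 1000          # 1 GbE line rate, megabits per second
OVERHEAD = 0.08                    # assumed protocol overhead (TCP/IP + iSCSI)

def usable_mb_per_s(ports: int) -> float:
    """Approximate usable sequential throughput in MB/s across `ports` GbE links."""
    per_port_mbit = GBE_LINE_RATE_MBIT * (1 - OVERHEAD)
    return ports * per_port_mbit / 8   # convert bits to bytes

if __name__ == "__main__":
    print(f"1 port : {usable_mb_per_s(1):.0f} MB/s")   # ~115 MB/s
    print(f"4 ports: {usable_mb_per_s(4):.0f} MB/s")   # ~460 MB/s aggregate
```

So a four-port array like the PS6000 tops out well under a single 10GbE link for sequential work, which is why the 1GbE-versus-10GbE question mostly comes down to port count and cabling rather than per-stream speed.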
Dell-BrianS We used the stacking connector, but I believe that you can also use 10GbE links between the switches
tonyhgras There's a requirement for EqualLogic arrays that is difficult to meet without stacking the switches—all the ports in the same LAN
Dell-BrianS Ceri, yes, you can use a couple of 10GbE uplinks. We are looking at some of those connectivity options for our 4.0 update to the papers
ceri Dell-brians, just found the M6220 documentation, thanks ;-) M6220 is not 10GbE to the blade though, correct?
Dell-BrianS Tonyhgras, what is wrong with using 10GbE uplinks, instead of stacking modules?
erson Ceri, nope, only GbE
erson s/nope/yes
Dell-BrianS Ceri, M6220 is 1GbE to blade, correct
ceri Ack. Okay, M8024
Dell-BrianS 8024 is 10GbE all the way. You can have up to eight 10GbE uplink ports, if you use Fiber, or six if you use CX4 copper
tonyhgras The problem is in using LAGs from ESX
Dell-KongY Welcome brett!
erson Isn't it about time that you guys start hinting about upcoming PC models? ;)
ceri Looks expensive ;-)
Dell-KongY Welcome Hoosiercab!
Dell-BrianS Tony is correct for ESX 3.5; that goes away with 4.0
tonyhgras Good to hear that
Dell-BrianS Ceri, not as expensive as some...
erson Brians, with 10GBase-T it's only 4, right? On the M8024?
rogerlund The above 4900 can do 24 10GbE ports max.
Dell-BrianS Two modules, three ports on each module for uplink, for copper
erson When copper = CX4?
tonyhgras So, basically what change in 4.0 in iSCSI MPIO?
Dell-BrianS Tonyhgras, yes, MPIO
tonyhgras Can 4.0 do load balancing over iSCSI?
ceri Dell-brians, think I'll be looking for a quote tomorrow ;-)
Dell-KongY @ceri :) and @brians way to go!
erson Ceri, I was quoted about $3300 for the M8024
ceri Erson, really? That is decent
erson Yeah
ceri Dell-kongy, doesn't follow that I'm going to buy it ;-)
Dell-BrianS Does that $3300 include any uplink modules?
erson No
erson M8024 port configuration and modules; 24 auto-sensing 1/10GbE switching ports available; 16 internal server 1/10GbE ports; flexible media choices for up to two external 10GbE uplink modules per switch; 4-port SFP+ 10GbE module; 3-port CX4 10GbE copper module; 2-port 10GbE Base-T copper RJ45 module
erson SFP+ is slowly but surely taking over the 10GbE space
ceri Yeah, we are all Fibre. Are those uplink modules hot-swap?
Dell-BrianS I had not seen the two-port RJ45
erson Really nice to have the SFP+DA replacing CX4 as well
erson /me is hoping that the replacement for PC 6200 series will at least have the same module configuration as the M8024
Dell-BrianS No one else is speaking up, so I will have to say, I don't think the module is hot-swappable, but I don't know for sure.
ceri Doesn't really matter, someone will ask me, that's all!
tonyhgras Is the M8024 of any use for iSCSI right now?
erson The M8024 is hot swappable
ceri So I can get a pair of M8024s, a pair of pass-through GbE modules and something for FC in a PowerEdge M1000e, correct?
erson Yes. Make sure you get the matching mezzanine cards as well, obviously
ceri Okay, sold :)
Dell-BrianS And the LOM will of course be 1GbE
tonyhgras The M8024 is cheaper than the Ciscos?
erson For sure
tonyhgras Even the 3130g?
ceri Dell-brians, the LOM on which component? I'm not all that au fait with Dell hardware (yet), sorry
erson Tonyhgras, I think I've never seen a Cisco switch in the $3300 range, so probably yes. Maybe now that they took over Linksys switches and made them their SMB segment
erson Ceri, LAN On Motherboard. Two on the half-height blades and four on the full height
Dell-BrianS The onboard NICs on all the servers are 1GbE. Those will connect to I/O module A1 and A2 in the blade chassis
tonyhgras Thanks Erson. Anyway, the models for the blade enclosure are very limited
ceri Erson, ah, yes. That's what I was thinking; I need the pass-through I/O modules for
erson The PowerEdge R710 blade is a winner in my book with its 18 DIMM slots. The full-height blades have redundant mezzanine cards as well...something the half-height blades don't have, which would mean up to four 10GbE ports on a PowerEdge R710...perfect for redundant connections to iSCSI and internal network
tonyhgras Just a general question, what are people doing for connecting to iSCSI SANs from the blades; do they use integrated switches, or external ones?
erson Tonyhgras, usually external's a much more flexible solution
ceri I can't find a PowerEdge R710 blade. You mean the PowerEdge M710?
erson Ceri, of course...PowerEdge M710... M = blades and R = rack
Dell-BrianS The limitation in internal switches is the physical number of 1GbE ports
tonyhgras Think so, and that leads to the second question: which ones?
Dell-KongY A very nice document put out by Brians and team on networking with PowerEdge M1000e:
ceri Erson, okay, was confused ;-)
erson Tonyhgras, I'm leaning toward two 6224s
tonyhgras Or his big brother, 6248
erson /me has a PowerEdge M1000e and a PowerVault MD3000i and an EqualLogic PS6000e
Dell-KongY me thinks we need a topic on networking, interconnects, and switches :)
erson Tonyhgras, yes, if you need that many ports, yes
Dell-BrianS Wait until our 4.0 documents come out. ;=)
erson Make sure you stack the two switches for best performance/redundancy
ceri Erson, question is whether 24 DIMM slots in that space is more cost-effective or not. Almost certainly not if you're installing ESX on them, I guess
rogerlund Think about this, you’re using iSCSI, which is a cheaper solution than FC, but do not push your luck too much. The Cisco switches will outperform the other options
tonyhgras @brians, when can we expect that to happen?
Dell-BrianS When choosing external switches, you have to figure out how many links you have from the chassis to the external switches, and then from the switches to the EqualLogic arrays
ceri Dell-kongy, I've been comparing Dell, Sun, HP, and IBM, and, basically, if I can understand the interconnect documents, it makes it a lot easier to decide that I might want to buy it. IBM is a big lose on that point
erson Rogerlund, Cisco will outperform in features, not so much at all in regard to performance
ceri Dell-kongy, so such a session might well be a good idea
Dell-BrianS I don't think I can comment on dates, but our Web site said coming soon...
tonyhgras That's the point, Erson, is there any real performance advantage?
rogerlund If you start losing packets or having performance problems because of your cheap switch, then what? Never let your infrastructure be your weakest link
erson A switch doesn't lose packets just because it's not the most expensive switch on the market
tonyhgras @rogerlund, define cheap switches...
tonyhgras Dell's 6224 or M6220 can get the work done, right?
rogerlund Anything other than a Mitel, Cisco, etc.
erson Tonyghras, for sure
Dell-BrianS A long time ago, we chose the Cisco over PowerConnect for our bundles. One reason is because EqualLogic had not certified the PowerConnect. Now they have, and I see that as significant when choosing which switch
ceri If the buffers are deep enough to deal with bursts then you're probably just fine
rogerlund I had a Dell engineer tell me once not to use Dell; not say anything bad about them—this was with a PS 3000 series demo.
tonyhgras Well, can the 6224, for example, handle the traffic to three or four EqualLogic arrays?
Dell-BrianS I think that goes back to those early firmware versions. We use PowerConnect for all our infrastructure, and only add a few Cisco switches for specific bundle testing. I have been happy with the PowerConnects
tonyhgras The price difference is huge
rogerlund You want something that can handle maxed throughput all the time, without maxing the CPU
erson Shuffling packets at wire speed isn't particularly hard these days...the problem is all the features in the switches
tonyhgras Seems kind of a waste to me to use 10K switches for the iSCSI network
rogerlund I mean, if you spend $30K–60K times five for SANs, what's $10K–15K times two for two 10GbE switches?
Dell-BrianS Normally, your iSCSI bottleneck will be the back-end storage array, and not the network
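Brian's point that the array, not the network, is normally the bottleneck for random I/O can be shown with a back-of-the-envelope calculation. The per-spindle IOPS figure is a common rule of thumb for 15K RPM SAS drives and the I/O size is an assumption, not a measurement from the chat.

```python
# Back-of-the-envelope: spindle-limited IOPS vs. what the GbE links could carry.
DRIVE_IOPS_15K = 175        # assumed rule of thumb per 15K RPM SAS spindle
IO_SIZE_KB = 8              # assumed small random I/O size

def array_iops(drives: int) -> int:
    """Approximate random IOPS a spindle-bound array can deliver."""
    return drives * DRIVE_IOPS_15K

def network_iops(ports: int, usable_mb_per_s_per_port: float = 115.0) -> int:
    """How many 8 KB IOPS the GbE links could carry if they were the limit."""
    return int(ports * usable_mb_per_s_per_port * 1024 / IO_SIZE_KB)

if __name__ == "__main__":
    print(array_iops(16))    # 16-drive array: ~2,800 IOPS
    print(network_iops(4))   # 4 GbE ports:   ~58,880 IOPS of headroom
```

Under these assumptions the links have more than an order of magnitude of headroom over the spindles, which is why a mid-range switch is usually not the limiting factor for random iSCSI workloads.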
Dell-KongY We're getting close to closing time
erson That's why you pay for features when getting Cisco or Extreme; you get the performance with pretty much every brand. In the enterprise switch segment there are very few bad eggs
Dell-KongY I'd like to thank everyone for joining this chat, especially Brians from the VSE team. Thanks Brian!
ceri Yes, thanks Brian
erson Great job Brian!
Dell-BrianS I think the feature set of the PowerConnect and the Cisco blade switches are similar
rogerlund Follow me on twitter. Rogerlund
Dell-KongY Thanks everyone for your questions and insights
DELL-ScottH Hey, am I late for the chat?
Dell-KongY Erson, as always, you are $$$ :)
erson Hmm, need to find that review on the 6248 and equivalent switches from HP, Extreme, Cisco, and I think two more
Dell-KongY Thank you Puneet
ceri Agreed there too, thanks Erson
erson Scotth, at least for the official chat it's a perfect time to spill the beans on the PC road map :)
DELL-ScottH Lol
tonyhgras Thanks everybody; this was a really interesting chat
rogerlund Follow my blog:
Dell-KongY Oh, don't forget to join Jeffs on Thursday for a chat on iSCSI security and best practices
tonyhgras Scotth, when will we have the transcript?
erson I'm obviously there...sounds like a great chat
Dell-KongY Come with your questions; we have the director, the strategist, and the technologist!
Dell-BrianS In the lab we use *.*.*.* ;=) Is there a better way?
tonyhgras Can we have a second part on this topic once the documents for 4.0 are out?
Dell-KongY @tonyhgras, sure
Dell-BrianS That sounds good
Dell-KongY And we'll have Brian come back once those documents go live
tonyhgras Excellent
Dell-KongY Also, transcript of this chat will be available some time tomorrow
erson /me waves his Hyper-V flag
tonyhgras Thanks Kongy
Dell-KongY This was an open virtualization iSCSI talk; Hyper-V, Xen Server, and vSphere were all fair game
ceri Okay, g'night all. See you Thursday
Dell-KongY But we did have a fruitful talk on interconnects and switches
Dell-KongY G'nite Ceri
Dell-KongY You're welcome, Tonyhgras
erson Windows Server 2008 R2 will be RTM shortly with lots of new Hyper-V features, so you could probably bundle that with chat for the 4.0 documents regarding network, interconnects, and so forth
Dell-KongY @erson, it's like you know the Dell road map or something like it ;)
erson /me has an NDA in his back pocket; I do wish /me would work in this chat. I'm severely IRC damaged obviously
Dell-BrianS Goodnight, I need to go finish some edits on a document or something
Dell-KongY Thanks again, Brian
tonyhgras Okay, goodnight everybody, have a nice Canada day...
erson /me is dreaming of a double-width, full-height eight-CPU Nehalem-ex blade
Dell-KongY @erson, me too
erson With 16 DIMM slots per socket
Dell-KongY He he
erson Hmm, guessing it would be hard to fit though
Dell-KongY And hard to cool :(
erson Yes. I've seen some bits pointing toward it; it will be possible to have dual-socket Nehalem-EX
Dell-KongY And it would sound like a 747 taking off every time it was loaded :)
erson I'm assuming some Nehalem-ex blade is in the works since it will for sure be an awesome CPU
HoosierCAB Guess I'll be reading the transcript; stupid payroll system
Dell-KongY Sorry to hear that, Hoosiercab
erson Awesome chat today, everyone! Hope to see you all (and you who are reading this transcript) on July 2 at 3 PM CDT to talk about iSCSI security and best practices. Bedtime here in Sweden, so goodnight