Dennis Smith Welcome guys! We are going to give it a few more minutes for everyone to join
bala hi
bala sir
bala what is Dell Force?
Dennis Smith We're having a few technical problems; some people are having a hard time getting into the chat.
erson hi all
Lance Boley @bala Dell Force 10 is our networking product line.
Dennis Smith hi erson
Lance Boley Erson ...
Dell-Rod_Mercado hi there... Rod Mercado from Dell Networking here
erson bala: Dell bought the networking company Force10 Networks, and their products are now Dell's premier datacenter networking line
Lance Boley ROD ...
bala Lance Boley what are its features, sir?
Dell TechCenter Hey everyone
Dennis Smith Ok guys
Dennis Smith Today's chat is going to be on the Dell Force10 MXL 40GbE blade switch
Dennis Smith Joining us today are Rod and Manash from our networking team here at Dell
Dennis Smith Guys, would you like to introduce yourselves?
bala i am bala
DELL-Manash Hi Guys! I am Manash Kirtania. I am a product manager at Dell Networking and am currently managing MXL
Dell-Rod_Mercado Sure.... I'm Rod Mercado... Product Marketing for Dell Networking... I've been working on several of our blade switches including our most recent addition, the Dell Force10 MXL Blade Switch
Dennis Smith Ok, can one of you give us an overview of the MXL switch and what it brings to our customers?
bala The Force10 MXL blade interconnect is a 40Gb Ethernet switch for the M1000e blade chassis. Part of our Virtual Network Architecture, the MXL switch is designed as a high-performing, high-density switch that can support the most demanding workloads in traditional, virtual, public and private cloud environments
Dell-Rod_Mercado exactly... so it's a high performance high density switch that we are featuring for multiple environments...
Dennis Smith here's a link to the Dell VNA page on DTC del.ly/VNA
Dennis Smith or this if that shortcut doesn't work http://en.community.dell.com/techcenter/networking/w/wiki/3489.dell-virtual-network-architecture.aspx
bala usages
DELL-Manash Some of the key points are already stated on the slide... the MXL is a very high-bandwidth, 40G-capable blade switch that offers flexibility and modularity using FlexIO modules
erson I hope you are updating the M-Series I/O Guide with this new switch http://en.community.dell.com/techcenter/extras/m/white_papers/20074368.aspx
Dennis Smith I think that is in the works...Rod can probably comment on that
Dennis Smith Any of you guys have or plan on getting this switch?
bala costly?
erson We recently upgraded to 10GbE but didn't know that this one was about to be released
erson Not sure we would have gone for it anyway since we have PowerConnect everywhere else and 40GbE is a bit overkill for us at the moment (even with 40GbE to 4x10GbE adapters available)
Alexander Nimmannit We just got the M8024-k..... so probably not in the near future
erson yeah, we got the M8024-k as well
Dell-Rod_Mercado yes... The Blade IO Guide will be updated very soon
erson The Force10 MXL does bring something new and unique to the table though: quad-port 10GbE mezzanines and NDCs
Dell-Rod_Mercado One of the features of the MXL is high density... the 40GbE QSFP+ ports can be split using breakout cables... so you can get up to 24x10GbE ports from a single blade switch
erson If you need 4x10GbE per blade, this blade switch and quad-port mezzanines/NDCs are probably cheaper than two pairs of M8024-k and two dual-port mezzanines/NDCs per server
erson and it also gives you more room for any other interface you might need right now or in the future
erson like FibreChannel and/or Infiniband
DELL-Manash MXL allows you to double the server density in M1000e enclosures using 32 M420 quarter slot blades. For high density deployments like this, MXL offers high performance using 40G
erson or even more 10GbE-ports
DELL-Manash 40G ports can be realized as 4x10G using breakout cables
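[Editor's note: the breakout arithmetic discussed here works out as a quick sketch below. The six-QSFP total is an assumption based on the chat's "24x10GbE" and "240GE" figures, i.e. two fixed QSFP+ ports plus two FlexIO modules of two QSFP+ ports each.]

```python
# Assumed MXL external port configuration (see lead-in: not confirmed in the chat):
FIXED_QSFP = 2        # fixed 40GbE QSFP+ ports on the base switch
FLEXIO_MODULES = 2    # FlexIO expansion slots
QSFP_PER_MODULE = 2   # 2-port QSFP+ FlexIO module in each slot

total_qsfp = FIXED_QSFP + FLEXIO_MODULES * QSFP_PER_MODULE  # 6 QSFP+ ports
LANES_PER_QSFP = 4    # each 40GbE QSFP+ splits into 4x10GbE with a breakout cable

ten_gbe_ports = total_qsfp * LANES_PER_QSFP
print(ten_gbe_ports)  # 24 -> matches "up to 24x10GbE ports from a single blade switch"
```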
erson If I have a pair of Force10 MXLs and a chassis full of M420s, will every M420 get two 10GbE ports, or do I need two pairs of Force10 MXLs for that?
MichaelD Fabrics B and C need to be populated for every M420 to have all of its 10GbE ports active
MichaelD Fabric A is active on all, but half of the M420s would be on B and the other half on C for the secondary NICs
erson ok, so with the M420 I really need to populate fabrics A/B/C to maximize network bandwidth
MichaelD Correct
erson so what about mezzanines and NDCs for the Force10 MXL?
erson Can I use the Force10 MXL in fabric A with the latest revision of M1000e chassis?
MichaelD I believe the MXL can be used in fabric A based on what I've read
Dell-Rod_Mercado For FabricA two MXLs provide full redundancy for the M420.
erson What brands of mezzanines will be available with quad-port 10GbE to be used with the Force10 MXL in fabric B/C?
erson and the same question but with regards to NDCs (or SNA if you prefer that acronym)
Dell-Rod_Mercado We don't currently have a quad-port 10GE mezzanine nor Select Network Adapter
erson ok, so what is the point of a 32-port 10GbE internal switch then?
Dell-Rod_Mercado If we did the MXL would be able to support it though. On Fabric A or Fabrics B and C
MichaelD IIRC the M420 can only do 4 interfaces (2 A, 2 B or C) total
MichaelD expansion capacity for the future would be one reason
Dell-Rod_Mercado For now, the M420 is the only device that requires 32 internal ports.
erson ok, that just blew my mind... why on earth release a switch that supports quad-port 10GbE mezzanines/NDCs but not release such mezzanines/NDCs
erson You certainly don't need 240GbE external ports if you can only use 160GbE of internal bandwidth (using dual-port 10GbE)
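[Editor's note: erson's 160GbE figure can be sanity-checked with a little arithmetic. This sketch assumes 16 half-height slots and that a dual-port mezzanine contributes one 10GbE port to each switch of the redundant fabric pair; the 240GbE external figure assumes six 40GbE QSFP+ ports.]

```python
HALF_HEIGHT_SLOTS = 16
PORT_SPEED_GBE = 10

# Dual-port mezzanine: one port goes to each MXL of the redundant pair,
# so each switch sees one 10GbE port per half-height slot.
internal_used_gbe = HALF_HEIGHT_SLOTS * 1 * PORT_SPEED_GBE   # 160 GbE per switch

# A quad-port mezzanine would give each switch two ports per slot,
# filling all 32 internal ports (320 GbE) -- the case the MXL is built for.
internal_max_gbe = HALF_HEIGHT_SLOTS * 2 * PORT_SPEED_GBE    # 320 GbE per switch

external_gbe = 6 * 40                                        # 240 GbE external

print(internal_used_gbe, internal_max_gbe, external_gbe)  # 160 320 240
```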
erson There must be some quadport 10GbE mezzanines/NDCs in the near future?
MichaelD couldn't the full-heights also make use of all 32 internal ports? I thought they had 4 interfaces each to the A, B and C fabrics
erson MichaelD: nope, that doesn't matter, since a full-height is just the same as two half-heights.
erson For a pair of Force10 MXLs in, say, Fabric B, you have 4 internal 10GbE ports available for each half-height slot's Fabric B mezzanine (excluding the M420, which is a special-case server).
Dell-Rod_Mercado Right now we needed 32 ports for the M420. Our chassis is capable of quad-port 10GE on Fabrics A, B, and C. The intention is that the switch helps future-proof your infrastructure as demands increase.
erson Quad-port 1GbE uses two 25x25mm network controllers. Intel for example has dual-port 10GbE controllers available with the same 25x25mm size. So it's certainly possible to make quadport 10GbE mezzanines (and possibly NDCs).
erson What are you future-proofing if you've already filled your chassis with M420s? The chassis? Even with M420s you're still only using half of the internal ports.
Dell-Rod_Mercado It's certainly possible and something that our development team is looking into. At the moment, the MXL is designed to work with our latest servers. Once our server gets refreshed again, we will be sure to have networking that is also completely interoperable.
Dell-Rod_Mercado For the M420s we use the entire internal bandwidth.
erson how many 10GbE ports can each M420 have?
Dell-Rod_Mercado 32 servers in the chassis on fabric A connect up to two MXL with 32 internal ports each
Dell-Rod_Mercado Each M420 has a dual port 10GE LOM. So two ports of 10GE for each M420
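[Editor's note: the M420 Fabric A numbers stated above line up as a short sketch, assuming a fully populated chassis of 32 quarter-height blades and a redundant pair of MXLs.]

```python
M420_SERVERS = 32          # quarter-height blades in a full M1000e chassis
LOM_PORTS_PER_SERVER = 2   # dual-port 10GbE LOM per M420
MXL_PAIR = 2               # two MXLs in Fabric A for redundancy

total_server_ports = M420_SERVERS * LOM_PORTS_PER_SERVER   # 64 ports total
ports_per_switch = total_server_ports // MXL_PAIR          # 32 per switch

print(ports_per_switch)  # 32 -> each MXL's 32 internal ports are fully used
```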
erson Yeah, but the mezzanines for a full chassis of M420s are split between fabrics B and C.
Dell-Rod_Mercado Also, regarding the 240GE of possible uplinks: I think many customers would use some uplink ports for stacking. You could have 8 ports of 10GE for the uplink and then do 80GE stacking (four QSFP+ ports).
erson But you're right about fabric A. That would be a valid use case.
erson how many Force10 MXL can I stack (I already know that answer) but this should be in the chat log :)
Dell-Rod_Mercado 6 switches in a stack
erson Dennis Smith: dude, you told us when we asked you at the Dell Storage Forum that there were numerous brands of quad-port 10GbE mezzanines being released. Perhaps you thought we meant the dual-port 10GbE mezzanines that already existed?
Dennis Smith Hmm,
Dennis Smith I think you are right... I was confused :)
DELL-Manash Another point worth mentioning here is that MXL supports converged LAN and SAN traffic
DELL-Manash Are any of you using converged IO?
Alexander Nimmannit We're doing FCoE at the moment but looking more towards an NFS/iSCSI future
MichaelD Nope, and I've been fighting against converging them.
erson One thing that isn't mentioned in the slide is that you can only use one 4-port 10GBASE-T module per blade switch
DELL-Manash @erson, you are right...only one 10G T module per switch
MichaelD We had a quasi converged network when we first deployed iSCSI and it was a pure nightmare as every little problem on the LAN side caused blips on the SAN
erson are the modules also line-rate, even the 80Gbit 2-port QSFP+?
Dell-TrentRocks The 10GBASE-T limitation is due to power. We had the same limitation on the M8024-k, and that is why we had integrated SFP+ ports
erson The 32 internal 10GbE and two fixed external QSFP+ are line-rate according to the data sheet
Dell-TrentRocks Question for erson and the rest of the chatroom: I find that people are still nervous about converged IO (SAN and LAN on the same wire). Would a quad-port adapter help with that? Two ports could be dedicated to SAN and the other two to LAN.
Dell-TrentRocks Is that a valid use case?
Dell-TrentRocks MichaelD - did you use DCB on the Ethernet network or was it regular Ethernet?
MichaelD it was regular ethernet with VLANing
DELL-Manash @Alexander, can you share what has been the primary driver for moving from FCoE towards NFS/iSCSI?
Dell-TrentRocks I think that is the problem. EqualLogic is adamant that when using converged iSCSI it has to be done with DCB.
Dennis Smith Ok guys we have about 5 minutes left...any other questions? Rod, Trent, Manash anything else you guys want to cover
erson I have no problem with SAN and LAN on the same wire but right now we have dedicated NICs for iSCSI.
Dell-TrentRocks @erson, why dedicated NICs?
Dell-TrentRocks @erson, are you using DCB on the network?
erson We had enough NICs so we saw no point in mixing it up if we didn't need to.
erson no, we're not using DCB
Dell-TrentRocks How much headroom do you have on your network? How much extra bandwidth do you have unused to allow for when congestion happens?
erson We have a dedicated network for EqualLogic with stacked rack-based switches.
Dennis Smith Thanks everyone for joining. As usual, the transcript will be posted on Dell TechCenter in the next few days. Join us next week as we talk about the Converged Blade Data Center!
erson Next time we're surely going for the upcoming blade chassis-based EqualLogic, so that will require us to redesign this. Not sure right now what we will do then.
erson Thanks to Rod, Trent and Manash for answering all our questions.
erson Dennis: I hope @DellTechCenter announces on Twitter when the I/O Guide is updated with the Force10 MXL
Dennis Smith Erson, I'll let you know as soon as I do
Dennis Smith See everyone next week!
erson Dennis: great
erson bye all