
VLAN management related tasks. The external ports of the PowerEdge M I/OA are automatically placed in a single link aggregation group (LAG), so the Spanning Tree Protocol is not required. The PowerEdge M I/OA can use Data Center Bridging (DCB) and Data Center Bridging Exchange (DCBX) to support a converged network architecture.
The PowerEdge M I/OA provides connectivity internally to the CNA/network adapters and externally to upstream network devices. Internally, the PowerEdge M I/OA provides thirty-two (32) 10 Gigabit Ethernet connections that can carry basic Ethernet traffic, iSCSI storage traffic, or FCoE storage traffic. In a typical PowerEdge M1000e configuration with 16 half-height blade servers, ports 1-16 are used and ports 17-32 are disabled. If quad-port CNA/network adapters or quarter-height blade servers are used, then ports 17-32 are enabled.
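As a minimal sketch of the port-enablement rule described above (the helper below is purely illustrative, not a Dell tool; its name and inputs are hypothetical):

```python
# Minimal sketch of the M I/O Aggregator internal port rule described above.
# Hypothetical helper, not a Dell API: dual-port adapters in half-height blades
# use internal ports 1-16, while quad-port CNA/network adapters or
# quarter-height blades also enable ports 17-32.

def enabled_internal_ports(quad_port_adapters: bool, quarter_height_blades: bool) -> range:
    """Return the internal 10GbE ports expected to be enabled."""
    if quad_port_adapters or quarter_height_blades:
        return range(1, 33)   # ports 1-32 enabled
    return range(1, 17)       # typical 16 half-height, dual-port case: ports 1-16

print(list(enabled_internal_ports(False, False)))  # ports 1-16
print(list(enabled_internal_ports(True, False)))   # ports 1-32
```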
The PowerEdge M I/OA includes two integrated 40Gb Ethernet ports on the base module. These ports can be used in the default configuration with a 4 x 10Gb breakout cable to provide four 10Gb links for network traffic. Alternatively, these ports can be used as 40Gb links for stacking. The Dell PowerEdge M I/OA also supports three types of add-in expansion modules, called FlexIO expansion modules: the 4-port 10GBASE-T FlexIO module, the 4-port 10G SFP+ FlexIO module, and the 2-port 40G QSFP+ FlexIO module.
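The resulting external 10GbE link count can be illustrated with a small sketch; the module names come from the list above, while the helper itself and the assumption that every 40Gb QSFP+ port is fitted with a 4 x 10Gb breakout cable are hypothetical:

```python
# Illustrative count of external 10GbE links on the M I/O Aggregator, assuming
# every 40Gb QSFP+ port is broken out into four 10Gb links. Not a Dell tool.
FLEXIO_10GBE_LINKS = {
    "4-port 10GBASE-T": 4,
    "4-port 10G SFP+": 4,
    "2-port 40G QSFP+": 8,   # 2 x 40Gb, each broken out into 4 x 10Gb
}

def external_10gbe_links(flexio_modules):
    base = 2 * 4   # two integrated 40Gb ports, each with a 4 x 10Gb breakout cable
    return base + sum(FLEXIO_10GBE_LINKS[m] for m in flexio_modules)

print(external_10gbe_links([]))                     # default configuration: 8 links
print(external_10gbe_links(["4-port 10G SFP+"]))    # with one SFP+ module: 12 links
```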
The PowerEdge M I/OA modules can be managed through the PowerEdge M1000e Chassis Management Controller (CMC) GUI. The out-of-band management port on the PowerEdge M I/OA is also reached through the CMC's management port, so this single management port on the CMC provides management connections to all I/O modules within the PowerEdge M1000e chassis.
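As one hedged example of that management path, I/O module information can be queried remotely against the CMC with RACADM; the address and credentials below are placeholders, and the exact command syntax should be confirmed against the CMC documentation for the firmware in use:

```python
# Hedged example: query I/O module information through the CMC's management
# port from a management station using remote RACADM. The IP address and
# credentials are placeholders (factory defaults shown); verify the racadm
# syntax against your CMC firmware documentation.
import subprocess

cmc_ip = "192.0.2.10"   # placeholder CMC management IP
cmd = ["racadm", "-r", cmc_ip, "-u", "root", "-p", "calvin", "getioinfo"]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
```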
For more information on Dell PowerEdge M I/O Aggregator, see
http://www.dell.com/us/business/p/poweredge-m-io-aggregator/pd.
Dell Networking MXL 10/40GbE Blade Switch: The MXL switch provides 1/10/40GbE connectivity. The switch supports 32 internal 1/10GbE ports as well as two fixed 40GbE QSFP+ ports, and offers two bays for optional FlexIO modules. To ensure room to grow, uplinks via the FlexIO modules can be added or swapped as needed in the future. Choose from 2-port QSFP+, 4-port SFP+, or 4-port 10GBASE-T FlexIO modules to expand and aggregate (bi-directional) bandwidth up to 160 Gigabits per second. The MXL switch provides the flexibility to mix and match the FlexIO module types.
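One way to reach the 160 Gbps figure quoted above is to populate both FlexIO bays with the 2-port QSFP+ module; the short sketch below only restates that arithmetic and makes no claims beyond the numbers given in the text:

```python
# Worked arithmetic for the "up to 160 Gigabits per second" FlexIO figure,
# assuming both bays hold the 2-port QSFP+ module. Illustrative only.
flexio_bays = 2
ports_per_qsfp_module = 2
gbps_per_qsfp_port = 40

print(flexio_bays * ports_per_qsfp_module * gbps_per_qsfp_port)  # 160 Gbps
```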
Like the M I/OA above, the MXL switch includes two integrated 40Gb Ethernet ports on the base module. These ports are used in the default configuration with a 4 x 10Gb breakout cable to provide four 10Gb links for network traffic. Alternatively, these ports can be used as 40Gb links for stacking. The MXL switch provides stacking capability for up to six interconnected blade switches, allowing both stacking across chassis and local switching of traffic within the chassis. For more information, see
http://www.dell.com/us/business/p/force10-mxl-blade/pd.
Dell Networking S4810 Switches: The Dell Networking S-Series S4810 is an ultra-low-latency 10/40
GbE Top-of-Rack (ToR) switch purpose-built for applications in high-performance data center and
computing environments. Leveraging a non-blocking, cut-through switching architecture, the S4810
switch delivers line-rate L2 and L3 forwarding capacity with ultra-low latency to maximize network
performance. The compact S4810 switch design provides industry-leading density with 48 dual-speed 1/10 GbE (SFP+) ports as well as four 40GbE QSFP+ uplinks to conserve valuable rack space and simplify the migration to 40Gbps in the data center core. (Each 40GbE QSFP+ uplink can support four 10GbE ports with a breakout cable.)
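A small sketch of the resulting 10GbE port density, using only the numbers quoted above (the calculation is illustrative, not a configuration tool):

```python
# Illustrative S4810 port-density arithmetic: 48 SFP+ ports plus four QSFP+
# uplinks, each of which can be broken out into four 10GbE ports.
sfp_plus_ports = 48
qsfp_uplinks = 4
breakout_per_qsfp = 4

print(sfp_plus_ports + qsfp_uplinks * breakout_per_qsfp)  # up to 64 x 10GbE ports
```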