Fri, 14 Jul 2023 13:16:33 -0000
The Dell Networking MX8116n FEM acts as an Ethernet repeater, taking signals from an attached compute sled and repeating those signals to the associated lane on the external QSFP56-DD connector. The MX8116n FEM includes two QSFP56-DD interfaces, each providing up to four 100 Gbps connections to the chassis, and eight internal 100 GbE server-facing ports.
The Dell PowerSwitch Z9432F-ON fixed switch serves as the designated FSE of the MX platform and can support MX chassis deployed with 100 GbE or 25 GbE-based compute sleds. The switch comes equipped with 32 QSFP56-DD ports that provide uplinks, Virtual Link Trunking interconnect (VLTi), and fabric expansion connections.
The goal of this blog is to help you understand the port mapping of the MX8116n FEM, which connects internally to the NICs in the compute sleds on one side and externally to the Fabric Switching Engine (FSE) on the other.
Figure 1. Port mapping of dual MX8116n FEM ports to NIC ports
Sled 1 through Sled 4 use Port 2 on the MX8116n, while Sled 5 through Sled 8 use Port 1.
Figure 2. MX8116n internal port mapping
The MX7000 chassis supports up to four MX8116n FEMs in Fabric A and Fabric B. Figure 3 shows one MX8116n FEM module with two QSFP56-DD 400 GbE ports, each of which can be split into 4x 100 GbE on the FSE-facing side, and 8x 100 GbE internal sled-facing NIC ports.
Figure 3. MX7000 chassis front and back physical view with IOMs and sleds port mapping
The MX8116n FEM can operate at 25 GbE and 100 GbE. The 25 GbE solution supports both dual and quad port NICs, while the 100 GbE solution is supported on dual port NICs only. The following examples in this blog show the PowerSwitch Z9432F-ON port mapping for 100 GbE dual port NICs using QSFP56-DD cables, and for 25 GbE dual port and quad port NICs using QSFP56-DD and QSFP28-DD cables.
The interfaces used on the Z9432F-ON are arbitrary. QSFP56-DD interfaces on the Z9432F-ON can be connected in any order.
Each port group in the PowerSwitch Z9432F-ON contains two physical interfaces. The following examples show the first port group, 1/1/1, which contains interfaces 1/1/1-1/1/2, and the last port group, 1/1/16, which contains interfaces 1/1/31-1/1/32. The port mode for each interface is configured in the port-group configuration.
The following port group settings are required for 100 GbE dual port mezzanine cards for the Z9432F-ON:
port-group 1/1/1
profile unrestricted
port 1/1/1 mode Eth 100g-4x
port 1/1/2 mode Eth 100g-4x
port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 100g-4x
port 1/1/32 mode Eth 100g-4x
Once the port modes are configured and the connections are made, the MX8116n ports auto-negotiate to match the port operating mode of the Z9432F-ON interfaces. The internal server-facing ports of the MX8116n auto-negotiate to the mezzanine card port speed of 100 GbE.
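To confirm that the port modes have been applied on the Z9432F-ON, you can check the port-group and breakout interfaces from the CLI. This is only a quick sanity check using standard SmartFabric OS10 show commands; the exact output columns vary by OS10 release:
OS10# show port-group
OS10# show interface status
The port groups should report the configured profile and port modes, and the breakout interfaces should appear once the modes are applied.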
Figure 4 shows the interface numbering for each sled and corresponding MX8116n port when using a QSFP56-DD based optic or cable:
Figure 4. Z9432F-ON port mapping for 100 GbE solution
The following port group settings are required for 25 GbE quad port NIC using QSFP56-DD on the Z9432F-ON:
port-group 1/1/1
profile restricted
port 1/1/1 mode Eth 25g-8x
port 1/1/2 mode Eth 400g-1x
port-group 1/1/16
profile restricted
port 1/1/31 mode Eth 25g-8x
port 1/1/32 mode Eth 400g-1x
Figure 5 shows the interface numbering for each sled and corresponding MX8116n port when using a QSFP56-DD based optic or cable:
Figure 5. Z9432F-ON Port mapping for 25 GbE quad port solution for QSFP56-DD based optics and cables
The 25 GbE quad port NIC solution can also use QSFP28-DD based optics and cables. The following configuration shows the final state required for 25 GbE quad port NICs:
port-group 1/1/1
profile restricted
port 1/1/1 mode Eth 25g-8x
port 1/1/2 mode Eth 400g-1x
port-group 1/1/16
profile restricted
port 1/1/31 mode Eth 25g-8x
port 1/1/32 mode Eth 400g-1x
Figure 6. Z9432F-ON Port mapping for 25 GbE quad port solution with QSFP28-DD based optics and cables
The following port group settings are required for 25 GbE dual port NICs using QSFP56-DD on the Z9432F-ON. For the required 25g-4x port mode operation, the profile should stay in the default unrestricted setting; unlike quad port deployments, dual port deployments can use both even and odd ports on the Z9432F-ON:
port-group 1/1/1
profile unrestricted
port 1/1/1 mode Eth 25g-4x
port 1/1/2 mode Eth 400g-1x
port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 25g-4x
port 1/1/32 mode Eth 400g-1x
Figure 7 shows the interface numbering for each sled and corresponding MX8116n port when using a QSFP56-DD based optic or cable:
Figure 7. Z9432F-ON Port mapping for 25 GbE dual port solution for QSFP56-DD based optic or cable
The 25 GbE dual port NIC solution can also use QSFP28-DD based optics and cables. The following configuration shows the final state required for 25 GbE dual port mezzanine cards using QSFP28-DD; the profile stays in the default unrestricted setting:
port-group 1/1/1
profile unrestricted
port 1/1/1 mode Eth 25g-4x
port 1/1/2 mode Eth 400g-1x
port-group 1/1/16
profile unrestricted
port 1/1/31 mode Eth 25g-4x
port 1/1/32 mode Eth 400g-1x
Figure 8. Z9432F-ON Port mapping for 25 GbE dual port solution with QSFP28-DD based optics and cables
Dell PowerEdge MX Networking Deployment Guide
Dell Technologies PowerEdge MX 100 GbE Solution with external FSE blog
Dell PowerEdge MX7000 Chassis User Guide
Mon, 26 Jun 2023 20:31:38 -0000
The Dell PowerEdge MX platform is advancing its position as the leading high-performance data center infrastructure by introducing a 100 GbE networking solution. This evolved networking architecture not only provides the benefit of 100 GbE speed but also increases the number of MX7000 chassis within a Scalable Fabric. The 100 GbE networking solution brings a new type of architecture, starting with an external Fabric Switching Engine (FSE).
The diagram shows only one connection on each MX8116n for simplicity. See the port-mapping section in the networking deployment guide here.
Figure 1. 100 GbE solution example topology
The key hardware components for 100 GbE operation within the MX platform are briefly described below.
The MX8116n FEM includes two QSFP56-DD interfaces, each providing up to 4x 100 Gbps connections to the chassis, along with 8x 100 GbE internal server-facing ports for 100 GbE NICs and 16x 25 GbE internal ports for 25 GbE NICs.
The MX7000 chassis supports up to four MX8116n FEMs in Fabric A and Fabric B.
Figure 2. MX8116n FEM
The MX8116n FEM components are labeled in the preceding figure.
Note: The 100 GbE Dual Port Mezzanine card is also available on the MX750c.
Figure 3. Dell PowerEdge MX760c sled with eight E3.s SSD drives
The Z9432F-ON provides state-of-the-art, high-density 100/400 GbE ports and a broad range of functionality to meet the growing demands of modern data center environments. The compact 1RU design offers an industry-leading density of 32 ports of 400 GbE in QSFP56-DD, 128 ports of 100 GbE, or up to 144 ports of 10/25/50 GbE (through breakout). Its non-blocking switching fabric delivers up to 25.6 Tbps (full duplex) of line-rate performance under full load. The switch provides L2 multipath support using Virtual Link Trunking (VLT) and Routed VLT, along with scalable L2 and L3 Ethernet switching with QoS and a full complement of standards-based IPv4 and IPv6 features, including OSPF and BGP routing support.
Figure 4. Dell PowerSwitch Z9432F-ON
Note: Mixed dual port 100 GbE and quad port 25 GbE mezzanine cards connecting to the same MX8116n are not a supported configuration.
There are four deployment options for the 100 GbE solution, and every option requires servers with a dual port 100 GbE mezzanine card. You can install the mezzanine card in mezzanine slot A, slot B, or both. When you use the Broadcom 575 KR dual port 100 GbE mezzanine card, set the Z9432F-ON port-group profile to unrestricted and configure the port mode as 100g-4x.
PowerSwitch CLI example:
port-group 1/1/1
profile unrestricted
port 1/1/1 mode Eth 100g-4x
port 1/1/2 mode Eth 100g-4x
Note: In a 100 GbE solution deployment, a maximum of 14 chassis is supported in a single fabric, and a maximum of 7 chassis is supported in a dual fabric, using the same pair of FSEs.
In a single fabric deployment, two MX8116n FEMs are installed in either Fabric A or Fabric B, and the 100 GbE mezzanine card is installed in the corresponding sled mezzanine slot (slot A or slot B).
Figure 5. 100 GbE Single Fabric
In this option, four MX8116n FEMs (2x in Fabric A and 2x in Fabric B) are installed and combined to connect to the Z9432F-ON external FSEs.
Figure 6. 100 GbE Dual Fabric combined Fabrics
In this option, four MX8116n FEMs (2x in Fabric A and 2x in Fabric B) are installed and connected to two different networks. In this case, the MX760c server module has two mezzanine cards, with each card connected to a separate network.
Figure 7. 100 GbE Dual Fabric separate Fabrics
In this option, two MX8116n FEMs (1x in Fabric A and 1x in Fabric B) are installed and connected to two different networks. In this case, the MX760c server module has two mezzanine cards, each connected to a separate network.
Figure 8. 100 GbE Dual Fabric single FEM in separate Fabrics
Dell PowerEdge MX Networking Deployment Guide
A chapter about the 100 GbE solution with an external Fabric Switching Engine
Mon, 22 May 2023 18:49:51 -0000
Network interface card partitioning (NPAR) allows users to minimize the number of physical network interface cards (NICs) deployed and separates Local Area Network (LAN) and Storage Area Network (SAN) connections. NPAR improves bandwidth allocation, network traffic management, and utilization in virtualized and non-virtualized network environments. The number of physical servers may be smaller, but the demand for NIC ports continues to grow.
The NPAR feature allows you to use a single physical network adapter for multiple logical networking connections.
Creating multiple virtual NICs for different applications uses Operating System (OS) resources. Deploying NPAR on the NIC will reduce the OS resource consumption and put most of the load on the NIC itself.
Note: Not every implementation requires NPAR. NPAR benefits depend on the server NIC and the network traffic that should run on that NIC.
This blog describes how to validate, enable, and configure NPAR on a Dell PowerEdge MX platform through the server System Setup and the MX compute sled Server Templates within Dell OpenManage Enterprise-Modular (OME-M).
The MX750c compute sled and QLogic-41262 Converged Network Adapter (CNA) have been used in the deployment example described throughout this blog.
NPAR is not enabled by default on the MX compute sled NIC. This section demonstrates how to verify the current settings through the following methods:
Note: The following figures show NIC status without NPAR enabled for all the techniques.
MX 750c Sled server iDRAC settings
OME-M server Edit Template settings
Windows Network adapter settings
VMware Network Adapters settings
You can configure NPAR device settings and NIC Partitioning on the MX compute sled through the server System Setup wizard.
The QLogic-41262 CNA shown in this example supports eight partitions per CNA port. In the following deployment example, we create four NIC partitions. However, only two partitions are used: one for Ethernet traffic and one for FCoE traffic.
To enable and configure NPAR on a server NIC through the System Setup wizard:
To configure the device settings:
Note: Do not enable NParEP-Mode. Enabling NParEP-Mode will create eight partitions per CNA port.
To configure NIC partitioning for Partition 1:
To configure NIC partitioning for Partition 2:
In this example, Partition 3 and Partition 4 are not used. To disable NPar for Partitions 3 and 4:
To configure the second CNA port:
The MX compute sled NIC is now configured for NPAR. The following sections describe how to confirm the NIC status with NPAR enabled.
To confirm the NPAR status on the server iDRAC:
MX750c Sled server iDRAC settings
To confirm the NPAR status in OME-M:
MX OME-M Compute Sled NIC-NPAR enabled
To confirm the NPAR status in the Windows Control Panel:
Windows Network adapter settings
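Each enabled partition also appears as a separate network adapter from the Windows command line, which can be handy for scripted checks. A minimal PowerShell example (not part of the original walkthrough):
PS C:\> Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed
With NPAR enabled on the QLogic-41262, the output lists one adapter per partition rather than one per physical port.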
To confirm the NPAR status in VMware vSphere ESXi:
VMware Network adapters settings
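On the ESXi host itself, the partitions are presented as additional vmnics. As a hedged example (vmnic numbering depends on the host), the standard esxcli inventory command shows them:
[root@esxi:~] esxcli network nic list
Each NPAR partition is listed as its own vmnic with its driver and configured speed.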
OME-M 1.40.00 introduces NPAR NIC configurations in the MX compute sled server template GUI. To configure NPAR settings on the PowerEdge MX platform, you must have administrator access.
Note: Ensure NPAR is enabled through System Setup before using the MX server template GUI for NPAR settings. The MX server template GUI allows users to modify the NIC attributes. To configure NPAR attributes on the MX compute sled using the server template GUI:
OME-M Server Template Edit settings
The PowerEdge MX platform supports profiles with OME-M 1.30.00 and later. OME-M creates and automatically assigns a profile once the server template is deployed successfully. Profiles can be used to deploy with modified attributes on server templates, including NPAR settings. A single profile can be applied to multiple server templates with only modified attributes, or with all attributes.
The following figure shows two profiles from template deployments that have been created and deployed.
Note: The server template cannot be deleted until it is unassigned from a profile.
MX Server Template Profiles
The following sections provide information to assist in choosing the appropriate NIC for your environment.
The following table shows NPAR support details for MX-qualified NICs:
Vendor | Model | Max Speed | Ports | NIC Type | NPAR | Number of Partitions |
Marvell/QLogic | QL41262 | 10/25GbE | 2 | CNA | Yes | 8/port – Total 16 |
Marvell/QLogic | QL41232 | 10/25GbE | 2 | NIC | Yes | 8/port – Total 16 |
Broadcom | 57504 | 10/25GbE | 4 | NIC | Yes | 4/port – Total 16 |
Intel | XXV710 | 10/25GbE | 2 | NIC | No | N/A |
Mellanox | ConnectX-4 LX | 10/25GbE | 2 | NIC | No | N/A |
NIC NPAR and Cisco VIC both provide multiple network connections using limited physical ports of the NIC. In the comparison table below, some key differences between NIC NPAR and Cisco VIC are highlighted.
NIC NPAR | Cisco VIC |
Industry standard. Works with any supported network switch. | Cisco proprietary LAN and SAN interface card for UCS and modular servers. |
Up to four or up to eight physical function vNIC per adapter port. | Up to 256 PCI-e devices can be configured. Physical limitation on performance of the traffic based on the available bandwidth from the network link. |
Configured in BIOS or iDRAC found in the server. | Requires UCS Fabric Interconnect to associate vNIC ports. |
MAC and Bandwidth allotment assigned and configured in BIOS. | MAC and Bandwidth allotment are determined by a service profile. |
NIC port enumeration is predictable and provides uniform device name assignments across a population of identical, freshly deployed ESXi hosts. | Cisco UCS can manage the order in which NICs are enumerated in ESXi. |
Below are the NIC teaming options available on MX compute sleds.
Teaming option | Description |
No teaming | No NIC bonding, teaming, or switch-independent teaming |
LACP teaming | LACP (also called 802.3ad or dynamic link aggregation). NOTE: The LACP fast timer is not currently supported. A switch-side configuration sketch follows this table. |
Other | Other NOTE: If using the Broadcom 57504 Quad-Port NIC and two separate LACP groups are needed, select this option, and configure the LACP groups in the Operating System. Otherwise, this setting is not recommended as it can have a performance impact on link management. |
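For the LACP teaming option in the preceding table, the switch side of the team is a standard LACP port-channel. In SmartFabric mode, OME-M builds the port-channel for you; in Full Switch mode, a minimal SmartFabric OS10 sketch looks like the following (the port-channel number, member interface, and VLAN are placeholders, not values taken from this blog):
OS10(config)# interface port-channel 10
OS10(conf-if-po-10)# switchport mode trunk
OS10(conf-if-po-10)# switchport trunk allowed vlan 10
OS10(conf-if-po-10)# exit
OS10(config)# interface ethernet 1/1/1
OS10(conf-if-eth1/1/1)# no switchport
OS10(conf-if-eth1/1/1)# channel-group 10 mode active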
Profiles Deployment: Profiles with server template deployment
VMware knowledge base: How VMware ESXi determines the order in which names are assigned to devices (2091560)
Dell EMC OpenManage Enterprise-Modular Edition for PowerEdge MX7000 Chassis User's Guide
Wed, 03 May 2023 15:47:10 -0000
Advanced Network Partitioning (NPAR) is now available in both SmartFabric and Full switch modes with the release of OME-M 2.10.00.
The following diagram shows how the MX9116n internal Ethernet port connected to the Broadcom 57504 25 GbE quad port NIC is split into eight partitions:
Figure 1. Advanced NPAR NIC Port Partitions
The Broadcom 57504 quad port mezzanine card is used in this example and is the only card that supports Advanced NPAR. The following combinations are supported:
The Advanced NPAR setting and the configuration workflow are shown in the following image:
For more information, see the latest Dell Technologies PowerEdge MX Networking Deployment Guide. It has information on the following:
See also this blog post:
Dell Technologies PowerEdge MX Platform: Network Interface Card Partitioning
Thu, 09 Feb 2023 20:05:01 -0000
You can integrate the Dell PowerEdge MX platform running Dell SmartFabric Services (SFS) with Cisco Application Centric Infrastructure (ACI). This blog shows an example setup using PowerEdge MX7000 chassis that are configured in a Multi Chassis Management Group (MCM).
We validated the example setup using the following software versions:
The validated Cisco ACI environment includes a pair of Nexus C93180YC-EX switches as leafs. We connected these switches to a single Nexus C9336-PQ switch as the spine using 40GbE connections. We connected MX9116n FSE switches to the C93180YC-EX leafs using 100GbE cables.
The Dell Technologies PowerEdge MX with Cisco ACI Integration blog provides an overview of the configuration steps for each of the components:
For more detailed configuration instructions, refer to the Dell EMC PowerEdge MX SmartFabric and Cisco ACI Integration Guide.
Dell EMC PowerEdge MX Networking Deployment Guide
Dell EMC PowerEdge MX SmartFabric and Cisco ACI Integration Guide.
Networking Support & Interoperability Matrix
Dell EMC PowerEdge MX VMware ESXi with SmartFabric Services Deployment Guide
Thu, 09 Feb 2023 20:05:01 -0000
This paper provides an example of integrating the Dell PowerEdge MX platform running Dell SmartFabric Services (SFS) with Cisco Application Centric Infrastructure (ACI).
The example in this blog assumes that the PowerEdge MX7000 chassis are configured in a Multi Chassis Management Group (MCM) and that you have a basic understanding of the PowerEdge MX platform.
As part of the PowerEdge MX platform, the SmartFabric OS10 network operating system includes SmartFabric Services, a network automation and orchestration solution that is fully integrated with the MX platform.
Configuration of SmartFabric on PowerEdge MX with Cisco ACI makes the following assumptions:
The example setup is validated using the following software versions:
Refer to the Dell Networking Support and Interoperability Matrix for the latest validated versions.
The validated Cisco ACI environment includes a pair of Nexus C93180YC-EX switches as leafs. These switches are connected to a single Nexus C9336-PQ switch as the spine using 40GbE connections. MX9116n FSE switches are connected to the C93180YC-EX leafs using 100GbE cables.
The following section provides an overview of the topology and configuration steps. For detailed configuration instructions, refer to the Dell EMC PowerEdge MX SmartFabric and Cisco ACI Integration Guide.
Caution: The connection of an MX switch directly to the ACI spine is not supported.
Figure 1 Validated SmartFabric and ACI environment logical topology
This blog is categorized into four major parts:
Cisco APIC provides a single point of automation and fabric element management in both virtual and physical environments. It helps the operators build fully automated and scalable multi-tenant networks.
To understand the required protocols, policies, and features that you must configure to set up the Cisco ACI, log in to the Cisco APIC controller and complete the steps shown in the following flowcharts.
CAUTION: Ensure all the required hardware is in place and all the connections are made as shown in the above logical topology.
Note: If a storage area network protocol (such as FCoE) is configured, Dell Technologies suggests that you use CDP as the discovery protocol on ACI and vCenter, while LLDP remains disabled on the MX SmartFabric.
The PowerEdge MX platform is a unified, high-performance data center infrastructure. It provides the agility, resiliency, and efficiency to optimize a wide variety of traditional and new, emerging data center workloads and applications. With its kinetic architecture and agile management, PowerEdge MX dynamically configures compute, storage, and fabric; increases team effectiveness; and accelerates operations. The responsive design delivers the innovation and longevity that customers need for their IT and digital business transformations.
VMware vCenter is an advanced centralized management platform. The flowchart below assumes that you have completed the following prerequisites:
OMNI is an external plug-in for VMware vCenter that is designed to complement SFS by integrating with VMware vCenter to perform fabric automation. This integration automates VLAN changes that occur in VMware vCenter and propagates those changes into the related SFS instances running on the MX platform, as shown in the following flowchart figure.
The combination of OMNI and Cisco ACI vCenter integration creates a fully automated solution. OMNI and the Cisco APIC recognize and allow a VLAN change to be made in vCenter, and this change will flow through the entire solution without any manual intervention.
For more information about OMNI, see the SmartFabric Services for OpenManage Network Integration User Guide on the Dell EMC OpenManage Network Integration for VMware vCenter documentation page.
Figure 2 OMNI integration workflow
A single MX7000 chassis may also join an existing Cisco ACI environment by using the MX5108n ethernet switch. The MX chassis in this example has two MX5108n ethernet switches and two MX compute sleds.
The connections between the ACI environment and the MX chassis are made using a double-sided multi-chassis link aggregation group (MLAG). The MLAG is called a vPC on the Cisco ACI side and a VLT on the PowerEdge MX side. The following figure shows the environment.
Figure 3 SmartFabric and ACI environment using MX5108n Ethernet switches logical topology
ACI: Cisco Application Centric Infrastructure
AEP: Attachable Access Entity Profile
APIC: Cisco Application Policy Infrastructure Controller
CDP: Cisco Discovery Protocol
EPG: End Point Group
LLDP: Link Layer Discovery Protocol
MCP: Mis-Cabling Protocol
MCM: Multi Chassis Management Group
MLAG: Multi-chassis link aggregation group
MX FSE: Dell MX Fabric Switching Engine
MX FEM: Dell MX Fabric Expander Module
MX IOM: Dell MX I/O Module
MX MCM: Dell MX Multichassis Management Group
OME-M: Dell OpenManage Enterprise-Modular
OMNI: Dell OpenManage Network Integration
PC: Port Channel
STP: Spanning Tree Protocol
VCSA: VMware vCenter Server Appliance
vDS: Virtual Distributed Switch
VLAN: Virtual Local Area Network
VM: Virtual Machine
VMM: VMware Virtual Machine Manager
vPC: Virtual Port Channel
VRF: Virtual Routing and Forwarding
Documentation and Support
Dell EMC PowerEdge MX Networking Deployment Guide
Dell EMC PowerEdge MX SmartFabric and Cisco ACI Integration Guide.
Networking Support & Interoperability Matrix
Dell EMC PowerEdge MX VMware ESXi with SmartFabric Services Deployment Guide
Sat, 10 Dec 2022 01:45:06 -0000
The Dell PowerEdge MX platform with OME-M 1.30.00 and later enables you to replace the MX I/O Module (IOM) and MX Fibre Channel (FC) switch module. This capability lets you recover from persistent errors, hardware failures, or other valid reasons for replacing a module. You can replace the IOM through the OME-M Graphical User Interface (GUI) in SmartFabric mode, or manually through the Command Line Interface (CLI) in Full Switch mode.
This blog describes how to:
Note: Before starting the MX module switch replacement process, contact Dell Technical Support. For technical support, go to https://www.dell.com/support or call (USA) 1-800-945-3355.
Note: The new IOM ships with the OS10 factory default version and settings, and all ports are in no shutdown mode by default.
Note: As a best practice, manually back up the IOM startup configuration on a regular basis. If a configuration is not available for the faulty IOM, you must configure the IOM through the standard initial setup process.
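For reference, a manual backup from the OS10 CLI can be as simple as saving the running configuration and then copying the startup file to a named backup (the backup filename here is only an example):
OS10# copy running-configuration startup-configuration
OS10# copy config://startup.xml config://backup-3-22.xml
See the Manual backup of IOM configuration through the CLI link below for more detail.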
The PowerEdge MX platform allows replacement of an MXG610s FC switch module. The MXG610s has a flexible architecture that enables you to dynamically scale connectivity and bandwidth with the latest generation of Fibre Channel for the PowerEdge MX7000 platform. The MXG610s features up to 32 Fibre Channel ports, which auto-negotiate to 32, 16, or 8 Gbps speed.
Note: Never leave the slot on the blade server chassis open for an extended period. To maintain proper airflow, fill the slot with either a replacement switch module or filler blade.
Note: As a best practice, manually back up the IOM startup configuration on a regular basis. If a configuration is not available for the faulty IOM, you must configure the IOM through the standard initial setup process.
MX SmartFabric mode IOM replacement process
MX Full Switch mode IO module replacement process
MXG610 Fibre Channel switch module replacement process
Upgrading Dell EMC SmartFabric OS10
Manual backup of IOM configuration through the CLI
Interactive Demo: OpenManage Enterprise Modular for MX solution management
Fri, 09 Dec 2022 20:24:42 -0000
An essential part of any data center disaster recovery plan is the ability to back up and restore the infrastructure. As a best practice, logs, server settings, routing information, and switch configurations should be backed up, with several copies secured in multiple locations.
The Dell Technologies PowerEdge MX Platform OME-M 1.40.00 release includes a significant new addition to the backup and restore feature: SmartFabric settings are now included.
This blog describes the full network backup and restore process, with emphasis on the SmartFabric settings and complete IOM startup.xml configuration.
In the following scenarios, you might need to restore the MX7000 chassis:
Note: If the MX chassis is in a MultiChassis Management (MCM) group, the backup will only be performed on the lead chassis. Member chassis do not need to be backed up because they inherit the configuration from the lead chassis.
MX platform backups include the following configurations and settings:
OME-M 1.40.00 introduces the following:
The OME-M Chassis Backup wizard includes chassis settings and configurations, but it does not include the I/O Modules (IOMs) configuration. Let’s get started by backing up the IOM configurations manually through the CLI.
Manual backup of IOM configuration provides a backup of the running configuration. The running configuration contains the current OS10 system configuration and consists of a series of OS10 commands in a text file that you can view and edit with a text editor. Copy the configuration file to a remote server or local directory as a backup or for viewing and editing.
OS10# copy running-configuration startup-configuration
OS10# copy config://startup.xml config://backup-3-22.xml
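To keep a copy of the IOM backup off the switch, you can also copy the file to a remote server. A hedged example using SCP (the server address, credentials, and directory are placeholders):
OS10# copy config://backup-3-22.xml scp://admin:password@192.168.1.100/backups/backup-3-22.xml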
The backup file is encrypted and cannot be edited. Only authorized users can retrieve and restore the file on the chassis. Provide the password and secure it in a safe place.
Note: The password must be 8 to 32 characters long and must be a combination of an uppercase, a lowercase, a special character (+, &, ?, >, -, }, |, ., !, (, ', ,, _, [, ", @, #, ), *, ;, $, ], /, §, %, =, <, :, {, I) , and a number.
This section describes the steps to restore an MX chassis and the IOM configuration.
You can use the OME-M GUI to restore the MX chassis working configuration with the backup file we created in the previous section.
The GUI doesn’t restore the IOM configuration, so you can manually restore the IOM configuration through the CLI.
Before you start the restore operation, verify that you have network access to the location where the backup file is saved.
After you restore the chassis through the OME-M GUI, restore the IOM configuration manually through the CLI:
OS10# copy config://backup-3-22.xml config://startup.xml
OS10# reload
Proceed to reboot the system? [confirm yes/no]:yes
System configuration has been modified. Save? [yes/no]:no
Caution: Reload the IOMs immediately after restoring the startup configuration, because the running configuration is automatically written to the startup.xml every 30 minutes. Reloading the IOM immediately after each startup configuration restore avoids the startup.xml being overwritten.
Field | Input |
Share Type | Select the share type where the configuration backup file is located. In our example, since we selected the NFS server option for our backup, select NFS. |
Network Share Address | Provide the NFS server NIC IP. |
Network Share Filepath | Enter the same Network Share Filepath used for the backup file, including a forward slash: /MXbackup |
Backup Filename | Type the Backup Filename with extension as shown in the figure above: MXbackup-Feb.bin. |
After the validation completes successfully, the Optional Components section is displayed.
Component | Description |
Restore File Validation Status | Displays the validation status of the restore files. |
Optional Components | Displays the components that you can select for the restore operation. |
Mandatory Components | Displays mandatory components, if applicable. A restoring chassis that is in the MCM group is a Mandatory Component. Mandatory components restore automatically in the restore process. |
Unavailable Components | Displays all other components that were not backed up during the backup process and are therefore unavailable for the restore operation. |
Notes:
Dell Technologies PowerEdge MX7000 Networking Deployment Guide
Dell Technologies OME-M for PowerEdge MX7000 Chassis User’s Guide
Dell Technologies PowerEdge MX7000 Networking Interactive Demos