Firmware Device Order for PERC H750, H755, H350, and H355 Storage Controllers (Linux Only)
Thu, 20 Jul 2023 20:10:45 -0000
Summary
Dell Technologies provides a feature for the PERC 11 family of controllers that gives users limited ability to influence the ordering of devices within Linux operating systems.
This DfD tech note is intended to educate customers about this feature and its caveats. It also provides the necessary background about device enumeration.
Introduction
PERC 11-series controllers provide a feature called Firmware Device Order (FDO) that gives operators limited control over the order of host-visible SCSI devices in compatible Linux distributions[1]. When enabled, this feature influences the Linux kernel’s SCSI device enumeration (that is, the /dev/sdXX ordering).
This feature is particularly targeted to customers transitioning from PERC 9/10 controllers to PERC 11 on Dell’s 14G PowerEdge servers who want to maintain a consistent device enumeration order.
This document describes the design, control, and limitations of this feature.
Background
Linux device enumeration
The PERC device driver presents to the Linux kernel a pseudo-SCSI (Small Computer System Interface) adapter in which the configured Virtual Drives (VDs) and Non-RAID drives are individual SCSI targets.
The PERC device driver does not directly control the SCSI disk drive enumeration. It is the kernel’s prerogative, for example, to use /dev/sda to refer to the first discovered drive. The feature described in this DfD enforces an ordering on how SCSI disk drives are revealed to the kernel.
PERC 11
PERC 11-series controllers support the concurrent existence of Non-RAID drives and Virtual Drives (VDs).
Under Linux, without Firmware Device Order enabled, the PERC driver enumerates any configured Non-RAID drives first, followed by VDs. This results in the Non-RAID drives having lower /dev/sdXX device assignments than VDs when listed alphabetically.
The ordering logic within the two groups – Non-RAID and Virtual Drives – differs between PERC H75x and PERC H35x. For details, see the following table:
Table 1. PERC 11-series default Linux enumeration
Group | Property | PERC H75x | PERC H35x |
1st | Type | Non-RAID | Non-RAID |
| Ordering | Enclosure/Slot position order | Discovery order |
2nd | Type | Virtual Drives | Virtual Drives |
| Ordering | Reverse creation order | Order of creation |
Although creating VDs while the OS is running is a supported PERC operation, note that newly created devices may not adhere to the ordering rules in Table 1. After a restart, those rules apply.
Deleting Virtual Disks out of order and then creating a new VD (that is, deleting a VD other than the last VD before creating a new one) might also alter the presentation order.
The following table represents an example configuration in which a PERC H75x controller has two VDs and two Non-RAID drives. This is the order that appears in a Linux-based operating system enumeration after the system boots.
Table 2. PERC H75x default Linux enumeration example
Type | Description | Block Device |
Non-RAID | Non-RAID in backplane slot 6 | /dev/sda |
| Non-RAID in backplane slot 7 | /dev/sdb |
Virtual Drives | Second VD created | /dev/sdc |
| First VD created | /dev/sdd |
Note that for demonstration purposes, the block device enumeration is assumed to start as /dev/sda. That may not be the case in your system if the Linux kernel discovered other SCSI attached devices prior to enumeration of the drives attached to PERC.
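As an illustration of the default (non-FDO) H75x ordering in Table 2, the following shell snippet simply walks the example configuration in its presentation order and prints the block-device name each entry would receive. It is a sketch only: the device descriptions are taken from Table 2, and the letters assume enumeration starts at /dev/sda.

```shell
# Illustration only: maps the Table 2 example configuration to the
# /dev/sdX names assigned under default (non-FDO) H75x ordering.
# Real letters depend on what other SCSI devices the kernel found first.
set -- "Non-RAID in backplane slot 6" \
       "Non-RAID in backplane slot 7" \
       "Second VD created" \
       "First VD created"
letter=a
for desc in "$@"; do
  printf '/dev/sd%s  %s\n' "$letter" "$desc"
  letter=$(printf %s "$letter" | tr 'a-y' 'b-z')   # advance to the next drive letter
done
```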
Introducing the Firmware Device Order feature
Functionality
Firmware Device Order (FDO) alters the order of device presentation to the Linux kernel. It adds a third device type: the designated boot device. When enabled, the following order is used:
- Designated boot device
- Virtual Drives (VDs)
- Non-RAID drives
Table 3. PERC 11-series FDO Linux enumeration
Order | FDO enabled |
1st | Boot device |
2nd | Virtual Drives |
3rd | Non-RAID |
Firmware Device Order requires supported PERC 11-series controller firmware and an FDO-aware Linux device driver. See the section Minimum required component versions.
Boot device
The boot device specified in the PERC controller will be presented first to the Linux kernel. The boot device may be chosen by the operator, or if none is chosen, the PERC controller automatically determines its designated boot device. Either a Virtual Drive or a Non-RAID drive can be a boot device. The PERC controller and driver use this information regardless of the system’s current boot mode and independent of whether the boot device was used to boot the current running operating system.
See the PERC 11 User’s Guide for further instructions about how to designate a boot device.
Virtual drives
After the optional boot device, the configured Virtual Drives will be presented to the Linux kernel in the order of creation (that is, the 1st VD created is presented 1st, the 2nd VD created is presented second, and so on).
Non-RAID drives
Non-RAID drives are presented after the VDs. Non-RAID drives are presented in the order of PERC’s discovery of the drives during system boot. This may not be the same as the ordering of enclosure/slot position of the drives.
Summary
The following table summarizes the Firmware Device Order behavior for PERC H75x and PERC H35x.
Table 4. PERC 11-series Firmware Device Order Linux enumeration
Group | Property | PERC H75x | PERC H35x |
1st | Type | Boot device | Boot device |
2nd | Type | Virtual Drives | Virtual Drives |
| Ordering | Creation order | Creation order |
3rd | Type | Non-RAID | Non-RAID |
| Ordering | Discovery order, not based on slot | Discovery order, not based on slot |
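The FDO presentation order can be sketched as a simple reordering: the designated boot device first, then the remaining VDs in creation order, then Non-RAID drives in discovery order. The device labels in this shell sketch are hypothetical placeholders, not names reported by any real tool.

```shell
# Sketch of FDO ordering with hypothetical device labels.
boot="VD2"                 # designated boot device (may be a VD or a Non-RAID drive)
vds="VD1 VD2 VD3"          # Virtual Drives in creation order
nonraid="NR1 NR2"          # Non-RAID drives in discovery order

order="$boot"              # boot device is always presented first
for v in $vds; do
  [ "$v" = "$boot" ] || order="$order $v"     # remaining VDs, creation order
done
for n in $nonraid; do
  [ "$n" = "$boot" ] || order="$order $n"     # Non-RAID drives, discovery order
done
echo "$order"
```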
How to enable Firmware Device Order
Overview
Firmware Device Order (FDO) is disabled by default. To enable FDO you can use the PERC System Setup Utility or the perccli utility. Note that FDO requires:
- Using or installing a compatible Linux-based operating system
- Using a compatible PERC Linux device driver
- Selecting a preferred boot device (see the Boot device section)
System setup
The PERC 11-series firmware includes a new Human Interface Infrastructure (HII) setting to enable the Firmware Device Order feature. This setting is on the Advanced Controller Properties page.
- Open the Dell PERC 11 Configuration Utility.
- Select Main Menu > Controller Management > Advanced Controller Properties.
- Select Firmware Device Order, then select the option desired.
- Confirm the change by selecting Apply Change.
Note that a system restart is necessary for an FDO enable or disable operation to take effect. See the section Manage PERC 11 Controllers Using HII Configuration Utility of the User's Guide for steps to enter and navigate in HII.
The perccli utility
You can use the perccli utility to query the current Firmware Device Order setting, and to enable/disable the feature (see the Minimum required component versions section).
To query the current setting:
# perccli /cx show deviceorderbyfirmware
To enable Firmware Device Order:
# perccli /cx set deviceorderbyfirmware=on
To disable Firmware Device Order:
# perccli /cx set deviceorderbyfirmware=off
where x is the controller instance for the PERC 11-series controller being targeted.
Note: A system restart is necessary for an FDO enable or disable operation to take effect.
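For scripting, the controller instance can be substituted into the perccli command. The helper below is a hypothetical dry-run sketch: it prints the command it would run rather than executing it, since enabling FDO takes effect only after a restart.

```shell
# Hypothetical dry-run helper: prints the perccli command for a given
# controller instance ($1) and desired FDO state ($2) instead of running it.
fdo_cmd() {
  printf 'perccli /c%s set deviceorderbyfirmware=%s\n' "$1" "$2"
}

fdo_cmd 0 on
```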
Operating system support
Overview
The Firmware Device Order feature is only supported on Linux distributions. Enabling the feature on systems that run other operating systems, such as Microsoft Windows or VMware ESXi, results in neither VDs nor Non-RAID drives being visible in those operating systems. If this is attempted, disable the feature and reboot your system. The contents of the underlying storage devices are not affected by the setting.
Linux
A Firmware Device Order compatible device driver must be used on Linux-based distributions. Using an incompatible driver causes both VDs and Non-RAID drives to be hidden from the host.
The following table lists the minimum versions of the major Linux distributions that support the Firmware Device Order feature.
Table 5. FDO enabled distributions
Distribution | Inbox driver version |
RHEL 8.2 | 07.710.50.00-rh1 |
RHEL 7.8 | 07.710.50.00-rh1 |
SLES 15 SP2 | 07.713.01.00-rc1 |
Ubuntu 20.04 LTS | 07.710.06.00-rc1 |
Notes:
Not all operating system distribution release versions listed in Table 5 may be supported by your specific system and controller combination. See the Linux OS Support Matrix on Dell.com to confirm the supported Linux distributions for your system and PERC controller.
Linux 5.x kernels and above probe for block devices asynchronously. Device ordering can be inconsistent because of this, even with FDO enabled. See the OS documentation for custom persistent device alternatives.
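One such persistent-device alternative is a udev rule that creates a stable symlink keyed to a drive attribute such as its World Wide Name, independent of the /dev/sdX letter the kernel assigns. The rule file name and WWN below are hypothetical examples; substitute values from your own system (for example, from ls -l /dev/disk/by-id).

```
# /etc/udev/rules.d/99-perc-boot.rules (hypothetical example)
# Match the disk by its World Wide Name and add a stable symlink,
# independent of the /dev/sdX letter assigned at enumeration time.
KERNEL=="sd?", ENV{ID_WWN}=="0x5000c500a1b2c3d4", SYMLINK+="disk/perc-boot"
```

After reloading the rules (udevadm control --reload, then udevadm trigger), the drive is reachable as /dev/disk/perc-boot regardless of enumeration order.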
Unsupported operating systems
Attempting to boot into an operating system running a device driver that does not support Firmware Device Order results in no storage being presented to the operating system. If PERC is your boot controller, the OS will fail to start correctly. After the system reboots, the PERC 11-series controller displays a warning indicating that an incompatible operating system driver was detected.
Figure 1. Critical message displayed with incompatible operating system
If this message appears on your system, it means that you are running an incompatible operating system with Firmware Device Order enabled. (To disable Firmware Device Order, see the System setup section).
Windows
Microsoft Windows is not supported with Firmware Device Order.
VMware ESXi
VMware ESXi is not supported with Firmware Device Order.
Minimum required component versions
This section lists the minimum PERC 11-series component versions required to use the Firmware Device Order (FDO) feature.
Table 6. FDO minimum component versions
Component | PERC H75x | PERC H35x |
Controller Firmware | 52.16.1-4074 | 52.19.1-4171 |
Linux Device Driver | 07.707.51.00-rc1 | 07.707.51.00-rc1 |
perccli Utility | 7.1604.00 | 7.1604.00 |
Note: Not all firmware, driver, and utility version combinations may be supported by your system and controller combination. Visit support.dell.com for the latest component releases for your system and PERC controller.
Summary
The new PERC 11-series Firmware Device Order (FDO) feature enables an alternate presentation order of Virtual Drives and Non-RAID drives. This feature is particularly targeted to those customers on Dell’s 14G PowerEdge servers who want to transition from PERC 9/PERC 10 to PERC 11. The FDO feature requires supporting PERC 11-series firmware, an FDO-aware device driver, and a Linux-based operating system. If you prefer, the feature can be turned off at any time to resume traditional enumeration, or to transition from a Linux environment to another operating system.
[1] Includes PERC H750, PERC H755, PERC H350, and PERC H355 storage controllers. See the Minimum required component versions section.
Related Documents
Dell PowerEdge RAID Controller 12
Wed, 10 May 2023 17:18:18 -0000
Summary
Dell Technologies’ newest RAID controller iteration, PERC 12, which uses the new Broadcom SAS4116W series chip, has increased support capabilities, including 24 Gbps SAS drives, an increased cache memory speed of 3200 MHz, a 16-lane host bus, and, most notably, a single front controller that supports both NVMe and SAS.
PERC 12 card management applications include Comprehensive Embedded Management (CEM), Dell OpenManage Storage Management, The Human Interface Infrastructure (HII) configuration utility, and the PERC command line interface (CLI). These applications enable you to manage and configure the RAID system, create and manage multiple disk groups, control and monitor multiple RAID systems, and provide online maintenance.
Introduction
As storage demands expand and processing loads grow, RAID (Redundant Array of Independent Disks) data protection has become a necessary staple for proper enterprise storage management. Dell PowerEdge RAID Controller (PERC) provides a RAID solution that is powerful and easy-to-manage for enterprise storage data protection needs.
Dell Technologies’ newest RAID controller iteration, PERC 12, has increased support capabilities: 24 Gbps SAS drives, an increased cache memory speed of 3200 MHz, a 16-lane host bus, and a single front controller that supports both NVMe and SAS.
PERC 12 PowerEdge Support
H965i Adapter controller
The PERC 12 Adapter card adds an active heat sink (fan) to the controller, providing additional cooling to keep the controller at its optimum temperature so that performance is not compromised by overheating. The controller connects directly to the motherboard using a PCIe slot and uses a SlimLine connector (or a NearStack connector) for the SAS/NVMe interfaces.
H965i Front controller
The PERC 12 Front card upgrades the hardware design compared to the previous generation controller. It combines SAS and NVMe support in a single card, eliminating the need to use different controllers for SAS- and NVMe-supported servers. The controller has a SlimLine connector (or a NearStack connector) for both PCIe and SAS/NVMe interfaces.
H965i MX controller
The PERC 12 MX card is designed specifically for MX chassis servers and includes an energy pack, similar to other form factors, for power backup in case of power loss. This helps ensure proper cache offload to avoid data loss. The controller connects directly to the motherboard using a PCIe slot and uses a SlimLine connector (or a NearStack connector) for the SAS/NVMe interfaces.
PERC 12 Supported Operating Systems
Windows Server
- Windows Server 2019
- Windows Server 2022
Linux
- RHEL 8.6
- RHEL 9.0
- SLES 15 SP4
- Ubuntu 22.04
VMware
- ESXi 7.0 U3
- ESXi 8.0
See Dell Technologies Enterprise operating systems support for a list of supported operating systems by specific server for the PERC 12 cards.
Hardware RAID Performance
NVMe Key RAID Metrics (PERC11 / PERC12)
Table 1. Latency / Rebuild
Key NVMe RAID 5 Metrics (PERC11 / PERC12)
Table 2. IOPS / Bandwidth
Key SAS RAID Metrics (PERC10 / PERC11 / PERC12)
Table 3. IOPS / Latency Reduction During Rebuild
Key SAS RAID Metrics (PERC10 / PERC11 / PERC12)
Table 4. IOPS / Bandwidth
Conclusion
Dell PowerEdge RAID Controller 12 or PERC 12 continues to innovate by supporting hardware RAID for NVMe drives. The PERC 12 series consists of PERC H965i Adapter, PERC H965i Front, and PERC H965i MX.
Dell PowerEdge Boot Optimized Storage Solution – BOSS-N1
Fri, 27 Jan 2023 21:58:02 -0000
Summary
Our latest generation HW RAID BOSS solution (BOSS-N1) incorporates NVMe Enterprise class M.2 NVMe SSDs. It includes important RAS features such as rear or front facing drives on our new rack servers and full hot-plug support, so a server does not need to be taken offline in case of an SSD failure. When operating a RAID 1 mirror, a surprise removal and addition of a new SSD automatically kicks off a rebuild on the new RAID 1 member SSD that was added, so there is no need to halt server operations.
Available on the newest generation of PowerEdge systems, BOSS-N1 provides a robust, redundant, low-cost solution for boot optimization.
Introduction
The Boot Optimized Storage Solution (BOSS-N1) provides key, generational feature improvements to the highly popular BOSS subsystem and its existing value proposition. It incorporates an NVMe interface to the M.2 SSDs to ensure high performance and the latest technology. BOSS was originally designed to provide a highly reliable, cost-effective solution for separating operating system boot drives from data drives on server-internal storage. Many customers, particularly those in the Hyperconverged Infrastructure (HCI) arena and those implementing Software Defined Storage (SDS), require separating their OS drives from data drives. They also require hardware RAID mirroring (RAID 1) for their OS drives. The main motivation for this is to create a server configuration optimized for application data. Providing a separate, redundant disk solution for the OS enables a more robust and optimized compute platform.
Figure 1. Installing the BOSS-N1 monolithic controller module
The Boot Optimized Storage Solution (BOSS-N1) is a simple, highly reliable, and cost-effective solution that meets the requirements of our customers. The NVMe M.2 devices offer performance similar to 2.5” SSDs, and support rear- or front-facing drive accessibility with full hot-plug support, including surprise removal, on monolithic platforms. This design frees up and maximizes available drive slots for data requirements.
BOSS-N1 provides a secure way of updating the controller firmware:
- Each firmware component is authenticated before being stored to a firmware slot.
- Authentication requires an asymmetric public/private key pair. This protected key pair is uniquely generated for Dell through a hardware security module (HSM) server.
- BOSS-N1 firmware can be updated using a DUP from both in-band (operating system) and out-of-band (iDRAC) interfaces.
You can manage BOSS-N1 with standard well-known management tools such as iDRAC, OpenManage Systems Administrator (OMSA), and the BOSS-N1 Command Line Interface (CLI).
BOSS-N1 hardware components
Figure 2. BOSS-N1 monolithic card
Figure 3. BOSS-N1 modular
Key features of BOSS-N1:
- Supports one (1) or two (2) 80 mm M.2 Enterprise Class NVMe SSDs
- M.2 devices are read-intensive (1 DWPD) with 480GB or 960GB capacity
- Fixed function hardware RAID 1 (mirroring) or single drive RAID 0
- Rear or front facing module for quick and easy accessibility to the M.2 SSDs on monolithic platforms
- Full hot-plug support on monolithic platforms
- M.2 drive LED functionality on monolithic platforms
- Managing BOSS-N1 is accomplished with standard, well-known management tools such as iDRAC, OpenManage Systems Administrator (OMSA), and the BOSS-N1 Command Line Interface (CLI)
BOSS-N1 supported operating systems
Windows Servers
- Windows Server 2019
- Windows Server 2022
Linux
- RHEL 8.6
- SLES 15 SP4
- Ubuntu 20.04.4
VMware
- ESXi 7.0 U3
- ESXi 8.0
References
- For more information about BOSS-N1, see the BOSS-N1 User’s Guide.
- For more information about iDRAC, such as the iDRAC User’s Guide and the iDRAC Release Notes, see the Dell Support site.
- For more information about OpenManage Server Administrator, see the OMSA 9.5 User’s Guide.