Windows Admin Center is a browser-based management tool developed by Microsoft to monitor and manage Windows servers, failover clusters, and hyperconverged clusters.
The AX nodes for Storage Spaces Direct offer software-defined storage building blocks for creating highly available and highly scalable hyperconverged infrastructure (HCI). The AX nodes are preconfigured with certified components and validated as a Storage Spaces Direct solution that includes Dell EMC PowerSwitch S-Series switches, with simplified ordering and reduced deployment risk. Dell Technologies offers configuration options within these building blocks to meet various capacity and performance requirements. With Windows Admin Center, you can seamlessly monitor and manage the HCI clusters that are created on these building blocks.
You can download Windows Admin Center version 1910.02 from the Microsoft Download Center and install it on Windows 10, Windows Server 2016, Windows Server 2019, or Windows Server version 1709. You can install Windows Admin Center directly on a managed node to manage itself, on other nodes in the infrastructure, or on a separate management station to manage the AX nodes remotely. You can implement high availability for Windows Admin Center by using failover clustering. When Windows Admin Center is deployed on the nodes of a failover cluster, it works in active/passive mode, providing a highly available Windows Admin Center instance.
The Windows Admin Center installer wizard performs the configuration tasks that are required for Windows Admin Center functionality, including creating a self-signed certificate and configuring trusted hosts for remote node access. Optionally, you can supply the thumbprint of a certificate that is already present in the local certificate store of the target node. By default, Windows Admin Center listens on port 443; you can change the port during installation.
Note: The automatically generated self-signed certificate expires in 60 days. Ensure that you use a certificate authority (CA)-provided SSL certificate if you intend to use Windows Admin Center in a production environment.
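For unattended deployments, Windows Admin Center can also be installed silently through msiexec. The following is a minimal sketch based on the Microsoft-documented installer parameters; the installer file name, port, and certificate thumbprint are placeholders:
# Generate a self-signed certificate and listen on port 443
msiexec /i <WindowsAdminCenterInstaller>.msi /qn /L*v install.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate
# Alternatively, use a CA-provided certificate that is already in the local certificate store
msiexec /i <WindowsAdminCenterInstaller>.msi /qn /L*v install.log SME_PORT=443 SME_THUMBPRINT=<CertificateThumbprint> SSL_CERTIFICATE_OPTION=installed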
For complete guidance on installing Windows Admin Center on Windows Server 2016 and Windows Server 2019 with desktop experience or Server Core, see https://docs.microsoft.com/windows-server/manage/windows-admin-center/deploy/install.
Note: This section assumes that you have deployed the Azure Stack HCI cluster from Dell by using the deployment guidance that is available at: https://dell.com/azurestackhcimanuals.
After the installation is complete, you can access Windows Admin Center at https://<ManagementStationName>:<PortNumber>.
Figure 2: Windows Admin Center start screen
For monitoring and management purposes, add the hyperconverged cluster that is based on Dell EMC Solutions for Azure Stack HCI as a connection in Windows Admin Center.
Figure 3: HCI cluster navigation
The Add Cluster window is displayed.
Figure 4: Adding the HCI cluster
Windows Admin Center discovers the cluster and the nodes that are part of the cluster.
The cluster is added to the connection list and Windows Admin Center is configured to monitor and manage the HCI cluster.
To view the dashboard for the HCI cluster that you have added to Windows Admin Center, in the Cluster Manager window, click the cluster name.
This dashboard provides the real-time performance view from the HCI cluster. This view includes total IOPS, average latency values, throughput achieved, average CPU usage, memory usage, and storage usage from all cluster nodes. It also provides a summarized view of the Azure Stack HCI cluster with drives, volumes, and virtual machine health.
You can drill down into any alerts by clicking the alerts tile in the dashboard.
Figure 5: HCI dashboard in Windows Admin Center
To view the server details, in the Tools pane, go to Servers > Inventory.
Figure 6: Servers: Inventory tab
Note: The metrics in the figure are for a three-node Azure Stack HCI cluster with all-flash drive configuration.
View the total number of drives in the cluster, the health status of the drives, and the used, available, and reserve storage of the cluster as follows.
To view the drive inventory from the cluster nodes, from the left pane, select Drives, and then click the Inventory tab.
Figure 8: Drives: Inventory tab
The HCI cluster is built using three AX-640 nodes, each with two 1.6 TB NVMe drives and eight 1.92 TB SSD drives.
By clicking the serial number of the drive, you can view the drive information, which includes health status, slot location, size, type, firmware version, IOPS, used or available capacity, and storage pool of the drive.
From the dashboard, you can also turn the drive indicator light on or off (Light On/Light Off) and retire or unretire the drive from the storage pool.
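These drive actions can also be performed with the Storage module cmdlets. A minimal sketch, assuming a hypothetical drive serial number:
# Identify the drive by its serial number (placeholder value)
$disk = Get-PhysicalDisk -SerialNumber '<DriveSerialNumber>'
# Turn the drive indicator light on or off
$disk | Enable-PhysicalDiskIndication
$disk | Disable-PhysicalDiskIndication
# Retire the drive from the storage pool, or return it to automatic use
$disk | Set-PhysicalDisk -Usage Retired
$disk | Set-PhysicalDisk -Usage AutoSelect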
You can manage and monitor the Storage Spaces Direct volumes using Windows Admin Center.
Windows Admin Center supports volume operations such as creating, opening, expanding, deleting, and taking volumes offline, as described in the following sections.
To access the volumes on the HCI cluster, select the cluster and, in the left pane, click Volumes. In the right pane, the Summary and Inventory tabs are displayed.
The Summary tab shows the number of volumes in the cluster and the health status of the volumes, alerts, total IOPS, latency, and throughput information of the available volumes.
Figure 9: Volumes: Summary tab
The Inventory tab provides the volume inventory from the HCI cluster nodes and enables you to manage and monitor the volumes.
Figure 10: Volumes: Inventory tab
Create volumes in Storage Spaces Direct in Windows Admin Center as follows.
Open, expand, delete, or take a volume offline as follows.
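These volume operations can also be performed with the Storage module cmdlets. A minimal sketch, assuming a hypothetical volume name and size:
# Create a mirrored CSV volume in the Storage Spaces Direct pool
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'Volume01' -FileSystem CSVFS_ReFS -Size 1TB
# Delete the volume by removing the underlying virtual disk
Remove-VirtualDisk -FriendlyName 'Volume01' -Confirm:$false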
Data deduplication helps to maximize free space on the volume by optimizing duplicated portions on the volume without compromising data fidelity or integrity.
Note: To enable data deduplication on an HCI cluster, ensure that the data deduplication feature is enabled on all the cluster nodes. To enable the data deduplication feature, run the following PowerShell command: Install-WindowsFeature FS-Data-Deduplication.
Enable data deduplication and compression on a Storage Spaces Direct volume as follows.
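Deduplication can also be enabled from PowerShell. A minimal sketch, assuming a hypothetical CSV path; the HyperV usage type suits volumes that host virtual machine files:
# Enable deduplication and compression on the volume
Enable-DedupVolume -Volume 'C:\ClusterStorage\Volume01' -UsageType HyperV
# Review the deduplication status and savings
Get-DedupStatus -Volume 'C:\ClusterStorage\Volume01'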
You can use Windows Admin Center to monitor and manage the virtual machines that are hosted on the HCI cluster.
To access the virtual machines that are hosted on the HCI cluster, click the cluster name and, in the left pane, select Virtual machines. In the right pane, the Summary tab and the Inventory tab are displayed.
The Summary tab provides the following information about the virtual machine environment of the HCI cluster:
Figure 11: Virtual machines: Summary tab
The Inventory tab provides a list of the virtual machines that are hosted on the HCI cluster and provides access to manage the virtual machines.
Figure 12: Virtual machines: Inventory tab
You can perform common virtual machine management tasks, such as creating, starting, stopping, moving, and deleting virtual machines, from the Windows Admin Center console.
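Equivalent operations are available through the Hyper-V and FailoverClusters PowerShell modules. A minimal sketch, assuming hypothetical cluster, virtual machine, and node names:
# List the virtual machines across all cluster nodes
Get-VM -ComputerName (Get-ClusterNode -Cluster 'S2DCluster').Name
# Live migrate a virtual machine to another node
Move-ClusterVirtualMachineRole -Cluster 'S2DCluster' -Name 'VM01' -Node 'Node02' -MigrationType Live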
The virtual switches tool in Windows Admin Center enables you to manage Hyper-V virtual switches of the cluster nodes.
The virtual switches tool supports creating, modifying, and deleting Hyper-V virtual switches on the cluster nodes.
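A virtual switch can also be created from PowerShell. A minimal sketch of a Switch Embedded Teaming (SET) switch, assuming hypothetical switch and adapter names:
# Create a SET-enabled virtual switch over two physical adapters
New-VMSwitch -Name 'S2DSwitch' -NetAdapterName 'SLOT 1 Port 1','SLOT 1 Port 2' -EnableEmbeddedTeaming $true -AllowManagementOS $false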
You can manage Windows updates on a cluster node. All the updates are performed in cluster-aware mode.
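Cluster-aware updating can also be started from PowerShell. A minimal sketch, assuming a hypothetical cluster name:
# Run a cluster-aware update pass using the Windows Update plug-in
Invoke-CauRun -ClusterName 'S2DCluster' -CauPluginName 'Microsoft.WindowsUpdatePlugin' -Force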
Dell EMC OpenManage Integration with Microsoft Windows Admin Center enables IT administrators to manage the hyperconverged infrastructure (HCI) that is created by using Dell EMC Solutions for Microsoft Azure Stack HCI. Dell EMC OpenManage Integration with Microsoft Windows Admin Center simplifies the tasks of IT administrators by remotely managing the AX nodes and clusters throughout their life cycle.
For more information about the features, benefits, and installation of Dell EMC OpenManage Integration with Microsoft Windows Admin Center, see the documentation at https://Dell.com/OpenManageManuals.
Prerequisites for managing AX nodes are:
Dell EMC AX nodes have a preinstalled Azure Stack HCI license. Storage Spaces Direct Ready Nodes require the installation of an After Point of Sale (APOS) license.
Health Status is the default dashboard that provides details about the Azure Stack HCI cluster nodes.
Figure 14: Health Status dashboard
On the Cluster - Azure Stack HCI page, click the Health Status tab to view the overall health status of the HCI cluster and the health status of the following components of the Azure Stack HCI cluster and nodes:
Selecting the Critical or Warning section in the overall health status doughnut chart displays the nodes and components that are in the critical or warning state respectively.
Select sections in the doughnut chart to filter the health status of the components. For example, selecting the red section displays only the components with critical health status.
Selecting sections of the chart for individual components shows the respective nodes with the component health status listed. Expand the nodes to view the components.
The Inventory tab lists the servers that are part of the cluster.
Clicking a server name on the inventory list provides details about the following components:
Locating physical disks and viewing their status
The Blink/Unblink feature of Windows Admin Center enables you to locate physical disks or view disk status.
Clicking the iDRAC tab displays the Integrated Dell Remote Access Controller dashboard. The dashboard lists the servers that are part of the Azure Stack HCI cluster. By selecting each iDRAC, you can view iDRAC details, such as the iDRAC firmware version and iDRAC IP of the target node, and can directly launch the iDRAC console.
Use the Settings tab in the Dell EMC OpenManage Integration with Windows Admin Center UI to view the latest update compliance report, update the cluster, and configure proxy settings.
To view the latest update compliance report and update the cluster using an offline catalog, Dell EMC OpenManage Integration with Windows Admin Center requires that you configure the settings for the update compliance tools.
In the OpenManage Integration UI, on the Update Tools page under the Settings tab, specify the download locations for the following update compliance tools:
To download the online catalog, update tools, and Dell Update Packages, configure the proxy settings if your environment requires a proxy server to access the Internet.
Configure the proxy settings under the Settings tab in the Dell EMC OpenManage Integration UI.
Viewing update compliance and updating the cluster
Dell EMC OpenManage Integration with Windows Admin Center enables you to view the update compliance details (firmware, driver, application, and BIOS). You can update the Azure Stack HCI cluster by using the cluster-aware update feature of OpenManage Integration.
Use the Update tab of the OpenManage Integration with Windows Admin Center UI to view update compliance and update the cluster.
Note: Cluster Aware Update is a licensed feature. Ensure that the Azure Stack HCI license is installed before proceeding.
Figure 17: Cluster Aware Update
When the update job is completed, the compliance job is triggered automatically.
Before creating a cluster, ensure that each node is updated with the latest versions of firmware and drivers.
The following table lists known issues and workarounds related to OpenManage Integration with Microsoft Windows Admin Center with Dell EMC Solutions for Microsoft Azure Stack HCI clusters.
Issue | Resolution/workaround |
Running Test-Cluster fails with network communication errors. With the USB NIC enabled in iDRAC, if you run the Test-Cluster command to verify cluster creation readiness or cluster health, the validation report includes an error indicating that the IPv4 addresses assigned to the host operating system USB NIC cannot be used to communicate with the other cluster networks. | This error can be safely ignored. To avoid the error, temporarily disable the USB NIC (labeled Ethernet by default) before running the Test-Cluster command. |
The USB NIC network appears as a partitioned cluster network. When the USB NIC is enabled in iDRAC, cluster networks in the failover cluster manager show the networks associated with the USB NIC as partitioned. This issue occurs because cluster communication is enabled by default on all network adapters, and USB NIC IPv4 addresses cannot be used to communicate externally, which breaks cluster communication on those NICs. | Remove the USB NIC from any cluster communication by using a script that begins with the following command (an extended sketch follows this table): $rndisAdapter = Get-NetAdapter -InterfaceDescription 'Remote NDIS Compatible Device' -ErrorAction SilentlyContinue |
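The workaround script in the table is truncated. The following is a hedged sketch of how it might continue, excluding the USB NIC network from cluster use by setting its cluster network role to None; the adapter description matches the iDRAC OS pass-through NIC:
# Find the USB NIC exposed by iDRAC (Remote NDIS device)
$rndisAdapter = Get-NetAdapter -InterfaceDescription 'Remote NDIS Compatible Device' -ErrorAction SilentlyContinue
if ($rndisAdapter) {
    # Locate the cluster network that is backed by the USB NIC
    $usbNetwork = (Get-ClusterNetworkInterface |
        Where-Object { $_.Adapter -eq $rndisAdapter.InterfaceDescription } |
        Select-Object -First 1).Network
    # Role 0 (None) excludes the network from cluster communication
    if ($usbNetwork) { (Get-ClusterNetwork -Name "$usbNetwork").Role = 0 }
}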
Dell EMC OpenManage Integration for Microsoft System Center is an appliance-based integration with the System Center suite of products.
OpenManage Integration for Microsoft System Center enables full life-cycle management of Dell EMC PowerEdge servers by using integrated Dell Remote Access Controller (iDRAC) with Lifecycle Controller (LC).
OpenManage Integration for Microsoft System Center offers operating system deployment, Azure Stack HCI cluster creation, hardware patching, firmware updating, and maintenance of servers and modular systems. Integrate OpenManage Integration for Microsoft System Center with Microsoft System Center Virtual Machine Manager (SCVMM) to manage your PowerEdge servers in virtual and cloud environments.
Note: This method is applicable only for Storage Spaces Direct Ready Nodes.
Perform compliance checks, bare-metal firmware updates, and cluster-aware firmware updates. To perform these tasks, within SCVMM, first discover the Storage Spaces Direct Ready Nodes and create or edit an update source.
Before performing these tasks, ensure that:
Discovering the Storage Spaces Direct Ready Nodes
To perform compliance checks and firmware updates, first discover the Storage Spaces Direct Ready Nodes.
Creating or editing an update source
After discovering the Storage Spaces Direct Ready Nodes, create or edit an update source before performing compliance checks and firmware updates within SCVMM.
Using the online catalog:
Using the offline (Dell Repository Manager) catalog:
Updating the firmware on a bare-metal server
With OpenManage Integration for Microsoft System Center, you can update the firmware on a bare-metal server immediately or schedule the firmware update.
Updating the firmware with the cluster-aware feature
With OpenManage Integration for Microsoft System Center, you can update firmware or schedule firmware updates using the cluster-aware feature.
Use the Filter Updates menu to filter the compliance report based on the nature of the update, component type, or server model.
The compliance report of the servers in the selected group is displayed.
These procedures describe how to prepare for and conduct maintenance operations.
Use the following PowerShell commands to verify that all requirements are met and that no faults exist before placing an AX node in an Azure Stack HCI cluster into maintenance mode.
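A minimal sketch of such pre-checks; on a healthy cluster, each command is expected to return no output (and no running jobs):
# Verify that all virtual and physical disks are healthy
Get-VirtualDisk | Where-Object { $_.HealthStatus -ne 'Healthy' }
Get-PhysicalDisk | Where-Object { $_.HealthStatus -ne 'Healthy' }
# Verify that no storage repair jobs are running
Get-StorageJob
# Verify that all cluster nodes are up
Get-ClusterNode | Where-Object { $_.State -ne 'Up' }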
After ensuring that the prerequisites are met and before performing the platform updates, place the AX node in maintenance mode (pause and drain). Pausing and draining the node moves its roles and virtual machines to other cluster nodes and gracefully flushes and commits data on the AX node.
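A minimal sketch of entering maintenance mode; the hostname is a placeholder, and the commands mirror the resume commands shown later in this section:
# Pause the node and drain its roles and virtual machines
Suspend-ClusterNode -Name "<Hostname>" -Drain -Wait
# Place the node's drives in storage maintenance mode
Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object { $_.FriendlyName -eq "<Hostname>" } | Enable-StorageMaintenanceMode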
For a qualified set of firmware and drivers for AX nodes or Ready Nodes, Dell Technologies recommends using an Azure Stack HCI catalog.
You can generate the firmware catalog along with the firmware and drivers by using Dell EMC Repository Manager (DRM) and copy it to a shared path.
AX nodes offer device firmware updates remotely through the integrated Dell Remote Access Controller (iDRAC). For Azure Stack HCI clusters, the recommended option is to use an Azure Stack HCI catalog for a qualified set of firmware and BIOS. Generate the latest Dell EMC Azure Stack HCI catalog file through Dell EMC Repository Manager (DRM) and copy the file to a network location before proceeding with the update process.
A list of available updates is displayed, as shown in the following figure.
For certain system components, you might need to update the drivers to the latest Dell supported versions, which are listed in the Supported Firmware and Software Matrix.
Run the following PowerShell command to retrieve the list of all driver versions that are installed on the local system:
Get-PnpDevice |
    Select-Object Name, @{l='DriverVersion';e={(Get-PnpDeviceProperty -InstanceId $_.InstanceId -KeyName 'DEVPKEY_Device_DriverVersion').Data}} -Unique |
    Where-Object {($_.Name -like "*HBA330*") -or ($_.Name -like "*mellanox*") -or ($_.Name -like "*Qlogic*") -or ($_.Name -like "*X710*") -or ($_.Name -like "*intel*") -or ($_.Name -like "*Broadcom*")}
After you identify the required driver version, download the driver installers from https://www.dell.com/support or by using the Dell EMC Repository Manager (DRM) as described in Obtaining the firmware catalog for AX nodes or Ready Nodes using the Dell EMC Repository Manager.
After downloading the drivers, copy them to the AX nodes, and then manually run the driver DUP files to install the drivers.
After updating the AX node, exit the storage maintenance mode and node maintenance mode by running the following commands:
Get-StorageFaultDomain -type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "<Hostname>"} | Disable-StorageMaintenanceMode
Resume-ClusterNode -Name "<Hostname>" -Failback Immediate
These commands initiate a rebuild and rebalance of the data to ensure load balancing across the cluster.
For the remaining cluster nodes, repeat the preceding procedures for conducting maintenance operations.
Expanding cluster compute or storage capacity is a task performed during cluster operations. This section provides instructions for performing these tasks.
Example
Figure 20: Expanding the Azure Stack HCI cluster
In an HCI cluster, adding server nodes increases the storage capacity, improves the overall storage performance of the cluster, and provides more compute resources to add virtual machines. Before adding new server nodes to an HCI cluster, complete the following requirements:
Table 2: Options to expand storage capacity of the cluster
Option 1 conditions | Option 2 conditions |
Ensure that the following tasks are completed:
Note: The procedure is applicable only if the cluster and Storage Spaces Direct configuration was performed manually.
To manually add server nodes to the cluster, see https://technet.microsoft.com/windows-server-docs/storage/storage-spaces/add-nodes.
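A minimal sketch of adding a node from PowerShell, assuming hypothetical cluster and node names; Storage Spaces Direct automatically claims the eligible drives of the new node:
# Add the new node to the cluster
Add-ClusterNode -Cluster 'S2DCluster' -Name 'NewNode01'
# Monitor the subsequent storage rebalance jobs
Get-StorageJob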
In an HCI cluster, expanding storage by adding drives to the available slots on the cluster nodes increases the storage capacity of the cluster and improves storage performance. Before expanding storage, ensure that the type and number of disks added to each node are identical and match those of the disks already in use. Do not combine two different disk types in the same cluster or node. For example, you cannot combine SATA and SAS HDD/SSD drives in the same node or cluster.
The following options for expanding the storage capacity of the cluster are supported:
When new disks are added to extend the overall storage capacity per node, the Azure Stack HCI cluster starts claiming the physical disks into an existing storage pool.
After the drives are added, they are shown as available for pooling (CanPool set to True) in the output of the Get-PhysicalDisk command.
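A minimal sketch for confirming that the new drives are eligible for pooling:
# List drives that are available for pooling
Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, SerialNumber, MediaType, Size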
Within a few minutes, the newly added disks are claimed in the existing pool and Storage Spaces Direct starts the rebalance job. Run the following command to verify that the new disks are a part of the existing pool:
PS C:\> Get-StorageSubSystem -FriendlyName *Cluster* | Get-StorageHealthReport
CPUUsageAverage : 2.66 %
CapacityPhysicalPooledAvailable : 8.01 TB
CapacityPhysicalPooledTotal : 69.86 TB
CapacityPhysicalTotal : 69.86 TB
CapacityPhysicalUnpooled : 0 B
CapacityVolumesAvailable : 15.09 TB
CapacityVolumesTotal : 16.88 TB
IOLatencyAverage : 908.13 us
IOLatencyRead : 0 ns
IOLatencyWrite : 908.13 us
IOPSRead : 0 /S
IOPSTotal : 1 /S
IOPSWrite : 1 /S
IOThroughputRead : 0 B/S
IOThroughputTotal : 11.98 KB/S
IOThroughputWrite : 11.98 KB/S
MemoryAvailable : 472.87 GB
MemoryTotal : 768 GB
After all available disks are claimed in the storage pool, the CapacityPhysicalUnpooled value is 0 B.
The storage rebalance job might take a few minutes. You can monitor the process by using the Get-StorageJob cmdlet.
You can resize volumes that are created in Storage Spaces Direct storage pools by using the Resize-VirtualDisk cmdlet. For more information, see https://technet.microsoft.com/windows-server-docs/storage/storage-spaces/resize-volumes.
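A minimal sketch, assuming a hypothetical volume name and target size; after growing the virtual disk, extend the partition and file system to use the new space:
# Grow the virtual disk that backs the volume
Get-VirtualDisk -FriendlyName 'Volume01' | Resize-VirtualDisk -Size 2TB
# Extend the partition to the new maximum supported size
$partition = Get-VirtualDisk -FriendlyName 'Volume01' | Get-Disk | Get-Partition | Where-Object { $_.Type -eq 'Basic' }
$partition | Resize-Partition -Size ($partition | Get-PartitionSupportedSize).SizeMax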
If a cluster node fails, perform node operating system recovery in a systematic manner to ensure that the node is brought up with the configuration that is consistent with other cluster nodes.
The following sections provide details about operating system recovery and post-recovery configuration that is required to bring the node into an existing Azure Stack HCI cluster.
Note: To perform node recovery, ensure that the operating system is reinstalled.
Dell EMC PowerEdge servers offer the Boot Optimized Storage Solution (BOSS) S-1 controller as an efficient and economical way to separate the operating system from data on the internal storage of the server. The BOSS S-1 solution in the latest generation of PowerEdge servers uses one or two BOSS M.2 SATA devices to provide RAID 1 capability for the operating system drive.
Note: All Dell EMC Solutions for Azure Stack HCI are configured with hardware RAID 1 for the operating system drives on BOSS M.2 SATA SSD devices. The steps in this section are required only when recovering a failed cluster node. Before creating a new RAID, the existing or failed RAID must be deleted.
This procedure describes the process of creating operating system volumes.
Figure 22: Create a virtual disk
Figure 23: Provide virtual disk name
Figure 25: Initialize configuration
Figure 26: Virtual disk health status
This section provides an overview of the steps involved in operating system recovery on Dell EMC Solutions for Azure Stack HCI.
Note: Ensure that the RAID 1 virtual disk created on the BOSS M.2 drives is reinitialized.
Note: To help reduce repair times when the node is added back to the same cluster after recovery, do not reinitialize or clear the data on the disks that were part of the Storage Spaces Direct storage pool.
For manually deployed nodes, you can recover the operating system on the node by using any of the methods that were used for operating system deployment.
For the factory-installed OEM license of the operating system, Dell recommends that you use the operating system recovery media that shipped with the PowerEdge server. Using this media for operating system recovery ensures that the operating system stays activated after the recovery, whereas using any other operating system media triggers the need for activation after deployment. Deploying the operating system from the recovery media follows the same process as a retail or other media-based installation.
After completing the operating system deployment using the recovery media, perform the following steps to bring the node into an existing Azure Stack HCI cluster:
For instructions on steps 1 through 7, see the Dell EMC Solutions for Microsoft Azure Stack HCI Deployment Guide.