Many settings are common across the solution. This section outlines the configurations that were tested.
Solution Admin Host (SAH) networking
The Solution Admin Host is configured for 25GbE, with internal bridged networks on the server for the virtual machines. It is physically connected to the following networks:
- Management Network—Used by the Red Hat OpenStack Director for iDRAC control of all Overcloud nodes.
- Private API Network—Used by the Red Hat OpenStack Director to run Tempest tests against the OpenStack private API.
- Provisioning Network—Used by the Red Hat OpenStack Director to serve DHCP to all hosts, provision each host, and act as a proxy for external network access.
- Public API Network—Used for:
  - Inbound access:
    - HTTP/HTTPS access to the Red Hat OpenStack Director
    - Optional: SSH access to the Red Hat OpenStack Director
  - Outbound access:
    - HTTP/HTTPS access for Red Hat Ceph storage, RHEL, and RHOSP subscriptions
    - Tempest tests run by the Red Hat OpenStack Director using the OpenStack public API
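The bridged networks on the SAH can be sketched as follows. This is a minimal illustration only, assuming hypothetical interface and bridge names (`ens1f0`, `br-prov`), a hypothetical VLAN ID, and that NetworkManager's `nmcli` is used; the actual deployment uses its own tooling and names.

```shell
# Hypothetical example: create a bridge for the Provisioning network
# and enslave a tagged VLAN subinterface of the 25GbE NIC to it, so
# SAH virtual machines can attach to that network.
nmcli con add type bridge ifname br-prov con-name br-prov
nmcli con add type vlan ifname ens1f0.110 dev ens1f0 id 110 \
    master br-prov slave-type bridge
nmcli con up br-prov
```

Each of the networks listed above gets its own bridge, so a VM's virtual NIC simply attaches to the bridge for the network it needs.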
Node type 802.1q tagging information
The solution is designed so that different classes of network traffic are segregated from one another. This is accomplished by using 802.1q VLAN tagging for the different segments. Table 11: OpenStack node type to network 802.1q tagging, Table 12: OpenStack Compute Node for xSP and CSP profile to network 802.1q tagging, and Table 13: Storage node type to network 802.1q tagging summarize this tagging. This segregation is independent of network speed and applies to the 25GbE configuration.
Table 11: OpenStack node type to network 802.1q tagging

| Network | Solution Admin Host | OpenStack controller | Red Hat Ceph storage |
|---|---|---|---|
| External Network VLAN for Tenants (Floating IP Network) | Not Connected | Connected, Tagged | Not Connected |
| iDRAC physical connection to the Management/OOB VLAN | Connected, Untagged | Connected, Untagged | Connected, Untagged |
| Internal Networks VLAN for Tenants | Not Connected | Connected, Tagged | Not Connected |
| Management/OOB Network VLAN | Connected, Tagged | Not Connected | Not Connected |
| Private API Network VLAN | Connected, Tagged | Connected, Tagged | Not Connected |
| Provisioning VLAN | Connected, Tagged | Connected, Untagged | Connected, Untagged |
| Public API Network VLAN | Connected, Tagged | Connected, Tagged | Not Connected |
| Storage Clustering VLAN | Not Connected | Not Connected | Connected, Tagged |
| Storage Network VLAN | Connected, Tagged | Connected, Tagged | Connected, Tagged |
| Tenant Tunnel Network | Not Connected | Connected, Tagged | Not Connected |
Table 12: OpenStack Compute Node for xSP and CSP profile to network 802.1q tagging

| Network | xSP OpenStack compute | CSP OpenStack compute (NFV) |
|---|---|---|
| External Network VLAN for Tenants (Floating IP Network) | Not Connected | Connected, Tagged |
| iDRAC physical connection to the Management/OOB VLAN | Connected, Untagged | Connected, Untagged |
| Internal Networks VLAN for Tenants | Connected, Tagged | Connected, Tagged |
| Management/OOB Network VLAN | Not Connected | Not Connected |
| Private API Network VLAN | Connected, Tagged | Connected, Tagged |
| Provisioning VLAN | Connected, Untagged | Connected, Untagged |
| Public API Network VLAN | Not Connected | Not Connected |
| Storage Clustering VLAN | Not Connected | Not Connected |
| Storage Network VLAN | Connected, Tagged | Connected, Tagged |
| Tenant Tunnel Network | Connected, Tagged | Connected, Tagged |
Table 13: Storage node type to network 802.1q tagging

| Network | Dell EMC Unity | Dell EMC SC series storage Enterprise Manager | Dell EMC SC series storage array |
|---|---|---|---|
| External Network VLAN for Tenants (Floating IP Network) | Connected, Tagged | Not Connected | Not Connected |
| iDRAC physical connection to the Management/OOB VLAN | Not Connected | Not Connected | Not Connected |
| Internal Networks VLAN for Tenants | Connected, Tagged | Not Connected | Not Connected |
| Management/OOB Network VLAN | Not Connected | Not Connected | Not Connected |
| Provisioning VLAN | Not Connected | Not Connected | Not Connected |
| Private API Network VLAN | Not Connected | Not Connected | Not Connected |
| Public API Network VLAN | Not Connected | Connected, Untagged | Not Connected |
| Storage Network VLAN | Connected, Untagged | Connected, Untagged | Connected, Untagged |
| Storage Clustering VLAN | Not Connected | Not Connected | Not Connected |
| Tenant Tunnel Network | Not Connected | Not Connected | Not Connected |
HCI node type to network 802.1q tagging

| Network | OpenStack controller | OpenStack compute |
|---|---|---|
| External Network VLAN for Tenants (Floating IP Network) | Not Connected | Connected, Tagged |
| iDRAC physical connection to the Management/OOB VLAN | Connected, Untagged | Connected, Untagged |
| Internal Networks VLAN for Tenants | Connected, Tagged | Connected, Tagged |
| Management/OOB Network VLAN | Not Connected | Not Connected |
| Private API Network VLAN | Connected, Tagged | Connected, Tagged |
| Provisioning VLAN | Connected, Tagged | Connected, Untagged |
| Public API Network VLAN | Not Connected | Not Connected |
| Storage Clustering VLAN | Not Connected | Not Connected |
| Storage Network VLAN | Connected, Tagged | Connected, Tagged |
| Tenant Tunnel Network | Connected, Tagged | Connected, Tagged |
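A "Connected, Tagged" cell in the tables above means the node reaches that network through an 802.1q subinterface on its NIC. A minimal sketch of how such a subinterface is created manually; the NIC name (`em1`), VLAN ID (201), and address are illustrative, not values from this solution:

```shell
# Illustrative only: create an 802.1q tagged subinterface for VLAN 201
# on NIC em1, assign it an address, and bring it up.
ip link add link em1 name em1.201 type vlan id 201
ip addr add 192.0.2.10/24 dev em1.201
ip link set dev em1.201 up
# "Connected, Untagged" rows instead use the NIC directly: traffic
# leaves untagged, and the switch port's native VLAN provides isolation.
```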
Solution Red Hat Ceph storage configuration
The Red Hat Ceph storage cluster provides data protection through replication, block device cloning, and snapshots. By default, data is striped across the entire cluster, with three replicas of each data entity. The number of storage nodes in a single cluster can scale to hundreds of nodes and many petabytes in size.
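With three replicas of each data entity, usable capacity is roughly one third of raw capacity. A small illustrative calculation (the node count, drive count, and drive size are hypothetical, not sizing guidance for this solution):

```python
def usable_capacity_tb(nodes: int, drives_per_node: int,
                       drive_tb: float, replicas: int = 3) -> float:
    """Rough usable capacity of a replicated Ceph cluster, in TB.

    Ignores overheads such as full-ratio headroom; illustrative only.
    """
    raw_tb = nodes * drives_per_node * drive_tb
    return raw_tb / replicas

# e.g. 6 storage nodes x 12 drives x 4 TB with 3x replication
print(usable_capacity_tb(6, 12, 4.0))  # -> 96.0 TB usable of 288 TB raw
```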
Red Hat Ceph storage considers the physical placement (position) of storage nodes within defined fault domains (for example, rack, row, and data center) when deciding how data is replicated. This reduces the probability that a given failure results in the loss of more than one data replica.
The Red Hat Ceph storage cluster services include:
- Ceph Dashboard—A web-based Ceph monitoring tool hosted on the Controllers.
- RADOS Gateway—Object storage gateway.
- Object Storage Daemon (OSD)—Running on storage nodes, the OSD serves data to the Red Hat Ceph storage clients from disks on the storage nodes. Generally, there is one OSD process per disk drive.
- Monitor (MON)—Running on Controller nodes, the MON process is used by the Red Hat Ceph storage clients and internal Red Hat Ceph storage processes to determine the composition of the cluster and where data is located. There should be a minimum of three MON processes for the Red Hat Ceph storage cluster, and the total number of MON processes should be odd.
- Ceph Manager Daemon (ceph-mgr)—Running on Controller nodes alongside the MON processes, it provides additional monitoring and interfaces to external monitoring and management systems.
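The services above can be inspected with the standard Ceph CLI. These are illustrative queries against a running cluster, run on a node with admin credentials; they are not specific to this solution's tooling:

```shell
ceph -s            # overall cluster health, MON quorum, OSD counts
ceph mon stat      # MON membership and current quorum
ceph osd tree      # OSDs grouped by host/rack per the CRUSH hierarchy
ceph mgr services  # endpoints exposed by ceph-mgr (e.g. the dashboard)
```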
The Storage Network VLAN is described in the Red Hat Ceph storage documentation as the public network. The Storage Cluster Network VLAN is described in the Red Hat Ceph storage documentation as the cluster network.
This solution uses Red Hat Ceph Storage 4, a Ceph distribution with production-level support from Red Hat, which also includes the Red Hat Ceph Storage Dashboard VM. The Red Hat Ceph Storage Dashboard also includes Red Hat Ceph storage troubleshooting and servicing tools and utilities, and is installed on the Controllers. Note that:
- The SAH must have access to the Controller and Storage nodes through the Private API Access VLAN in order to manage Red Hat Ceph storage.
- The Controller nodes must have access to the Storage nodes through the Storage Network VLAN in order for the MON processes on the Controller nodes to be able to query the Red Hat Ceph storage MON processes, for the cluster state and configuration.
- The Compute nodes must have access to the Storage nodes through the Storage Network VLAN in order for the Red Hat Ceph storage client on that node to interact with the storage nodes, OSDs, and the Red Hat Ceph storage MON processes.
- The Storage nodes must have access to the Storage Network VLAN, as previously stated, and to the Storage Cluster Network VLAN.
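The VLAN-to-Ceph-network mapping described above can be sketched as a `ceph.conf` fragment. The subnets are placeholders, not addresses from this solution:

```ini
[global]
# Storage Network VLAN = Ceph "public network" (clients and MONs)
public_network = 192.0.2.0/24
# Storage Cluster Network VLAN = Ceph "cluster network" (OSD replication)
cluster_network = 198.51.100.0/24
```

Keeping OSD replication traffic on a separate cluster network prevents rebalancing from competing with client I/O on the public network.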