DCN, as part of 16.1, leverages OpenStack features such as Availability Zones (AZs) and provisioning over routed L3 networks with Ironic to enable deployment of compute nodes to remote locations. For example, a service provider may deploy several DCN sites to scale out a virtual Radio Access Network (vRAN) implementation.
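Pinning a workload to a specific DCN site is done by naming that site's availability zone at boot time. The following is a minimal sketch using the openstacksdk Python client; the cloud name (`central`), the availability zone (`dcn-site-1`), and the flavor, image, and network names are hypothetical placeholders, not values from this guide.

```python
import openstack

# Connect with credentials from clouds.yaml; the cloud name is an assumption.
conn = openstack.connect(cloud="central")

# Scheduling to a DCN site is driven by its availability zone name.
server = conn.compute.create_server(
    name="edge-vm-1",
    flavor_id=conn.compute.find_flavor("m1.small").id,
    image_id=conn.compute.find_image("rhel-guest").id,
    networks=[{"uuid": conn.network.find_network("provider-vlan-101").id}],
    availability_zone="dcn-site-1",  # hypothetical AZ for a remote site
)
conn.compute.wait_for_server(server)
```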
DCN has several caveats that must be considered when planning remote compute site deployment(s):
- Only Compute can be run at an Edge site; other services, such as persistent block storage, are not supported.
- Image considerations - Overcloud images for bare-metal provisioning of the remote compute nodes are pulled from the undercloud. Instance images for VMs running on those nodes are fetched from the control plane the first time they are used; subsequent instances use the locally cached image. Images are large files, so a fast, reliable connection to the Red Hat Director node and control plane is required.
- Networking:
- Latency - a round trip between the control plane and a remote site must take under 100 ms, or the stability of the system can become compromised (see the latency probe sketch after this list).
- Drop-outs - If a site temporarily loses its connection to the control plane, no OpenStack control plane API or CLI operations can be executed for that site until connectivity is restored. Existing workloads continue to run, but no new instances can be started until the connection returns. Control functions such as snapshotting and live migration also cannot occur until the link between the central cloud and the site is restored, because all control features depend on the control plane being able to communicate with the site (a monitoring sketch follows this list).
Note: Connectivity issues are specific to each DCN site. Losing the connection to one DCN site does not affect other DCN sites.
- This guide recommends using provider networks for DCN workloads at this time. Depending on the type of workloads running on the nodes and your existing networking policies, there are several ways to configure instance IP addressing:
- Static IPs using config-drive in conjunction with cloud-init - Using config-drive in place of the metadata API server leverages the virtual media capabilities of Nova, which means no Neutron metadata agents or DHCP relay are required to assign an IP address to instances (a boot sketch follows this list).
- DHCP relay - Forwards DHCP requests to Neutron at the central site.
Note: A separate DHCP relay instance is required for each provider network.
- External DHCP server at the site - in this case, instance IP addresses are not managed by Neutron.
- Inter-compute node awareness - A limitation of Neutron is that it cannot identify individual compute nodes as remote or local. Therefore, each compute node across all DCN sites, including the central cloud, has a list of every other compute node. Depending on your networking configuration, this happens in one of the following ways:
- Using VXLAN - First, the same Neutron networks must be configured at every site; then every compute node builds a VXLAN tunnel (through the control plane) to all controllers and compute nodes, regardless of whether they are remote or local.
- Using VLAN only - This method requires that identical network bridges and VLANs are used across all sites.
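As a quick check of the latency requirement above, the sketch below times TCP connections to a control plane endpoint and compares the worst round trip against the 100 ms budget. The hostname and port are assumptions (Keystone's default public port is used here); substitute your own control plane endpoint.

```python
import socket
import time

CONTROL_PLANE = "keystone.central.example.com"  # hypothetical endpoint
PORT = 5000        # Keystone default; adjust to your deployment
SAMPLES = 10

rtts = []
for _ in range(SAMPLES):
    start = time.monotonic()
    # A TCP connect approximates one network round trip closely enough
    # for a coarse check against the 100 ms DCN budget.
    with socket.create_connection((CONTROL_PLANE, PORT), timeout=1.0):
        pass
    rtts.append((time.monotonic() - start) * 1000.0)

print(f"avg {sum(rtts) / len(rtts):.1f} ms, max {max(rtts):.1f} ms")
if max(rtts) >= 100.0:
    print("WARNING: round trip exceeds the 100 ms requirement")
```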
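To see how a dropped site surfaces on the control plane, the following sketch polls the Nova compute service list; nova-compute services at an unreachable site report a state of "down" while their workloads keep running locally. This again assumes openstacksdk and a hypothetical clouds.yaml entry named `central`.

```python
import openstack

conn = openstack.connect(cloud="central")

# nova-compute services at a disconnected DCN site show state "down".
for svc in conn.compute.services():
    if svc.binary == "nova-compute":
        marker = "OK" if svc.state == "up" else "UNREACHABLE"
        print(f"{svc.host:<30} {marker}")
```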
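The config-drive addressing option above can be sketched as follows: a Neutron port is created with a static fixed IP on the provider network, and the instance boots with a config drive so cloud-init reads its network metadata from the attached drive rather than from a metadata agent or DHCP. All resource names and the 192.0.2.50 address are hypothetical.

```python
import openstack

conn = openstack.connect(cloud="central")

net = conn.network.find_network("provider-vlan-101")
subnet = conn.network.find_subnet("provider-vlan-101-subnet")

# Pre-allocate the instance's static address on the provider network.
port = conn.network.create_port(
    network_id=net.id,
    fixed_ips=[{"subnet_id": subnet.id, "ip_address": "192.0.2.50"}],
)

server = conn.compute.create_server(
    name="edge-vm-static-ip",
    flavor_id=conn.compute.find_flavor("m1.small").id,
    image_id=conn.compute.find_image("rhel-guest").id,
    networks=[{"port": port.id}],
    availability_zone="dcn-site-1",
    # Metadata (including network_data.json) is served from the config
    # drive, so no metadata agent or DHCP relay is needed at the site.
    config_drive=True,
)
```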