- Use Cases
- Workload Planning
- Data Center Requirements
  - Rack Space
  - Data Center Infrastructure
  - Data Center Services
- Remote Sites (if applicable)
  - WAN
- Licensing
  - Licenses
- Credentials
- VCF on VxRail Configuration Settings
  - Reserve VLANs
  - Reserve Hostnames
  - Passwords
- Prepare Data Center Services
  - Prepare DNS
  - Prepare DHCP
  - Prepare Active Directory
  - Prepare Leaf Switches
  - Prepare Routing Services
  - External Storage (if applicable)
|
Use these tables to obtain footprint estimates of the resources for Cloud Foundation on VxRail.
Base virtual machines deployed in every Cloud Foundation Management workload domain:

| Domain | Component | vCPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|---|
| Management | SDDC Manager | 4 | 19 | 800 |
| Management | vCenter | 4 | 19 | 694 |
| Management | NSX-T Manager 1 | 6 | 24 | 300 |
| Management | NSX-T Manager 2 | 6 | 24 | 300 |
| Management | NSX-T Manager 3 | 6 | 24 | 300 |
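As a quick sanity check during workload planning, the base management footprint above can be totaled with a short script. The figures below are copied directly from the table; adjust them if your release uses different appliance sizes.

```python
# Base management workload domain VMs as (vCPUs, memory GB, storage GB),
# copied from the sizing table above.
base_vms = {
    "SDDC Manager": (4, 19, 800),
    "vCenter": (4, 19, 694),
    "NSX-T Manager 1": (6, 24, 300),
    "NSX-T Manager 2": (6, 24, 300),
    "NSX-T Manager 3": (6, 24, 300),
}

# Total each resource column across all base VMs.
vcpus = sum(v[0] for v in base_vms.values())
memory_gb = sum(v[1] for v in base_vms.values())
storage_gb = sum(v[2] for v in base_vms.values())

print(f"Base management footprint: {vcpus} vCPUs, "
      f"{memory_gb} GB RAM, {storage_gb} GB storage")
# → Base management footprint: 26 vCPUs, 110 GB RAM, 2394 GB storage
```

The same dictionary can be extended with the optional components from the tables that follow (additional vCenters, large NSX-T Managers, edge nodes) to estimate the full management domain footprint.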
A vCenter instance is deployed in the Cloud Foundation Management workload domain for each VxRail cluster deployed to support a Cloud Foundation VI workload domain. The default size is 'Medium', which can manage up to 64 nodes.

| Domain | Component | vCPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|---|
| Management | vCenter | 8 | 28 | 908 |
NSX-T edge gateways are deployed in the Cloud Foundation VI workload domain to support NSX-T networking. The default size is 'Medium'.

| Domain | Component | vCPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|---|
| Workload | NSX-T Edge 1 | 4 | 8 | 200 |
| Workload | NSX-T Edge 2 | 4 | 8 | 200 |
The 'Large' size is recommended if load balancing is a requirement, or if NSX-T edge services will be shared with many other VI workload domains.

| Domain | Component | vCPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|---|
| Workload | NSX-T Edge 1 | 8 | 32 | 200 |
| Workload | NSX-T Edge 2 | 8 | 32 | 200 |
NSX-T Managers are deployed in the Cloud Foundation Management workload domain for each Cloud Foundation VI workload domain that does not use a shared NSX-T management instance. The default size is 'Large', which supports more than 64 nodes.

| Domain | Component | vCPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|---|
| Management | NSX-T Manager 1 | 12 | 48 | 300 |
| Management | NSX-T Manager 2 | 12 | 48 | 300 |
| Management | NSX-T Manager 3 | 12 | 48 | 300 |
For VI workload domains for Kubernetes, the same 'Large' size is deployed in the Cloud Foundation Management workload domain for each Cloud Foundation VI workload domain that does not use a shared NSX-T management instance.

| Domain | Component | vCPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|---|
| Management | NSX-T Manager 1 | 12 | 48 | 300 |
| Management | NSX-T Manager 2 | 12 | 48 | 300 |
| Management | NSX-T Manager 3 | 12 | 48 | 300 |
A size of 'Medium' can be considered for deployments of fewer than 64 nodes.

| Domain | Component | vCPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|---|
| Management | NSX-T Manager 1 | 6 | 24 | 300 |
| Management | NSX-T Manager 2 | 6 | 24 | 300 |
| Management | NSX-T Manager 3 | 6 | 24 | 300 |
NSX-T Global Managers are deployed in one of the Cloud Foundation Management workload domains to support NSX-T Federation across regions. The size of the virtual machines depends on the size of the federation under management, either Medium or Large.

| Domain | Component | vCPUs (Medium/Large) | Memory GB (Medium/Large) | Storage (GB) |
|---|---|---|---|---|
| Management | NSX-T Global Manager 1 | 6 / 12 | 24 / 48 | 300 |
| Management | NSX-T Global Manager 2 | 6 / 12 | 24 / 48 | 300 |
| Management | NSX-T Global Manager 3 | 6 / 12 | 24 / 48 | 300 |
The following table lists the sizing to prepare for a vRealize Suite Lifecycle Manager download and deployment.
Note: Cloud Foundation on VxRail does not automate the deployment or the life cycle management of the other vRealize Suite components. vRealize Suite Lifecycle Manager is used to deploy and manage those components.
| Domain | Component | vCPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|---|
| Management | vRealize Suite Lifecycle Manager | 2 | 6 | 78* + 100** |
* Initial deployment
** Added for product binaries
The following table lists the sizing to prepare for the deployment of a vSphere with Tanzu workload domain.

| Domain | Component | vCPUs | Memory (GB) | Storage (GB) |
|---|---|---|---|---|
| Workload | Supervisor Cluster control plane | 12 | 48 | 200 |
| Workload | Registry Service | 7 | 7 | 200 |
| Workload | NSX-T Edge 1 | 16 | 64 | 400 |
| Workload | NSX-T Edge 2 | 16 | 64 | 400 |
| Workload | Tanzu Kubernetes Cluster control plane | 6 | 12 | 48 |
| Workload | Tanzu Kubernetes Cluster worker nodes | 6 | 12 | 48 |
The following are the core VLANs that must be configured on the data center switches supporting the Cloud Foundation on VxRail platform.

| Category | Name | Description | Routable | Subnet size |
|---|---|---|---|---|
| VxRail | External Management | VxRail and Cloud Foundation components | Yes | 1 per management component |
| VxRail | Internal Management | VxRail device discovery | No | N/A |
| VxRail | vMotion | Virtual machine mobility | Yes, if routed vMotion is required | 1 per host |
| VxRail | vSAN | vSphere storage | Yes, for stretched cluster | 1 per host |
| NSX-T | Host Overlay | NSX-T host overlay network | Yes; must route to Edge Overlay | 2 per host |
| NSX-T Edge | NSX-T Uplinks | Two uplink VLANs for BGP peering of edge gateway and physical switch | Yes; enables BGP peering of edge gateway and physical switch | 2 per edge node |
| NSX-T Edge | Edge Overlay | NSX-T edge overlay network | Yes; must route to Host Overlay | 1 per edge node |
| Node | iDRAC | PowerEdge out-of-band management | Yes | 1 per host |
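When reserving the External Management subnet, it is worth confirming that the chosen prefix actually holds every management component plus the ESXi hosts and the gateway. A minimal sketch using Python's standard `ipaddress` module; the /27 prefix and the component and host counts below are hypothetical placeholders, not values from this guide.

```python
import ipaddress

# Hypothetical reservation for the External Management VLAN.
mgmt_subnet = ipaddress.ip_network("172.16.10.0/27")

# One address per management component (see the sizing tables),
# plus the ESXi hosts themselves and the subnet gateway.
components = 10   # e.g. SDDC Manager, vCenters, NSX-T Managers, VxRail Manager
esxi_hosts = 8
gateway = 1

needed = components + esxi_hosts + gateway
usable = mgmt_subnet.num_addresses - 2  # exclude network and broadcast addresses

print(f"{usable} usable addresses, {needed} needed -> "
      f"{'OK' if usable >= needed else 'too small'}")
```

The same check applies to the vMotion and vSAN pools (1 address per host) and the host overlay pool (2 addresses per host).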
The following table lists the configuration settings required by VxRail Manager to deploy a VxRail cluster.

| Category | Detail | Description |
|---|---|---|
| VxRail | Management Network | VxRail and Cloud Foundation management network subnet. Must be large enough for all VxRail and Cloud Foundation management components. |
| VxRail | External Management | VLAN ID for the management network that passes upstream from the top-of-rack switches |
| VxRail | Internal Management | VLAN ID for VxRail device discovery. This network stays isolated on the top-of-rack switches. The default VLAN ID is 3939. |
| System | Global settings | Time zone |
| System | Global settings | NTP server(s) IP address |
| System | Global settings | DNS server(s) IP address |
| System | Global settings | Syslog server |
| Management | System-generated ESXi hostnames | ESXi hostname prefix |
| Management | System-generated ESXi hostnames | Separator |
| Management | System-generated ESXi hostnames | Iterator |
| Management | System-generated ESXi hostnames | Offset |
| Management | System-generated ESXi hostnames | Suffix |
| Management | System-generated ESXi hostnames | Domain |
| Management | Customer-supplied hostnames | Hostname 1 |
| Management | Customer-supplied hostnames | Hostname 2 |
| Management | Customer-supplied hostnames | Hostname 3 |
| Management | Customer-supplied hostnames | Hostname 4 |
| Management | ESXi IP addresses | Can be individual IP addresses or in sequence |
| Management | vCenter Server | VxRail vCenter Server hostname |
| Management | vCenter Server | VxRail vCenter Server IP address |
| Management | VxRail Manager | VxRail hostname |
| Management | VxRail Manager | VxRail IP address |
| Management | Networking | Subnet mask |
| Management | Networking | Gateway |
| vMotion | | IP address pool for vMotion |
| vMotion | | Gateway |
| vMotion | | Subnet mask |
| vMotion | | VLAN ID |
| vSAN | | IP address pool for vSAN |
| vSAN | | Subnet mask |
| vSAN | | VLAN ID |
| Dell Node | iDRAC | IP address for iDRAC port on each VxRail node |
The following table applies to stretched clusters only.

| Category | Detail | Description |
|---|---|---|
| Witness | Management | Hostname |
| Witness | Management | IP address |
| Witness | Management | Subnet mask |
| Witness | Management | Gateway |
| Witness | vSAN | IP address |
| Witness | vSAN | Subnet mask |
| Witness | vSAN | Gateway |
| Witness Site | vSphere Host | IP address |
| Network | Witness Traffic Separation | Optional VLAN ID to manage traffic between the two sites hosting VxRail nodes and the witness site |
| Network | vMotion | IP address pool for vMotion |
| Network | vMotion | Subnet mask |
| Network | vMotion | VLAN ID |
| Network | vSAN | IP address pool for vSAN |
| Network | vSAN | Subnet mask |
| Network | vSAN | VLAN ID |
| Network | VxLAN | IP address pool for VxLAN |
| Network | VxLAN | Subnet mask |
| Network | VxLAN | VLAN ID |
This table lists the configuration settings required by Cloud Builder to deploy the Cloud Foundation management workload domain on the VxRail cluster platform.

| Category | Detail | Description |
|---|---|---|
| Cloud Builder | IP Address | Temporary address for the Cloud Builder virtual appliance |
| Cloud Builder | NTP | IP address |
| Cloud Builder | DNS | IP address |
| Cloud Builder | SSO Site Name | Must be the same site name as used for the VxRail cluster |
| Cloud Builder | SSO Domain | |
| Cloud Builder | DNS Zone Name | |
| SDDC | Manager | Hostname |
| SDDC | Manager | IP address |
| SDDC | Manager | Domain name |
| NSX-T | Manager (VIP) | Hostname |
| NSX-T | Manager (VIP) | IP address |
| NSX-T | Manager Node 1 | Hostname |
| NSX-T | Manager Node 1 | IP address |
| NSX-T | Manager Node 2 | Hostname |
| NSX-T | Manager Node 2 | IP address |
| NSX-T | Manager Node 3 | Hostname |
| NSX-T | Manager Node 3 | IP address |
| NSX-T | Appliance Size | Small, Medium, or Large |
| NSX-T | Static IP Assignment Method | Name of static IP address pool |
| NSX-T | Static IP Assignment Method | IP address range in CIDR format |
| NSX-T | Static IP Assignment Method | Starting IP address to be assigned for host overlay network |
| NSX-T | Static IP Assignment Method | Ending IP address to be assigned for host overlay network |
| NSX-T | Static IP Assignment Method | Gateway |
| NSX-T | Dynamic IP Assignment Method | IP address of DHCP server to assign IP addresses to VTEP tunnel endpoints for host overlay network |
| NSX-T | Dynamic IP Assignment Method | Range of IP addresses in DHCP server to be assigned to VTEP tunnel endpoints for host overlay network |
| vSphere Objects | Data Center Name | Must match value in VxRail cluster |
| vSphere Objects | Cluster Name | Must match value in VxRail cluster |
| vSphere Objects | VxRail Distributed Switch Name(s) | Must match values used in VxRail cluster |
| vSphere Objects | NSX-T Distributed Switch Name | If deploying a separate VDS for NSX-T networking; the VDS name must be unique in the VxRail cluster |
| vSphere Objects | vSAN Datastore Name | Must match value used in VxRail cluster |
| vSphere Resource Pools | SDDC Management | Required for consolidated architecture |
| vSphere Resource Pools | SDDC Edge | Required for consolidated architecture |
| vSphere Resource Pools | User Edge | Required for consolidated architecture |
| vSphere Resource Pools | User VM | Required for consolidated architecture |
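Before entering the static IP pool for the host overlay network, it helps to confirm that the starting and ending addresses and the gateway actually fall inside the pool's CIDR, and that the pool is big enough for two addresses per host. A small sketch using the standard `ipaddress` module; every address below is a placeholder, not a value from this guide.

```python
import ipaddress

# Placeholder values for the host overlay static IP pool.
cidr = ipaddress.ip_network("192.168.50.0/24")
start = ipaddress.ip_address("192.168.50.10")
end = ipaddress.ip_address("192.168.50.200")
gateway = ipaddress.ip_address("192.168.50.1")

# Sanity checks Cloud Builder's validation would also catch.
assert start in cidr and end in cidr, "pool range must sit inside the CIDR"
assert gateway in cidr, "gateway must sit inside the CIDR"
assert start < end, "starting address must precede ending address"

pool_size = int(end) - int(start) + 1
hosts_supported = pool_size // 2  # the VLAN table calls for 2 addresses per host
print(f"Pool holds {pool_size} addresses, enough for {hosts_supported} hosts")
```

The same check applies to the VI workload domain host overlay pool captured in the next table.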
This table lists the configuration settings required to support the configuration of a standard VI workload domain by SDDC Manager.

| Category | Detail | Description |
|---|---|---|
| Global | Domain Name | |
| Global | Datacenter Name | Name of vSphere data center to be configured in the vCenter instance supporting the VI workload domain |
| Global | NTP | IP address or pool URL |
| Global | DNS | IP address |
| Global | SSO Site Name | |
| Global | SSO Domain | Can join the management domain or a new domain |
| NSX-T Host Overlay Network | Static IP Assignment Method | Name of static IP address pool |
| NSX-T Host Overlay Network | Static IP Assignment Method | IP address range in CIDR format |
| NSX-T Host Overlay Network | Static IP Assignment Method | Starting IP address to be assigned for host overlay network |
| NSX-T Host Overlay Network | Static IP Assignment Method | Ending IP address to be assigned for host overlay network |
| NSX-T Host Overlay Network | Static IP Assignment Method | Gateway |
| NSX-T Host Overlay Network | Dynamic IP Assignment Method | IP address of DHCP server to assign IP addresses to VTEP tunnel endpoints for host overlay network |
| NSX-T Host Overlay Network | Dynamic IP Assignment Method | Range of IP addresses in DHCP server to be assigned to VTEP tunnel endpoints for host overlay network |
A workload domain can either join an existing NSX-T instance or configure a new NSX-T instance. Use this table only if a new NSX-T management instance is deployed as part of the VI workload domain.

| Category | Detail | Description |
|---|---|---|
| NSX-T ASN | Autonomous System Number | ASN for the workload domain edge cluster |
| NSX-T Manager | NSX-T management cluster | IP address of NSX-T Manager VIP |
| NSX-T Manager | NSX-T management cluster | IP address for first NSX-T Manager |
| NSX-T Manager | NSX-T management cluster | IP address for second NSX-T Manager |
| NSX-T Manager | NSX-T management cluster | IP address for third NSX-T Manager |
| NSX-T Manager | NSX-T management cluster | Subnet mask |
| NSX-T Manager | NSX-T management cluster | Default gateway |
| NSX-T Edge Node 1 | Name | Hostname of virtual appliance |
| NSX-T Edge Node 1 | Management IP Address | Must be within workload domain management network subnet |
| NSX-T Edge Node 1 | Uplink 1 IP Address | IP address for BGP peering on first NSX-T edge uplink VLAN for this workload domain |
| NSX-T Edge Node 1 | Uplink 2 IP Address | IP address for BGP peering on second NSX-T edge uplink VLAN for this workload domain |
| NSX-T Edge Node 1 | Overlay IP Address | IP address for overlay network between edge nodes in this workload domain |
| NSX-T Edge Node 2 | Name | Hostname of virtual appliance |
| NSX-T Edge Node 2 | Management IP Address | Must be within workload domain management network subnet |
| NSX-T Edge Node 2 | Uplink 1 IP Address | IP address for BGP peering on first NSX-T edge uplink VLAN for this workload domain |
| NSX-T Edge Node 2 | Uplink 2 IP Address | IP address for BGP peering on second NSX-T edge uplink VLAN for this workload domain |
| NSX-T Edge Node 2 | Overlay IP Address | IP address for overlay network between edge nodes in this workload domain |
| NSX-T Edge Uplink 1 | VLAN | Used for BGP peering with upstream routing services for this workload domain |
| NSX-T Edge Uplink 2 | VLAN | Used for BGP peering with upstream routing services for this workload domain |
| NSX-T Edge Overlay Network | VLAN | Used for edge overlay network connecting NSX-T edge nodes in this workload domain |
If a new edge cluster is deployed, use this table to capture the BGP neighbors for the NSX-T edge gateway.

| Category | Detail | Description |
|---|---|---|
| External ASN | ASN Value | Autonomous System Number for external routers |
| External Router 1 | IP Address | IP address for peering with NSX-T edge gateway on first NSX-T edge uplink VLAN |
| External Router 1 | Password | Neighbor password for BGP peering |
| External Router 2 | IP Address | IP address for peering with NSX-T edge gateway on second NSX-T edge uplink VLAN |
| External Router 2 | Password | Neighbor password for BGP peering |
Use this table only if a vSphere with Tanzu supervisor cluster will be configured on the VI workload domain.

| Detail | Type | Description |
|---|---|---|
| Pod CIDRs | Internal | Used by Kubernetes pods that run in the cluster |
| Service CIDRs | Internal | Used by Kubernetes applications that need a service IP address |
| Ingress CIDRs | External | Used for load balancing |
| Egress CIDRs | External | Used for NAT endpoints |
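The four CIDR blocks above must not overlap one another. A quick pairwise overlap check with the standard `ipaddress` module; the ranges below are placeholders chosen for illustration, not recommended values.

```python
import ipaddress
from itertools import combinations

# Placeholder CIDRs for a supervisor cluster enablement.
cidrs = {
    "pods": ipaddress.ip_network("10.244.0.0/20"),
    "services": ipaddress.ip_network("10.96.0.0/22"),
    "ingress": ipaddress.ip_network("10.30.0.0/24"),
    "egress": ipaddress.ip_network("10.30.1.0/24"),
}

# Compare every pair of networks and fail loudly on any overlap.
for (name_a, net_a), (name_b, net_b) in combinations(cidrs.items(), 2):
    if net_a.overlaps(net_b):
        raise ValueError(f"{name_a} ({net_a}) overlaps {name_b} ({net_b})")

print("No overlapping CIDRs")
```

The same dictionary can also be checked against the management, vMotion, vSAN, and overlay subnets reserved earlier in this workbook.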
This table lists the configuration settings required to support deployment of the NSX-T edge gateways.

External Routers

| Category | Detail | Description |
|---|---|---|
| External ASN | ASN Value | Autonomous System Number for external routers |
| External Router 1 | IP Address | IP address for peering with NSX-T edge gateway on first NSX-T edge uplink VLAN |
| External Router 1 | Password | Neighbor password for BGP peering |
| External Router 2 | IP Address | IP address for peering with NSX-T edge gateway on second NSX-T edge uplink VLAN |
| External Router 2 | Password | Neighbor password for BGP peering |

NSX-T Edge Gateways

| Category | Detail | Description |
|---|---|---|
| Cluster | Name | Name of NSX-T edge cluster |
| Internal ASN | ASN Value | BGP Autonomous System Number for NSX-T edge gateways |
| Edge Node 1 | Name | Hostname of virtual appliance |
| Edge Node 1 | Management IP Address | Must be within management network subnet range |
| Edge Node 1 | Uplink 1 IP Address | IP address for BGP peering on first NSX-T edge uplink VLAN |
| Edge Node 1 | Uplink 2 IP Address | IP address for BGP peering on second NSX-T edge uplink VLAN |
| Edge Node 1 | Overlay IP Address | IP address for overlay network between edge nodes |
| Edge Node 2 | Name | Hostname of virtual appliance |
| Edge Node 2 | Management IP Address | Must be within management network subnet range |
| Edge Node 2 | Uplink 1 IP Address | IP address for BGP peering on first NSX-T edge uplink VLAN |
| Edge Node 2 | Uplink 2 IP Address | IP address for BGP peering on second NSX-T edge uplink VLAN |
| Edge Node 2 | Overlay IP Address | IP address for overlay network between edge nodes |
| Edge Gateway VLANs | Uplink 1 VLAN | First NSX-T edge uplink |
| Edge Gateway VLANs | Uplink 2 VLAN | Second NSX-T edge uplink |
| Edge Gateway VLANs | Edge Overlay VLAN | Used for edge overlay network connecting NSX-T edge nodes |

Second Site - External Routers

| Category | Detail | Description |
|---|---|---|
| External ASN | ASN Value | Autonomous System Number for external routers |
| External Router 1 | IP Address | IP address for peering with NSX-T edge gateway on first NSX-T edge uplink VLAN |
| External Router 1 | Password | Neighbor password for BGP peering |
| External Router 2 | IP Address | IP address for peering with NSX-T edge gateway on second NSX-T edge uplink VLAN |
| External Router 2 | Password | Neighbor password for BGP peering |
This table lists the configuration settings required to support deployment of the optional Application Virtual Network.

| Category | Detail | Description |
|---|---|---|
| Region A (Local Instance) | Logical Segment | Name of Region A logical segment |
| Region A (Local Instance) | VLAN | VLAN for Region A network |
| Region A (Local Instance) | IP Addresses | IP address range for Region A network |
| xRegion (Cross-Instance) | Logical Segment | Name of xRegion logical segment |
| xRegion (Cross-Instance) | VLAN | VLAN for xRegion network |
| xRegion (Cross-Instance) | IP Addresses | IP address range for xRegion network |
This sample syntax provides basic guidance on the settings that must be applied to the top-of-rack switches to configure VLANs and a switch port for a Cloud Foundation on VxRail deployment, and to configure switch support for BGP peering for the Application Virtual Network (AVN). The actual code required on the top-of-rack switches depends on the existing data center network infrastructure, the switch operating system, and routing standards.
The sample syntax highlights the following required items:

```
interface vlan <VxRail External Management>
  no shutdown
  ip address <gateway>/24
  vrrp-group <id>
    priority <priority>
    virtual-address <virtual gateway>

interface vlan <VxRail Internal Management>
  no shutdown
  ipv6 mld snooping querier

interface vlan <Host Overlay>
  description
  no shutdown
  mtu 9216
  ip address <gateway>/24
  ip helper-address <DHCP server IP address>
  vrrp-group <id>
    priority <priority>
    virtual-address <virtual gateway>

interface ethernet <port>
  no shutdown
  switchport mode trunk
  switchport access vlan <native vlan>
```
The sample syntax highlights the following required items:

```
interface vlan <VLAN for AVN uplink>
  no shutdown
  mtu 9216
  ip address <gateway IP address for AVN uplink>

ip prefix-list <Router-ESG route map name> permit <IP address range parameters>

router bgp <External ASN>
  maximum-paths ebgp 4
  router-id <external router ID>
  address-family ipv4 unicast
    redistribute connected route-map <Router-ESG route map name>
  template external-router-to-ESG
    advertisement-interval <value>
    password <password saved to Edge Gateways>
    timers 4 12
  neighbor <IP address assigned to first Edge Gateway>
    inherit template external-router-to-ESG
    remote-as <ASN assigned to Edge Gateways>
    no shutdown
  neighbor <IP address assigned to second Edge Gateway>
    inherit template external-router-to-ESG
    remote-as <ASN assigned to Edge Gateways>
    no shutdown
```
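Once the workbook values (edge uplink peering addresses, the edge gateway ASN, the BGP password) are settled, the two neighbor stanzas above can be rendered mechanically instead of typed twice. A hedged sketch: the values below are placeholders, and the output mirrors the sample syntax in this guide rather than any particular switch operating system.

```python
# Placeholder workbook values; substitute your captured settings.
edge_asn = 65003                                    # ASN assigned to Edge Gateways
edge_uplink_ips = ["172.27.11.2", "172.27.12.2"]    # edge peering IPs, one per uplink VLAN

def neighbor_stanza(ip: str, remote_as: int) -> str:
    """Render one BGP neighbor block in the style of the sample syntax above."""
    return "\n".join([
        f"neighbor {ip}",
        "  inherit template external-router-to-ESG",
        f"  remote-as {remote_as}",
        "  no shutdown",
    ])

# One stanza per edge uplink peering address.
config = "\n".join(neighbor_stanza(ip, edge_asn) for ip in edge_uplink_ips)
print(config)
```

Generating the stanzas from a single source of workbook values keeps the two neighbor blocks consistent, which matters because a mismatched remote-as or peering IP silently prevents the BGP session from establishing.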