We implemented a virtual network and a 25 GbE physical network to carry Oracle public, vMotion, and HammerDB benchmarking application traffic.
As the following figure shows, there is redundancy at every level for high availability and network load balancing:
At the physical database server level (R740-prod-n1 and R740-prod-n2), both 25 GbE network ports are available to carry Oracle public and vMotion traffic. The traffic is, however, logically separated at the vDS level by dedicated VLAN networks (VLAN 382 for Oracle public traffic, VLAN 400 for vMotion traffic) and dedicated DVUplink paths (Uplink 1 is active for Oracle public traffic, while Uplink 2 is active for vMotion traffic). Similarly, the application traffic (in this case, HammerDB benchmarking traffic) has its own dPG and active/active DVUplink paths to the dedicated management and application server (R630-Mgmt-App-Srvr).
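The VLAN and uplink-teaming separation described above could be scripted with VMware PowerCLI along the following lines. This is a minimal sketch, not the configuration used in this study: the vDS name (`Prod-vDS`), port-group names, and uplink labels are illustrative assumptions; only the VLAN IDs (382 and 400) and the active/standby roles come from the text.

```powershell
# Assumed vDS name -- replace with the actual distributed switch name.
$vds = Get-VDSwitch -Name "Prod-vDS"

# Oracle public traffic: VLAN 382, Uplink 1 active / Uplink 2 standby.
New-VDPortgroup -VDSwitch $vds -Name "dPG-Oracle-Public" -VlanId 382 |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink 1" -StandbyUplinkPort "Uplink 2"

# vMotion traffic: VLAN 400, Uplink 2 active / Uplink 1 standby.
New-VDPortgroup -VDSwitch $vds -Name "dPG-vMotion" -VlanId 400 |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink 2" -StandbyUplinkPort "Uplink 1"

# Application (HammerDB) traffic: both uplinks active (active/active).
New-VDPortgroup -VDSwitch $vds -Name "dPG-App" |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink 1","Uplink 2"
```

With this teaming arrangement, Oracle public and vMotion traffic each favor a different physical port under normal operation but can fail over to the other, preserving both load separation and redundancy.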
For performance reasons, we implemented end-to-end jumbo frames for vMotion traffic by: