Once the BDP is calculated and understood, these findings can be applied to tuning the TCP stack on the PowerScale cluster. Not all PowerScale clusters require TCP stack tuning; most environments perform well with the defaults, so alter the TCP stack only when a specific workflow needs improvement. Before applying any TCP changes, confirm the network is clean and reliable by checking for excessive retransmits, duplicate or fragmented packets, and broken pipes.
PowerScale OneFS is built on FreeBSD. A PowerScale cluster is composed of nodes in a distributed architecture, and each node provides external network connectivity. Adapting the TCP stack to the bandwidth, latency, and MTU of the environment ensures the cluster delivers optimal throughput.
The previous section explained BDP in depth: it is the amount of data that can be in flight on a single TCP flow. Even if the link supports the calculated BDP, the OneFS system buffer must be able to hold the full BDP; otherwise, TCP transmission failures might occur. If the buffer cannot accept all the data of a single BDP, the acknowledgment is delayed and workload performance degrades.
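As a quick refresher, the BDP can be computed from the link bandwidth and round-trip time. The values below are illustrative only; substitute the measured bandwidth and latency of the actual environment.

```shell
# BDP (bytes) = bandwidth (bits/s) x RTT (s) / 8
# Illustrative values: a 10 Gb/s link with a 2 ms round-trip time.
BANDWIDTH_BPS=10000000000   # link bandwidth in bits per second (10 Gb/s)
RTT_MS=2                    # measured round-trip time in milliseconds
BDP_BYTES=$(( BANDWIDTH_BPS * RTT_MS / 1000 / 8 ))
echo "BDP: ${BDP_BYTES} bytes"
```

With these sample numbers the BDP is 2,500,000 bytes (about 2.4 MiB), which is the minimum amount of data the system buffer must be able to hold for this flow.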
The OneFS network stack must be tuned so that, on inbound, the full BDP is accepted and, on outbound, the data is retained for possible retransmission. Before modifying the TCP stack, measure current I/O performance, then measure again after implementing the changes. Test this tuning guidance in a lab environment before modifying a production network.
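The inbound and outbound requirements above both reduce to the same check: the socket buffer ceiling must be at least the calculated BDP. A minimal sketch of that comparison is shown below; the buffer value is an illustrative placeholder, not a measured OneFS setting (on a live system, read the actual ceiling with `sysctl kern.ipc.maxsockbuf`).

```shell
# Compare the calculated BDP against the current socket buffer ceiling.
BDP_BYTES=2500000       # from the BDP calculation for this environment
MAXSOCKBUF=2097152      # placeholder for the kern.ipc.maxsockbuf value
if [ "$BDP_BYTES" -gt "$MAXSOCKBUF" ]; then
  echo "buffer too small: raise the socket buffer ceiling to at least ${BDP_BYTES} bytes"
else
  echo "buffer ceiling is sufficient for this BDP"
fi
```

If the ceiling is below the BDP, the TCP window can never open wide enough to fill the link, which is exactly the degradation described above.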
The following spreadsheet provides the necessary TCP stack changes based on bandwidth, latency, and MTU. Implement the changes in the order shown, together, and on all nodes; modifying only some of the variables can lead to unpredictable results. After making the changes, measure performance again.
Note: The snippet below is for representation only. It is imperative to input the calculated bandwidth, latency, and MTU values specific to each environment.
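The sketch below illustrates the general shape of such changes using standard FreeBSD TCP sysctls. The numeric values are placeholders, not recommendations, and the `isi_sysctl_cluster` wrapper shown for applying a sysctl cluster-wide is an assumption about the environment; verify the exact parameters and procedure with Dell support before applying anything.

```shell
# Illustrative only: derive the actual values from the BDP calculation
# for the specific environment. Each line sets one FreeBSD TCP sysctl
# across the cluster (isi_sysctl_cluster is assumed here).
isi_sysctl_cluster kern.ipc.maxsockbuf=16777216        # global socket buffer ceiling
isi_sysctl_cluster net.inet.tcp.sendbuf_max=16777216   # maximum send buffer size
isi_sysctl_cluster net.inet.tcp.recvbuf_max=16777216   # maximum receive buffer size
```

The ceiling (`kern.ipc.maxsockbuf`) must be raised first, since the per-connection send and receive maximums cannot exceed it.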