There is generally no special configuration required on ECS to support load balancing. ECS is not aware of any BIG-IP DNS or LTM systems; it is configured with, and concerned only with, ECS node IP addresses, not virtual addresses of any kind. Regardless of whether the data flow includes a traffic manager, each application that uses ECS will generally have access to one or more buckets within a namespace.
Each bucket belongs to a replication group, and it is the replication group that determines both the local and, potentially, global protection domain of its data as well as its accessibility. Local protection involves mirroring and erasure coding of data across the disks, nodes, and racks contained in an ECS storage pool. Geo-protection is available in replication groups configured across two or more federated VDCs; it extends the protection domain to include redundancy at the site level.
Buckets are generally configured for a single object API. A bucket can be an S3, Atmos, or Swift bucket, and each is accessed using the appropriate object API. As of ECS version 3.2, objects in the same bucket can be accessed using S3 and/or Swift. Buckets can also be file enabled. Enabling a bucket for file access provides additional bucket configuration options and allows applications to access objects using NFS and/or HDFS.
Application workflow planning with ECS is generally broken down to the bucket level. The ports associated with each object access method, along with the node IP addresses of each member of the bucket's local and remote ECS storage pools, are the targets for client application traffic, and this is the information required during LTM configuration. In ECS, data access is available via any node in any site that serves the bucket.
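For quick reference while planning LTM pools and virtual servers, the following is a minimal sketch of the default ECS service ports, paired with the Site 1 node addresses listed in the storage pool tables below. The port values are assumptions based on common ECS defaults and are not taken from this guide; confirm them against the ECS documentation for your release.

```python
# Sketch: default ECS object/file service ports commonly used as LTM pool
# members and monitor targets. These values are assumed defaults; verify them
# for your ECS release before building virtual servers.
ECS_SERVICE_PORTS = {
    "s3_http": 9020,
    "s3_https": 9021,
    "atmos_http": 9022,
    "atmos_https": 9023,
    "swift_http": 9024,
    "swift_https": 9025,
    "nfs_portmapper": 111,   # standard portmapper port
    "nfs_nfsd": 2049,        # standard NFSv3 server port
}

# Example: pool members for an S3 (HTTP) virtual server at Site 1, built from
# the node IPs shown in the storage pool table below.
SITE1_NODES = [f"192.168.101.{11 + n}" for n in range(5)]
s3_pool_members = [(ip, ECS_SERVICE_PORTS["s3_http"]) for ip in SITE1_NODES]
print(s3_pool_members)
```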
By directing application traffic to an F5 virtual address instead of directly to an ECS node, load balancing decisions can be made that support HA and offer the potential for improved utilization and performance of the ECS cluster.
The following tables provide the ECS configuration used for application access via S3 and NFS through the BIG-IP DNS and LTM devices as described in this document. Note that this configuration is sufficient whether applications connect directly to ECS nodes, connect to ECS nodes via LTMs, and/or are directed to LTMs by a BIG-IP DNS.
In our reference example, two five-node ECS VDCs were deployed and federated using the ECS Community Edition 3.0 software on top of the CentOS 7.x operating system inside a VMware ESXi lab environment.
Virtual systems were used so that readers can deploy a similar environment for testing and gain hands-on experience with the products. A critical difference between virtual ECS nodes and ECS appliances is that the primary and recommended method for monitoring an S3 service relies on the underlying ECS fabric layer, which is not present in virtual systems. Because of this, monitoring using the S3 Ping method is shown against physical ECS hardware in Appendix A, Creating a Custom S3 Monitor.
What follows are several tables with the ECS configuration used in our examples. Each table is preceded by a brief description.
Each of the two sites has a single storage pool that contains all five of the site's ECS nodes.
| Storage Pool (SP) | Site 1 (federated with Site 2) https://192.168.101.11/#/vdc//provisioning/storagePools |
| --- | --- |
| Name | s1-ecs1-sp1 |
| Nodes | ecs-1-1.kraft101.net (192.168.101.11), ecs-1-2.kraft101.net (192.168.101.12), ecs-1-3.kraft101.net (192.168.101.13), ecs-1-4.kraft101.net (192.168.101.14), ecs-1-5.kraft101.net (192.168.101.15) |

| Storage Pool (SP) | Site 2 (federated with Site 1) https://192.168.102.11/#/vdc//provisioning/storagePools |
| --- | --- |
| Name | s2-ecs1-sp1 |
| Nodes | ecs-2-1.kraft102.net (192.168.102.11), ecs-2-2.kraft102.net (192.168.102.12), ecs-2-3.kraft102.net (192.168.102.13), ecs-2-4.kraft102.net (192.168.102.14), ecs-2-5.kraft102.net (192.168.102.15) |
The first VDC is created at Site 1 after the storage pools have been initialized. To federate the sites, a VDC access key is copied from Site 2 and used at Site 1 to create the second VDC as well.
| Virtual Data Center (VDC) | Site 1 https://192.168.101.11/#/vdc//provisioning/virtualdatacenter |
| --- | --- |
| Name | s1-ecs1-vdc1 |
| Replication and Management Endpoints | 192.168.101.11, 192.168.101.12, 192.168.101.13, 192.168.101.14, 192.168.101.15 |
| | Site 2 |
| Name | s2-ecs1-vdc1 |
| Replication and Management Endpoints | 192.168.102.11, 192.168.102.12, 192.168.102.13, 192.168.102.14, 192.168.102.15 |
A replication group is created and populated with the two VDCs and their storage pools. Data stored using this replication group is protected both locally, at each site, and globally, through replication to the second site. Applications can access all data in the replication group via any of the nodes in either VDC's associated storage pool.
| Replication Group (RG) | https://192.168.101.11/#/vdc//provisioning/replicationGroups// |
| --- | --- |
| Name | ecs-rg1-all-sites |
| VDC: SP | s1-ecs1-vdc1: s1-ecs1-sp1; s2-ecs1-vdc1: s2-ecs1-sp1 |
A namespace is created and associated with the replication group. This namespace will be used for S3 and NFS traffic.
| Namespace | https://192.168.101.11/#/vdc//provisioning/namespace |
| --- | --- |
| Name | webapp1 |
| Replication group | ecs-rg1-all-sites |
| Access During Outage | Enabled |
An object user is required to access the namespace and is created as shown in Table 6 below.
| User | https://192.168.101.11/#/vdc//provisioning/users/object |
| --- | --- |
| Name | webapp1_user1 |
| NS | webapp1 |
| Object access | S3 |
| User key | Akc0GMp2W4jZyu/07A+HdRjLtamiRp2p8xp3at7b |
An NFS user and group are created as shown in Table 7 below. The NFS user is specifically tied to the object user and namespace created above in Table 6.
| File user/Group mapping | https://192.168.101.11/#/vdc//provisioning/file/ns1/userMapping/ |
| --- | --- |
| User | Name: webapp1_user1, ID: 1000, Type: User, NS: webapp1 |
| Group | Name: webapp1_group1, ID: 1000, Type: Group, NS: webapp1 |
A file-enabled S3 bucket is created inside the previously created namespace using the object user as the bucket owner. The bucket is associated with the namespace and replication group. Table 8 below shows the bucket configuration.
| Bucket | https://192.168.101.11/#/vdc//provisioning/buckets/ |
| --- | --- |
| Name | s3_webapp1 |
| NS: RG | webapp1: ecs-rg1-all-sites |
| Bucket owner | webapp1_user1 |
| File system | Enabled |
| Default bucket group | webapp1_group1 |
| Group file permissions | RWX |
| Group directory permissions | RWX |
| Access During Outage | Enabled |
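As a quick sanity check of the bucket and object user configuration above, the following is a minimal sketch that writes and lists an object using boto3. The endpoint shown (node ecs-1-1 on the assumed default S3 HTTP port 9020) is for illustration only; in a load-balanced deployment the endpoint would instead be the LTM virtual server address.

```python
# Sketch: verify S3 access to the file-enabled bucket using the object user
# created above. Requires boto3; the endpoint is an assumption (node ecs-1-1
# on the default S3 HTTP port 9020). Path-style addressing is forced because
# the bucket name contains an underscore.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.101.11:9020",
    aws_access_key_id="webapp1_user1",  # ECS object user name
    aws_secret_access_key="Akc0GMp2W4jZyu/07A+HdRjLtamiRp2p8xp3at7b",  # user key from Table 6
    config=Config(s3={"addressing_style": "path"}),
)

s3.put_object(Bucket="s3_webapp1", Key="dir1/hello.txt", Body=b"hello ecs")
for obj in s3.list_objects_v2(Bucket="s3_webapp1").get("Contents", []):
    print(obj["Key"], obj["Size"])
```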
To allow NFS clients to access the bucket, a file export is created as shown in Table 9 below.
| File export | https://10.10.10.101/#/vdc//provisioning/file//exports |
| --- | --- |
| Namespace | webapp1 |
| Bucket | s3_webapp1 |
| Export path | /webapp1/s3_webapp1 |
| Export host options | Host: *; Summary: rw,async,authsys |
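The export can then be mounted from a Linux NFS client. The following is a minimal sketch, wrapped in Python for consistency with the other examples; the server address (node ecs-1-1) and local mount point are assumptions, and in a load-balanced deployment the NFS virtual server address would be used instead. ECS file access is typically NFSv3, so version 3 is requested explicitly.

```python
# Sketch: mount the ECS NFS export from a Linux client (run as root).
# The server address (node ecs-1-1) and mount point are assumptions;
# substitute the NFS virtual server address in a load-balanced deployment.
import os
import subprocess

server = "192.168.101.11"          # ECS node or NFS virtual server address (assumed)
export = "/webapp1/s3_webapp1"     # export path from Table 9
mountpoint = "/mnt/webapp1"        # local mount point (assumed)

os.makedirs(mountpoint, exist_ok=True)
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=3", f"{server}:{export}", mountpoint],
    check=True,
)
print(subprocess.run(["df", "-h", mountpoint], capture_output=True, text=True).stdout)
```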
Table 10 below lists the DNS records for the ECS nodes. The required reverse lookup entries are not shown.
DNS records (corresponding reverse entries not shown but are required)

| DNS record entry | Record type | Record data | Comments |
| --- | --- | --- | --- |
| ecs-1-1.kraft101.net | A | 192.168.101.11 | Public interface node 1 ecs1 site1 |
| ecs-1-2.kraft101.net | A | 192.168.101.12 | Public interface node 2 ecs1 site1 |
| ecs-1-3.kraft101.net | A | 192.168.101.13 | Public interface node 3 ecs1 site1 |
| ecs-1-4.kraft101.net | A | 192.168.101.14 | Public interface node 4 ecs1 site1 |
| ecs-1-5.kraft101.net | A | 192.168.101.15 | Public interface node 5 ecs1 site1 |
| ecs-2-1.kraft102.net | A | 192.168.102.11 | Public interface node 1 ecs1 site2 |
| ecs-2-2.kraft102.net | A | 192.168.102.12 | Public interface node 2 ecs1 site2 |
| ecs-2-3.kraft102.net | A | 192.168.102.13 | Public interface node 3 ecs1 site2 |
| ecs-2-4.kraft102.net | A | 192.168.102.14 | Public interface node 4 ecs1 site2 |
| ecs-2-5.kraft102.net | A | 192.168.102.15 | Public interface node 5 ecs1 site2 |
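Because the reverse lookup entries are required as well, a quick check of both directions can save troubleshooting time later. The following sketch verifies forward (A) and reverse (PTR) resolution for each node using the system resolver; it uses only the hostnames and IP addresses listed in Table 10 above.

```python
# Sketch: verify forward (A) and reverse (PTR) resolution for each ECS node.
# Hostnames and IPs are taken from the DNS records table above.
import socket

NODES = {f"ecs-1-{n}.kraft101.net": f"192.168.101.{10 + n}" for n in range(1, 6)}
NODES.update({f"ecs-2-{n}.kraft102.net": f"192.168.102.{10 + n}" for n in range(1, 6)})

for host, expected_ip in NODES.items():
    try:
        ip = socket.gethostbyname(host)        # forward (A record) lookup
        name = socket.gethostbyaddr(ip)[0]     # reverse (PTR record) lookup
        status = "OK" if (ip == expected_ip and name == host) else "MISMATCH"
    except OSError as err:
        ip, name, status = "-", "-", f"FAILED ({err})"
    print(f"{host:28s} {ip:16s} {name:28s} {status}")
```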