ECS with NGINX deployments
NGINX can be deployed in a single, highly available, or global fashion. In all deployments, the NGINX main configuration file, nginx.conf, contains directives specifying where to forward requests, how to perform health checks, and how to distribute requests across backend servers. The configuration file is organized in a modular way: blocks of definitions or directives are enclosed in a set of braces { }, and each such block is referred to as a "context". A configuration file can be simple, specifying a single context or multiple nested contexts, or it can embed Lua scripts for more complex processing.
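The nested context structure described above can be sketched as follows. This is an illustrative skeleton, not an ECS-tuned configuration; the backend address is a hypothetical placeholder, and port 9020 is assumed here as the ECS S3 HTTP listener.

```nginx
# Minimal sketch of nginx.conf structure.
# Each brace-delimited block is a context; contexts nest.
worker_processes auto;            # directive in the main context

events {
    worker_connections 1024;      # events context
}

http {                            # http context: handles HTTP/HTTPS traffic
    upstream backend {
        server 10.0.0.1:9020;     # hypothetical ECS node (9020 assumed as the S3 HTTP port)
    }
    server {                      # server context nested inside http
        listen 80;
        location / {              # location context nested inside server
            proxy_pass http://backend;
        }
    }
}
```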
The main context contains details on how HTTP requests and TCP requests are handled and forwarded to a pool of backend servers, in addition to any health checking and monitoring. If the http context is defined, the HTTP headers are analyzed and requests are forwarded based on the content of the request. If the stream context is defined, requests are forwarded directly to a pool of backend servers for handling. Within the http or stream context is a server context that defines the port NGINX listens on, a load-balancing algorithm to distribute requests among the resources, and health checks or other directives on how to process requests. As previously mentioned, NGINX provides round-robin, least-connections, and ip-hash balancing algorithms. Either domain name system (DNS) names or virtual IPs of the NGINX web servers are presented to clients.
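A hedged sketch of how a load-balancing algorithm and health-related directives might look in the http context. The node addresses and upstream name are hypothetical, and port 9020 is assumed as the ECS S3 HTTP port. Note that active health checks via the `health_check` directive are an NGINX Plus feature; open-source NGINX and OpenResty rely on passive handling such as `proxy_next_upstream`, shown here.

```nginx
http {
    upstream ecs_s3 {
        least_conn;                        # least-connections algorithm
        # ip_hash;                         # alternative: ip-hash (default is round-robin)
        server 10.0.0.1:9020;              # hypothetical ECS node addresses
        server 10.0.0.2:9020;
        server 10.0.0.3:9020;
    }
    server {
        listen 80;                         # port NGINX listens on
        location / {
            proxy_pass http://ecs_s3;
            # Passive failure handling: retry the next node on error or timeout
            proxy_next_upstream error timeout;
        }
    }
}
```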
The example images of NGINX with ECS deployments in this section highlight object and file protocol access. Objects are accessed using HTTP/HTTPS, and files are accessed using NFS over TCP. For NFS, it is recommended that a load balancer be used for high availability purposes only, not for balancing load across the ECS nodes. A later section of this white paper describes in more detail how to employ NGINX with ECS when using NFS.
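The high-availability-only recommendation for NFS can be expressed in a stream context by marking all but one node as backup, so NGINX forwards traffic to a single node and fails over rather than spreading load. This is a sketch under assumed values: the node addresses are hypothetical, and 2049 is the standard NFS port.

```nginx
stream {                            # stream context: raw TCP pass-through, no HTTP parsing
    upstream ecs_nfs {
        server 10.0.0.1:2049;               # primary ECS node (hypothetical address)
        server 10.0.0.2:2049 backup;        # backup: used only if the primary fails,
                                            # giving HA without load distribution
    }
    server {
        listen 2049;                        # standard NFS port
        proxy_pass ecs_nfs;
    }
}
```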