From an NGINX standpoint, you need to define upstreams so that NGINX can act as a proxy between the clients and your other containers.
The basic idea is to define upstream blocks, like:
upstream service_foo {
server 10.0.1.2:45678;
server 10.0.1.3:54321;
keepalive 8;
}
The important bits here are:
- service_foo is the name we'll use later in the config to identify this service. If you have multiple services, you have to define multiple upstreams.
- server SOME_IP:PORT; tells NGINX that this is one of the endpoints for this service. You can have several: here, 2 containers, with 2 IPs and ports, can handle requests for this service. (Look at the documentation for stickiness, round-robin, or other ways to distribute the load.)
- How to populate those IPs (or internal DNS names) really depends on your service discovery (if you use Consul, then Consul Template can help you, but there are a million ways to do it).
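If you want to tweak how the load is distributed, the balancing method and per-server parameters go directly in the upstream block. A sketch using standard directives (the IPs and values here are illustrative):

```nginx
upstream service_foo {
    # pick the backend with the fewest active connections
    # instead of the default round-robin
    least_conn;

    # weight skews the distribution; max_fails/fail_timeout
    # temporarily mark a backend unavailable after errors
    server 10.0.1.2:45678 weight=2 max_fails=3 fail_timeout=30s;
    server 10.0.1.3:54321;

    # keep idle connections to the backends open for reuse
    keepalive 8;
}
```

For session stickiness, ip_hash is the usual open-source route.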
Then you need to send requests coming to NGINX to those upstreams
In your server block, this would look like (usually with many more options to define headers, etc.):
server {
listen 443 ssl;
server_name {{ DOMAIN }};
# ... server options
location ~ ^/api/foo {
proxy_pass http://service_foo;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
default_type application/json;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_set_header Connection "";
}
#... rest of config
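For completeness, listen 443 ssl; implies certificate directives somewhere in the server block. A minimal hedged sketch of that part (the paths are placeholders):

```nginx
# TLS termination for the server block above (placeholder paths)
ssl_certificate     /etc/nginx/certs/your-domain.pem;
ssl_certificate_key /etc/nginx/certs/your-domain.key;

# restrict to modern protocol versions
ssl_protocols TLSv1.2 TLSv1.3;
```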
The important bits here are:
- The ssl_* options: TLS terminates at NGINX. Past it, I consider my VPC safe enough and let my services talk plain HTTP together (although I'd love to use gRPC instead, but that's another story). This choice is up to you: depending on how many services and how much trust/risk you handle, decide where TLS should stop.
- location ~ ^/api/foo matches requests under /api/foo (unless another location is more specific on that path). This is the 'public' route served by NGINX.
- The proxy_pass directive, which points at the service_foo upstream.

In the end, here's what happens with such a setup: when a request hits an /api/foo route, NGINX proxies it to the service_foo upstream; to resolve the service_foo reference, NGINX looks at the matching upstream block and picks a server.

The core problem usually revolves around service discovery and updates, to dynamically refresh the upstreams. If your services are not scaled out/created/destroyed intensively, this can stay simple. As you scale, a service discovery component really becomes necessary.
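If your discovery layer already exposes DNS names, one lightweight way to keep backends fresh without rewriting upstream blocks is to let NGINX re-resolve at request time: using a variable in proxy_pass forces a lookup through the configured resolver. A minimal sketch (the resolver IP, hostname, and port are made up for illustration):

```nginx
location ~ ^/api/foo {
    # re-resolve the name every 10s instead of only at startup
    resolver 10.0.0.2 valid=10s;

    # a variable in proxy_pass makes NGINX resolve at request time
    set $service_foo_backend "service-foo.internal";
    proxy_pass http://$service_foo_backend:45678;
}
```

Note that this bypasses the upstream block (and its keepalive/balancing settings), so it trades those features for fresher resolution; tools like Consul Template instead rewrite the upstream block and reload NGINX.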
Amongst the reliable and easy open-source solutions I really like, you can use:
Hope this helps