As full-stack developers, we often build applications composed of frontend UIs, backend APIs, databases, caches, queues, and workers. These components end up running as distributed services.
Docker has become ubiquitous for running such multi-service apps, and a core requirement is carefully configuring service networking and ports. The ability to selectively expose ports internally vs externally is key to balancing access requirements and security in Docker.
This is where the much-debated `expose` vs `ports` options in Docker Compose come in. Through real-world examples, Docker networking internals, and security implications, we will dive deeper into these concepts from an expert lens to help untangle container port exposure strategies.
The Dual Lives of Container Ports
Before understanding `expose` and `ports`, it's important to grasp how ports behave conceptually in containers compared to virtual machines or hosts.
Container ports actually live in two worlds:
1. The container's own networking namespace: This includes the internal loopback interface and any other container/host ports reachable over the Docker network.
2. The host machine's interfaces or the public internet: Routing incoming requests from these external sources into the container's port namespace requires additional port publishing.
This dual existence allows selectively exposing ports either privately or publicly. The concepts of `expose` and `ports` build upon these characteristics.
How `expose` Works: Container Namespace Only
The `expose` instruction makes ports accessible only to other containers inside a container network, such as one created by Docker Compose, a Kubernetes pod network, or the default bridge.
```yaml
services:
  db:
    image: mysql
    expose:
      - 3306
```
This MySQL container will be reachable on 3306 from other containers, assuming connectivity exists over the Docker-defined network.
But crucially, the host machine and the public internet cannot reach it. The port lives only inside the container's internal networking namespace.
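As a quick way to see both worlds at once, here is a minimal sketch that checks connectivity to the database from a sibling container on the same Compose network. The probe service, placeholder password, and sleep delay are illustrative assumptions, not part of the original example:

```yaml
# Minimal sketch, assuming the default Compose project network.
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    expose:
      - 3306
  probe:
    image: mysql                     # reuses the mysql client tooling
    depends_on:
      - db
    # Succeeds once MySQL is up: "db" resolves via Compose DNS over the shared network.
    command: sh -c "sleep 20 && mysqladmin ping -h db -uroot -pexample"
# From the Docker host itself, nothing listens on 3306, because no port has been published.
```

Running `docker compose up` should show the probe reporting the server is alive, while a client connecting to 3306 on the host itself finds nothing listening.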
Some key benefits this provides:
- Service Discovery: Containers can discover and communicate over `expose` ports using the container/service DNS names Docker sets up.
- Internal Isolation: Hides ports from unwanted outside traffic for a level of security.
However, `expose` by itself has limitations around port collisions, service addressing, and the sprawl of Docker networks, which I discuss later.
But first, understanding `ports` will help contrast the behavior.
How `ports` Works: Publishing Beyond the Container Namespace
The `ports` instruction binds container ports to host ports (or public interface IPs in hosted environments).
For example:
```yaml
services:
  web:
    image: nginx
    ports:
      - 8080:80
```
This makes the web server's container port 80 accessible externally on port 8080 of the Docker host's IP.
Crucially, published ports are available both from the host/outside world AND internally from other containers on the same network.
The port has gained additional exposure, instead of living only inside the container namespace.
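The short `HOST:CONTAINER` form also accepts a few useful variations. The values below are placeholders, shown only to illustrate the syntax:

```yaml
# Illustrative variants of the ports short syntax; values are placeholders.
services:
  web:
    image: nginx
    ports:
      - 8080:80             # host port 8080 -> container port 80, on all host interfaces
      - 127.0.0.1:8443:443  # bind only to the host loopback interface
      - 3000                # container port 3000 published on a random ephemeral host port
      - 5000:5000/udp       # protocol can be specified explicitly
```

The loopback-bound form becomes important later when we publish internal-only services.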
Some benefits of published ports:
- External Access: Allows traffic from clients/users outside Docker networks, like web browsers or mobile apps, to reach the containerized app.
- Service Discovery: Communicating containers can still discover published ports using container name DNS, just as with `expose`.
There are also security implications with opening ports, which I discuss later.
With both approaches explained, when should each be used?
Deciding Between `expose` and `ports`
The first guiding principle I follow in Docker networking is:
Containers should only directly expose ports necessary for external entrypoints to the whole app.
For example, a web server or API endpoint needs to expose ports for public usage. But databases and caches should remain internal.
Applying this principle helps minimize the container attack surface open to the outside world.
Here are some specific examples of `expose` vs `ports`:
Web/API Services: These frontend UI and backend API services require external access – `ports` should be used.
```yaml
ports:
  - 8080:3000
```
Databases: Only internal access is needed from app containers – `expose` makes sense.
```yaml
expose:
  - 3306
```
What about inter-service communication between internal containers?
Internal-Only Service Discovery: Limitations of `expose`
Using `expose` purely for internal container access may seem to solve the problem. But in practice, it can quickly introduce other networking complexities:
- Port Collisions: Different containers needing standard ports like 3306 or 5432 can cause overlaps when relying on `expose`. `ports` provides host port mapping to avoid this (see the sketch after this list).
- Container DNS Resolution: `expose` ports rely on container DNS names working, which may break in complex Docker environments.
- Network Sprawl: Larger apps can spawn many custom Docker networks, making inter-access difficult.
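To make the port collision point concrete, here is a sketch where two databases both listen on their standard port internally but are published on distinct loopback host ports. The service names, host ports, and placeholder credentials are illustrative:

```yaml
# Both Postgres instances use 5432 inside their own namespaces;
# mapping them to different host ports avoids any collision on the host.
services:
  orders_db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
    ports:
      - 127.0.0.1:5433:5432
  billing_db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - 127.0.0.1:5434:5432
```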
For these reasons, in most cases I still utilize `ports` for internal container communication, even when no external access is needed:
```yaml
ports:
  - 127.0.0.1:3306:3306
```
This publishes the port only on the host's loopback address (127.0.0.1). Other containers can still connect via the associated container name DNS, which is more robust across different networks.
Additionally, the port mapping provides certainty about exactly which interface address is being exposed, rather than relying on Docker-defined container DNS alone.
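For readers who prefer the long form, the same mapping can be expressed explicitly. This assumes a Compose version that implements the Compose Specification's long `ports` syntax with the `host_ip` field:

```yaml
services:
  db:
    image: mysql
    ports:
      - target: 3306        # port inside the container
        published: 3306     # port on the host
        host_ip: 127.0.0.1  # bind only to the host loopback interface
        protocol: tcp
```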
So in summary: rely on `expose` minimally, only for ports truly meant to be hidden. Use `ports` widely, including for inter-container communication, for stability.
Docker Networking Modes and Publishing Behavior
These default Docker networking behaviors apply across common network modes like bridge, host, or macvlan networks.
But it's worth specifically calling out the so-called service discovery networks, like Docker Compose networks or swarm mode overlay networks.
These networks provide built-in service discovery between containers using DNS, so `expose` may seem attractive.
However, I still recommend always relying on `ports` publishing for stability across use cases, rather than assuming container resolution will continually work:
```yaml
ports:
  - 127.0.0.1:3306:3306
```
This publishes the port explicitly on the local host while still benefiting from service discovery. Such certainty reduces the environment-specific cases you need to account for as you develop cloud-native apps.
Security Implications of Publishing Ports
Opening up ports via `ports` definitely exposes some attack surface that should be minimized. But the practice of hiding ports via `expose` does not, by itself, provide substantial security in my opinion.
For one, exposed ports are still open to attacks originating from containers running in the same app space, which could themselves be compromised.
Instead, applying system security best practices provides real protection (a Compose-level sketch follows the list):
- Run containers with only the required privileges rather than as root
- Limit Linux capabilities via cap-drop or PodSecurityPolicies
- Follow least-privilege IAM roles and service accounts
- Scan images for vulnerabilities regularly
- Utilize read-only filesystems where possible
- Secure secrets with tools like HashiCorp Vault
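Several of these practices map directly onto Compose options. A hedged hardening sketch, where the image name, UID, capability, and secret name are placeholders:

```yaml
services:
  web:
    image: app
    user: "1000:1000"        # run as a non-root user
    cap_drop:
      - ALL                  # drop all Linux capabilities ...
    cap_add:
      - NET_BIND_SERVICE     # ... and add back only what the service needs
    read_only: true          # read-only root filesystem
    tmpfs:
      - /tmp                 # writable scratch space where required
    secrets:
      - db_password          # inject secrets rather than baking them into the image
secrets:
  db_password:
    file: ./db_password.txt  # placeholder secret source
```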
What `expose` does provide is a clean way to indicate internally accessible ports in self-documenting Docker Compose files.
But beyond that, modern infrastructure security requires much more advanced boundary control and traversal tracking among services, hosts, and geographic zones.
Putting It Together: Hybrid Exposure Patterns
In this section, let's look at some examples of applying selective port exposure in real-world applications.
Here is a snippet from a compose file for a multi-tier web application:
```yaml
services:
  lb:
    image: nginx
    ports:
      - 80:80
      - 443:443
  web:
    image: app
    environment:
      PORT: 8080
      DB_HOST: db
    expose:
      - 8080
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: dbpass
    ports:
      - 127.0.0.1:3306:3306
```
Breaking this down:
- The load balancer exposes external ports 80 and 443 to the public
- The app container keeps port 8080 internal only, hidden from the outside
- The DB publishes port 3306 privately, bound to the host loopback address
With this per-service port segmentation, external exposure is minimized while still maintaining inter-container connectivity between the services over the docker compose network.
If published ports must be exposed publicly, consider attaching an ACL to allow only specific IP ranges rather than 0.0.0.0 wildcards.
For even greater security, utilize macvlan driver networks to attach containers directly to VLANs, and apply security groups on Amazon VPC subnets. This achieves proper network security segmentation that does not rely on container port exposure methods.
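Even without macvlan or cloud security groups, Compose itself supports a degree of segmentation by splitting services across networks. A sketch, with illustrative network names and a placeholder credential:

```yaml
# Only the load balancer joins the frontend network; the database is reachable
# solely from services attached to the backend network.
services:
  lb:
    image: nginx
    ports:
      - 80:80
    networks:
      - frontend
  web:
    image: app
    networks:
      - frontend
      - backend
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks:
      - backend
networks:
  frontend:
  backend:
```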
Wrapping Up: Treat Container Ports with Care
I hope this detailed analysis offers a number of best practices and real-world advice for leveraging `expose` and `ports` for container port exposure.
The guidelines I suggest following are:
- Minimize external ports published with `ports` to just essential entrypoints
- Rely on internal `ports` over `expose` for stability
- Combine with proper network security and firewalling
Finding the right balance allows optimizing for both access requirements and the threats that public container ports invite. Treat container ports carefully, with proper segmentation, so that neither development pace nor security ends up compromised.
Implementing these Docker and container networking best practices from an expert lens arms you to build resilient, cloud-native applications.