04 Jan 2026 - tsp
Last update 04 Jan 2026
13 mins
Modern backend architectures often consist of many small services: ASGI applications, REST APIs, internal dashboards, background workers, metrics endpoints, and administration interfaces. In practice, these services are almost always deployed behind a reverse proxy such as nginx or Apache httpd, which handles TLS termination, routing, authentication, and sometimes rate limiting.
Despite this, it is still very common to expose each backend service via a TCP socket, even when all components run on the same machine. Typical examples include binding services to 127.0.0.1:8000 or similar high ports. Those high ports are usually chosen because they have traditionally been available to unprivileged user accounts, in contrast to port numbers below 1024, which traditionally can only be bound by the root user. This is the reason many applications even today launch as root, bind the port, drop their privileges and impersonate a less privileged user - even though this is perfectly controllable via mechanisms like FreeBSD’s mac_portacl, which is rarely seen in the wild due to the additional configuration overhead. Of course this approach has flaws of its own, for example that any user can bind such a high port number in case it is not already occupied.
This article argues that, for purely local backend services, TCP is not the best choice. Unix domain sockets (UDS) - or their MS Windows equivalent, named pipes - provide stronger security guarantees, simpler access control, better isolation, better performance and cleaner system design, with virtually no downsides in this deployment scenario.

Before discussing socket choices, it is worth briefly revisiting why reverse proxies are used in the first place.
A reverse proxy typically provides:
- TLS termination and certificate management
- routing of requests to the individual backend services
- authentication and access control
- rate limiting and basic request sanitization
This design deliberately concentrates all externally visible complexity into a single, hardened component. Backend services are then free to focus on application logic instead of security policy, certificate management, or network hardening. They only have to implement basic HTTP, they can be assured that the incoming HTTP requests forwarded by the reverse proxy are sane and do not exploit buffer overruns in their HTTP handling, they do not have to implement SSL/TLS logic, which is hard to get right, and they do not have to handle private keys, which reduces the attack surface.
A key architectural implication follows naturally:
Backend services are not network-facing services.
If a backend service:
- runs on the same machine as the reverse proxy,
- is only ever accessed through that reverse proxy or by other local processes, and
- never needs to be reached directly from the network,
then there is no technical reason to expose it via TCP. Unix domain sockets exist precisely to solve local inter-process communication problems. Using them here is not merely an optimization; it is simply using the correct tool for the job.
A Unix domain socket is addressed via the filesystem rather than via an IP address and port, for example:
/var/run/myapp/api.sock
Instead of:
127.0.0.1:8000
From the kernel’s perspective, this is pure local IPC, not networking; from a programmer’s perspective the API still looks as simple as any network communication API, so there is no need to change major parts of your application.
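To illustrate how small the difference is for application code, here is a minimal sketch using Python’s standard socket module; the port and paths are just the examples from above, and the socket directory must of course exist and be writable:

import socket

# TCP variant: bound to the loopback interface on a high port
tcp_server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_server.bind(("127.0.0.1", 8000))
tcp_server.listen()

# Unix domain socket variant: same API, a filesystem path instead of host and port
uds_server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
uds_server.bind("/var/run/myapp/api.sock")
uds_server.listen()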
Even when bound to 127.0.0.1, TCP sockets:
- still pass through the full network stack,
- occupy a port in a global, shared namespace, and
- are reachable by every local process and user.

They remain subject to:
- accidental binding to 0.0.0.0 due to minor configuration errors or insane default configurations, thus exposing services to the outside world
- traffic from containers and other local workloads that can reach the 127.0.0.0/8 subnet and thus hit the default rule that allows all loopback traffic on most systems. This allows containers to access each other and the backend services without restrictions.

Access control is indirect and error-prone, relying on:
- firewall rules
- bind addresses
- application-level authentication
The crucial point is:
A TCP service can accidentally become reachable.
This is not hypothetical; it is a frequent source of production incidents and often goes unnoticed until such an incident actually happens.
With Unix domain sockets, access control becomes trivial and robust:
srw-rw---- www-data myapp /var/run/myapp/api.sock
Only processes running as the owning user or group can connect. There is:
- no firewall rule to write and maintain,
- no bind address to double-check, and
- no port that could accidentally become reachable from elsewhere.
If a process cannot open the socket file, it cannot communicate with the service. This aligns perfectly with the Unix security model, dramatically reduces the attack surface and enables one to use the well-known concept of Unix users and groups for access control, which is familiar even to less experienced system administrators.
Security gets simpler and more intuitive.
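A service can also set this up itself at startup. The following is a minimal sketch; the user and group names www-data and myapp are taken from the examples in this article, and the process needs sufficient privileges to chown the socket file:

import grp
import os
import pwd
import socket

SOCKET_PATH = "/var/run/myapp/api.sock"

# Remove a stale socket file left over from a previous run, if any
if os.path.exists(SOCKET_PATH):
    os.unlink(SOCKET_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCKET_PATH)

# Restrict access to the owning user and group - the equivalent of the
# chown/chmod commands shown later in this article. Strict setups would
# additionally set a restrictive umask before binding.
os.chown(SOCKET_PATH, pwd.getpwnam("www-data").pw_uid, grp.getgrnam("myapp").gr_gid)
os.chmod(SOCKET_PATH, 0o660)

server.listen()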
Unix domain sockets cannot be:
- bound to a public network interface by accident,
- reached from another host, or
- exposed by a firewall misconfiguration.
There is simply no mechanism by which they can become externally reachable. This removes an entire class of configuration errors and security incidents.
Even purely internal TCP services consume shared networking resources:
- ports from the shared port namespace,
- entries in socket tables and, where enabled, connection tracking, and
- kernel buffers and network stack processing.

Under load or an attack against the reverse proxy, these resources may be stressed, affecting all socket-based applications. Unix domain sockets avoid all of this: they occupy no ports, do not appear in connection tracking tables and never compete with external network traffic.
A reverse proxy can be attacked; the backend does not have to be.
TCP has a limited and globally shared namespace of ports. Unix domain sockets do not. They can be cleanly namespaced using directories:
/var/run/myapp/
├── api.sock
├── metrics.sock
├── admin.sock
└── worker.sock
This avoids port coordination, collisions, and unnecessary configuration complexity.
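How the runtime directory is created is a deployment detail. A minimal sketch in Python is shown below; in practice this is often delegated to the init system instead, for example via systemd’s RuntimeDirectory= directive:

import os

RUNTIME_DIR = "/var/run/myapp"

# Create the per-application runtime directory with restrictive permissions;
# each internal endpoint then gets its own socket file inside it.
os.makedirs(RUNTIME_DIR, mode=0o750, exist_ok=True)

API_SOCKET = os.path.join(RUNTIME_DIR, "api.sock")
METRICS_SOCKET = os.path.join(RUNTIME_DIR, "metrics.sock")
ADMIN_SOCKET = os.path.join(RUNTIME_DIR, "admin.sock")
WORKER_SOCKET = os.path.join(RUNTIME_DIR, "worker.sock")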
One major operational benefit of reverse proxies is centralized TLS handling. When backends are exposed via TCP, there is often a temptation to:
- terminate TLS in each backend service as well,
- distribute internal certificates and private keys to every component, and
- manage their renewal and rotation in many places.
On one hand this makes sense since it mitigates attacks on the network level: if attackers gain access to your local network, they cannot eavesdrop on traffic inside your own network. This may be a requirement depending on your scenario. But it also increases complexity and the number of failure modes when it comes to distributing keys, keeping certificates and private keys safe and managing them. There have been incidents where companies used a single wildcard certificate with a single private key for all of their internal and external services; a single flaw in one backend service was enough to leak the key and led to major security incidents.
With Unix domain sockets:
- TLS lives exclusively at the reverse proxy,
- backends never see certificates or private keys, and
- there is no internal key distribution problem in the first place.
The backend becomes simpler, smaller, and less error-prone.
Uvicorn supports Unix domain sockets natively. For example:
uvicorn myapp:app \
--uds /var/run/myapp/api.sock \
--workers 4
Permissions can then be set explicitly:
chown www-data:myapp /var/run/myapp/api.sock
chmod 660 /var/run/myapp/api.sock
The reverse proxy connects to the socket file directly, and the service is never exposed via TCP. To make the application accessible one can use an Apache reverse proxy with mod_proxy, for example:
<VirtualHost *:443>
    ServerName example.com

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/example.pem
    SSLCertificateKeyFile /etc/ssl/private/example.key

    ProxyPreserveHost On
    ProxyPass "/" "unix:/var/run/myapp/api.sock|http://localhost/"
    ProxyPassReverse "/" "unix:/var/run/myapp/api.sock|http://localhost/"
</VirtualHost>
In this case the unix: prefix specifies the socket path. The appended |http://localhost/ part defines the protocol semantics, not a real network connection or endpoint; it is used to select the protocol handler and to generate the headers for the proxied HTTP request.
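For local testing or debugging one can also talk to the backend directly over the socket, bypassing the reverse proxy. A minimal sketch using only Python’s standard library follows; the /health path is merely a placeholder for whatever endpoint the application actually provides:

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # An HTTPConnection that connects to a Unix domain socket instead of TCP.
    # The "localhost" host name is only used for the Host: header, mirroring
    # the |http://localhost/ part of the Apache configuration above.
    def __init__(self, socket_path):
        super().__init__("localhost")
        self._socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._socket_path)

conn = UnixHTTPConnection("/var/run/myapp/api.sock")
conn.request("GET", "/health")
response = conn.getresponse()
print(response.status, response.read().decode())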
Though I am a huge supporter of the microservices dogma, Unix domain sockets are often associated with microservices discussions, which can give the false impression that this approach is only relevant in large, distributed systems. This is not the case.
This pattern is equally valuable for:
- classic monoliths with their administration interfaces and metrics endpoints,
- internal dashboards and background workers, and
- internal JSON APIs.

In fact, monoliths often benefit more from Unix domain sockets, because they tend to accumulate many internal endpoints over time without being distributed to multiple machines, as microservices would be. Using TCP for each of them leads to:
- port coordination and collisions,
- growing firewall and configuration complexity, and
- a higher risk of accidentally exposing an internal endpoint.
Using Unix domain sockets instead makes the internal structure explicit: these components are part of the same system, not independent network services.
The distinction is architectural, not ideological.
Unix domain sockets only work when services run on the same machine. Therefore, well-designed services should support:
- binding to a Unix domain socket for local, same-host deployments,
- binding to a TCP address when components are spread across machines, and
- selecting between the two purely via configuration.
This is not a limitation but a design requirement for flexible services.
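As a minimal sketch of what such configurability can look like, the following assumes two hypothetical environment variables, MYAPP_UDS and MYAPP_TCP, to select the transport; real services would of course use their own configuration mechanism:

import asyncio
import os

async def handle(reader, writer):
    # Trivial placeholder handler: read one line, answer with a fixed response
    await reader.readline()
    writer.write(b"HTTP/1.0 200 OK\r\n\r\nok\n")
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    uds_path = os.environ.get("MYAPP_UDS")
    if uds_path:
        # Local deployment: listen on a Unix domain socket
        server = await asyncio.start_unix_server(handle, path=uds_path)
    else:
        # Distributed deployment: fall back to TCP
        host, port = os.environ.get("MYAPP_TCP", "127.0.0.1:8000").split(":")
        server = await asyncio.start_server(handle, host, int(port))
    async with server:
        await server.serve_forever()

asyncio.run(main())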
For a long time JavaEE has been synonymous with professional business applications, and highly specialized containers like Apache Tomcat still provide rock solid and stable Java Servlet containers for those applications. Unfortunately, up to this day they do not support Unix domain sockets - neither for web applications nor for their administrative services. It would indeed have been highly desirable if classic Java application servers had first-class support for Unix domain sockets. However, their absence is not the result of a single oversight or technical inability. It is the outcome of several historical and architectural forces that shaped the Java ecosystem.
First, historical timing matters. Java’s core networking APIs were designed in the mid 1990s, at a time when Unix domain sockets were simply not standardized, were poorly documented across Unix variants, and were entirely absent on Windows. Java’s founding promise and main advantage - “write once, run anywhere” - of course strongly discouraged reliance on operating system specific IPC mechanisms. As a result, the JVM deliberately abstracted networking (Socket, ServerSocket), but not local IPC.
Second, Java EE assumed a networked deployment model by default. Early Java application servers were designed for an enterprise world of physically separate machines, hardware load balancers, and clustered deployments. In that context, exposing services via TCP was not considered a risk but the expected operational norm. The modern pattern of a reverse proxy and multiple backend services co‑located on the same host only became dominant much later.
Third, portability was consistently prioritized over operating system (OS) specific correctness. Supporting Unix domain sockets would have meant introducing divergent behavior between Unix‑like systems and Windows, breaking the illusion of a uniform runtime. While modern runtimes such as Python and Go eventually embraced OS specific optimizations, enterprise Java largely maintained its strict portability stance up until today.
Finally, by the time the need became obvious, JavaEE was already ossified. When reverse proxies, containerization, and same‑host service graphs became commonplace, the JavaEE ecosystem was mature, conservative, and slow to evolve (which is actually one of its strengths). The problem was instead solved externally - through nginx as a reverse proxy for network sockets, firewalls, and network policies - rather than by extending the application servers themselves.
Importantly, this is not a failure of Java as a language nor a failure of the servlet containers. It is a consequence of its early success, its long‑standing design goals, its rock solid stability and portability, and the operational assumptions of the era in which JavaEE was created.
If a backend service:
- runs on the same host as its reverse proxy,
- is only consumed by that proxy or by other local processes, and
- never needs to be reached directly from the network,

then exposing it via TCP is unnecessary exposure. Using Unix domain sockets:
- removes the service from the network attack surface entirely,
- turns access control into simple, well-understood file permissions,
- avoids port management and shared networking resources, and
- keeps TLS and its keys where they belong: at the reverse proxy.
Dipl.-Ing. Thomas Spielauer, Wien (webcomplains389t48957@tspi.at)
This webpage is also available via TOR at http://rh6v563nt2dnxd5h2vhhqkudmyvjaevgiv77c62xflas52d5omtkxuid.onion/