
According to this article, nginx has a master process and a number of worker processes. I'm trying to understand how a request is handled by the nginx worker processes. nginx uses an event-driven architecture with multiple listen sockets and connection sockets.

Typically with an HTTP web server you'd have a single process listening on port 80. For a new connection, the data for all requests would then go to port 80 via a socket, e.g. (client-ip, client-port, server-ip, 80), where 80 is the server port. As I understand it, only a single process can listen on a given port, so how exactly do these requests get forwarded to all the other ports nginx uses? Does the master process copy all the request and response data back and forth between port 80 and the local ports?

Thanks.

Eric Conner
  • You can copy data using interprocess communication, shared memory, file system, etc. You can have your child processes use a different port while the master process uses the advertised port. I'm not an NGINX developer though. – Berin Loritsch Sep 13 '17 at 20:17
  • Take a look [here](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/) – Laiv Sep 13 '17 at 20:24
  • @Laiv, thanks that's the article I already referenced above. – Eric Conner Sep 14 '17 at 19:53
  • @Berin, thanks. I guess the data is somehow communicated from the master process to the worker processes, then. It seems to happen via sockets, but I'm still unclear exactly how. – Eric Conner Sep 14 '17 at 19:54

1 Answer


Each NGINX worker process is initialized with the NGINX configuration and is provided with a set of listen sockets by the master process.

The NGINX worker processes begin by waiting for events on the listen sockets - Inside Nginx Architecture

worker processes accept new requests from a shared "listen" socket - The architecture of open source projects

Basically, only the master binds to port 80 (or whatever ports are configured). The worker processes are forked from the master, so they inherit the open listen socket file descriptors. Each worker then spins in an event loop, accepting new connections from the shared listen sockets and handling them. (File descriptors can also be passed between unrelated processes across a Unix domain socket, but with a fork-based design like NGINX's, plain inheritance is enough.)
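
This fork-then-accept pattern can be sketched in a few lines of Python. This is a toy illustration of the general pre-fork technique, not NGINX's actual code; the function name `start_prefork` and the response message are made up for the demo:

```python
import os
import socket

def start_prefork(n_workers=2):
    """Master binds and listens; workers inherit the listen fd via
    fork() and each blocks in accept() on the *same* socket."""
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    lsock.listen(16)
    port = lsock.getsockname()[1]
    pids = []
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:                       # worker process
            while True:
                # The kernel hands exactly one worker each new connection;
                # accept() returns a fresh connection socket for it.
                conn, _ = lsock.accept()
                conn.sendall(("pid %d" % os.getpid()).encode())
                conn.close()
        pids.append(pid)
    return port, pids                      # master keeps the fd, never accepts
```

Note that the workers never receive request bytes from the master; each one reads directly from the connection socket that its own accept() call returned.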

AFAIK the master isn't "copying" requests for the workers; it never touches request data at all. The worker's accept() call returns a new connection socket (same server port, new client tuple), and the worker reads the HTTP request from that socket directly.

Samuel
  • Thanks Samuel. So the way I've always thought about sockets is as a 4-tuple (client-ip, client-port, server-ip, server-port) corresponding to a file descriptor on the client & server. I understand your point that the master process can share unix domain sockets with worker processes, but it seems to me that data would come in at the file descriptor for port 80 and then would somehow need to get transferred to the workers' unix domain sockets -- which are separate file descriptors. So I guess the data gets copied from the file descriptor for port 80 to the others? Is this correct? – Eric Conner Sep 14 '17 at 19:57
  • TBH I'm not sure – Samuel Sep 14 '17 at 20:05
  • each client connection is handed a new, different socket by the accept() call. – James Nov 12 '17 at 22:15
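
That last point resolves the 4-tuple question above, and it is easy to verify. The sketch below (the helper name `demo_accept` is invented for the demo) shows that each accept() call yields a distinct connection socket whose local port is still the listen port, so no data ever needs to be copied to a different port:

```python
import socket

def demo_accept():
    """Show that accept() yields a new connection socket per client,
    all sharing the listen socket's local (server) port."""
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
    lsock.listen(4)
    port = lsock.getsockname()[1]

    c1 = socket.create_connection(("127.0.0.1", port))
    s1, _ = lsock.accept()          # new socket for client 1
    c2 = socket.create_connection(("127.0.0.1", port))
    s2, _ = lsock.accept()          # another new socket for client 2

    # Both connection sockets share the listen port; the 4-tuples differ
    # only on the client side, which is how the kernel tells them apart.
    same_port = s1.getsockname()[1] == port == s2.getsockname()[1]
    distinct_peers = s1.getpeername() != s2.getpeername()
    for s in (c1, c2, s1, s2, lsock):
        s.close()
    return same_port, distinct_peers
```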