A worker is the unit that handles events in nginx. Configuring an appropriate number of workers can significantly improve the performance of the application.
To check how many workers nginx is currently running, use the command:
sudo systemctl status nginx
Result
# ● nginx.service - A high performance web server and a reverse proxy server
# Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
# Active: active (running) since Sun 2021-04-25 08:33:11 UTC; 5h 45min ago
# Docs: man:nginx(8)
# Main PID: 3904 (nginx)
# Tasks: 2 (limit: 1136)
# Memory: 3.2M
# CGroup: /system.slice/nginx.service
# ├─ 3904 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
# └─16443 nginx: worker process
Here we can see that nginx is running 1 worker process.
The number of workers can be changed with the following config:
worker_processes 2;

events {
}

http {
    server {
        listen 80;
        server_name nginx-handbook.test;
        return 200 "worker processes and worker connections configuration!\n";
    }
}
Here we are using the worker_processes directive, which sets the number of worker processes nginx runs. After editing the config, reload nginx (sudo nginx -s reload) so the change takes effect, then check the number of workers again with the following command:
sudo systemctl status nginx
Result
# ● nginx.service - A high performance web server and a reverse proxy server
# Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
# Active: active (running) since Sun 2021-04-25 08:33:11 UTC; 5h 54min ago
# Docs: man:nginx(8)
# Process: 22610 ExecReload=/usr/sbin/nginx -g daemon on; master_process on; -s reload (code=exited, status=0/SUCCESS)
# Main PID: 3904 (nginx)
# Tasks: 3 (limit: 1136)
# Memory: 3.7M
# CGroup: /system.slice/nginx.service
# ├─ 3904 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
# ├─22611 nginx: worker process
# └─22612 nginx: worker process
Now we can see that there are 2 worker processes running.
The setup is easy, but how many workers should nginx be given?
Worker processes compete for CPU time: on a single-core machine, 1 worker can use the full CPU, while 2 workers would each get roughly 50% of it. So simply increasing the number of workers does not increase server performance.
The simplest rule of thumb is to match the number of workers to the number of CPU cores: 2 workers on a dual-core server, 4 workers on a quad-core server, and so on.
To check the number of cores on a Linux server, use the following command:
nproc
Result
# 1
Here we see that the server has 1 core, which corresponds to 1 worker. A problem arises if we later upgrade the server to 2 cores: we would then have to remember to update the config, and it is easy to forget.
To handle this case, use the following config:
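As a quick cross-check, on Linux the core count reported by nproc can also be read from /proc/cpuinfo. A minimal sketch, assuming a Linux system:

```shell
# Number of processing units available to this process
nproc
# Number of logical processors the kernel reports
grep -c ^processor /proc/cpuinfo
```

Note that nproc respects CPU affinity masks, so in a container or a pinned process it can report fewer cores than /proc/cpuinfo lists.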
worker_processes auto;

events {
}

http {
    server {
        listen 80;
        server_name nglearns.test;
        return 200 "worker processes and worker connections configuration!\n";
    }
}
By setting worker_processes to auto, we let nginx detect the number of cores and spawn a matching number of workers.
Next, we need to decide how many connections each worker may handle. A worker cannot hold more connections than the number of files its process is allowed to open, so check the open-file limit with the following command:
ulimit -n
Result
# 1024
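For reference, ulimit -n prints the soft limit for the current shell, while the -Hn flag prints the hard limit, which is the ceiling the soft limit can be raised to. A quick sketch:

```shell
# Soft limit: current per-process cap on open file descriptors
ulimit -n
# Hard limit: the maximum the soft limit can be raised to without root
ulimit -Hn
```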
Now we can set worker_connections to match, as in the following example:
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name nglearns.test;
        return 200 "worker processes and worker connections configuration!\n";
    }
}
Using the worker_connections directive inside the events context, we set the maximum number of simultaneous connections each worker process can open.
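Taken together, the two directives bound the server's capacity: the theoretical maximum number of simultaneous connections is roughly worker_processes multiplied by worker_connections. A sketch of the arithmetic, assuming a hypothetical 2-core server with the limit of 1024 found above:

```shell
# Hypothetical values: 2 workers (one per core), 1024 connections each
workers=2
worker_connections=1024
echo $((workers * worker_connections))  # prints 2048
```

In practice the real ceiling is lower, since each proxied request can consume more than one connection.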