Friday, November 11, 2016

Tuning NGINX Configuration

The following are some NGINX directives that can impact performance. As stated above, we only discuss directives that are safe for you to adjust on your own. We recommend that you not change the settings of other directives without direction from the NGINX team.

Worker Processes

NGINX can run multiple worker processes, each capable of processing a large number of simultaneous connections. You can control the number of worker processes and how they handle connections with the following directives:

  • worker_processes – The number of NGINX worker processes (the default is 1). In most cases, running one worker process per CPU core works well, and we recommend setting this directive to auto to achieve that. There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O.
  • worker_connections – The maximum number of connections that each worker process can handle simultaneously. The default is 512, but most systems have enough resources to support a larger number. The appropriate setting depends on the size of the server and the nature of the traffic, and can be discovered through testing; see the sketch after this list.
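A minimal sketch of how these two directives fit together (the connection count is illustrative, not a recommendation):
worker_processes auto;        # one worker per CPU core

events {
    worker_connections 1024;  # starting point; validate through testing
}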

Keepalive Connections

Keepalive connections can have a major impact on performance by reducing the CPU and network overhead needed to open and close connections. NGINX terminates all client connections and creates separate and independent connections to the upstream servers. NGINX supports keepalives for both clients and upstream servers. The following directives relate to client keepalives:
  • keepalive_requests – The number of requests a client can make over a single keepalive connection. The default is 100, but a much higher value can be especially useful for testing with a load-generation tool, which generally sends a large number of requests from a single client.
  • keepalive_timeout – How long an idle keepalive connection remains open.
The following directive relates to upstream keepalives:
  • keepalive – The number of idle keepalive connections to an upstream server that remain open for each worker process. There is no default value.
To enable keepalive connections to upstream servers you must also include the following directives in the configuration:
proxy_http_version 1.1;
proxy_set_header Connection "";
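Putting this together, a minimal upstream keepalive configuration might look like the following sketch (the upstream name and server address are placeholders):
upstream backend {
    server 192.0.2.10:8080;   # placeholder backend address
    keepalive 32;             # idle keepalive connections kept open per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}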

Access Logging

Logging every request consumes both CPU and I/O cycles, and one way to reduce the impact is to enable access-log buffering. With buffering, instead of performing a separate write operation for each log entry, NGINX buffers a series of entries and writes them to the file together in a single operation.
To enable access-log buffering, include the buffer=size parameter in the access_log directive; NGINX writes the buffer contents to the log when the buffer reaches the size value. To have NGINX write the buffer after a specified amount of time, include the flush=time parameter. When both parameters are set, NGINX writes entries to the log file when the next log entry will not fit into the buffer or when the entries in the buffer are older than the specified time. Log entries are also written when a worker process is reopening its log files or shutting down. To disable access logging completely, include the off parameter in the access_log directive.
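For example, a buffered access log with both a size and a time limit might look like this (the log path and format name are illustrative):
access_log /var/log/nginx/access.log combined buffer=32k flush=1m;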

Sendfile

The operating system’s sendfile() system call copies data from one file descriptor to another, often achieving zero-copy, which can speed up TCP data transfers. To enable NGINX to use it, include the sendfile directive in the http context or a server or location context. NGINX can then write cached or on-disk content down a socket without any context switching to user space, making the write extremely fast and consuming fewer CPU cycles. Note, however, that because data copied with sendfile() bypasses user space, it is not subject to the regular NGINX processing chain and filters that change content, such as gzip. When a configuration context includes both the sendfile directive and directives that activate a content-changing filter, NGINX automatically disables sendfile for that context.
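A minimal sketch, enabling sendfile for a hypothetical static-content location:
location /static/ {
    sendfile   on;    # zero-copy transfer of on-disk files
    tcp_nopush on;    # send response headers and file data in full packets
}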

Limits

You can set various limits that help prevent clients from consuming too many resources, which can adversely affect the performance of your system as well as user experience and security. The following are some of the relevant directives (a combined sketch follows the list):
  • limit_conn and limit_conn_zone – Limit the number of client connections NGINX accepts, for example from a single IP address. Setting them can help prevent individual clients from opening too many connections and consuming more than their share of resources.
  • limit_rate – Limits the rate at which responses are transmitted to a client, per connection (so clients that open multiple connections can consume this amount of bandwidth for each connection). Setting a limit can prevent the system from being overloaded by certain clients, ensuring more even quality of service for all clients.
  • limit_req and limit_req_zone – Limit the rate of requests being processed by NGINX, which has the same benefits as setting limit_rate. They can also improve security, especially for login pages, by limiting the request rate to a value reasonable for human users but too slow for programs trying to overwhelm your application with requests (such as bots in a DDoS attack).
  • max_conns parameter to the server directive in an upstream configuration block – Sets the maximum number of simultaneous connections accepted by a server in an upstream group. Imposing a limit can help prevent the upstream servers from being overloaded. Setting the value to 0 (zero, the default) means there is no limit.
  • queue (NGINX Plus) – Creates a queue in which requests are placed when all the available servers in the upstream group have reached their max_conns limit. This directive sets the maximum number of requests in the queue and, optionally, the maximum time they wait (60 seconds by default) before an error is returned. Requests are not queued if you omit this directive.
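A minimal sketch combining several of these directives in the http context (zone names, sizes, rates, and paths are all illustrative):
# Shared memory zones keyed on the client IP address
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_req_zone  $binary_remote_addr zone=login:10m rate=2r/s;

server {
    limit_conn perip 10;              # at most 10 connections per client IP

    location /login/ {
        limit_req zone=login burst=5; # throttle login attempts
    }

    location /downloads/ {
        limit_rate 50k;               # cap each connection at 50 KB/s
    }
}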

Caching and Compression Can Improve Performance

Some additional features of NGINX that can be used to increase the performance of a web application don’t really fall under the heading of tuning, but are worth mentioning because their impact can be considerable. They include caching and compression.

Caching

By enabling caching on an NGINX instance that is load balancing a set of web or application servers, you can dramatically improve the response time to clients while at the same time dramatically reducing the load on the backend servers. Caching is a topic in its own right and we won’t try to cover it here. See NGINX Content Caching in the NGINX Plus Admin Guide.

Compression

Compressing responses sent to clients can greatly reduce their size, so they use less network bandwidth. Because compressing data consumes CPU resources, however, it is most useful when reducing bandwidth usage is worth the CPU cost. It is important to note that you should not enable compression for objects that are already compressed, such as JPEG files. For more information, see Compression and Decompression in the NGINX Plus Admin Guide.


Worker Modifications

The easiest thing to set in your configuration is the right number of workers and connections.

Worker Processes

In /etc/nginx/nginx.conf, set worker_processes 1; if you have a lower traffic site where Nginx, a database, and a web application all run on the same server.
If you have a higher traffic site or a dedicated instance for Nginx, set one worker per CPU core: worker_processes auto;
If you’d like to set this manually, you can utilize grep ^processor /proc/cpuinfo | wc -l to find the number of processor cores available to the server.

Worker Connections

The option worker_connections sets the maximum number of connections that can be processed at one time by each worker process. By default, the worker connection limit is 512, but many systems can handle more.
The appropriate sizing can be discovered through testing, as it varies based on the type of traffic Nginx is handling. The system's file descriptor limit, which caps the number of connections, can also be found by using ulimit:
ulimit -n
It will output a number as a result:
65536
You can also set use epoll, a scalable I/O event notification mechanism, to trigger on events and make sure that I/O is utilized to the best of its ability.
Lastly, you can utilize multi_accept in order for a worker to accept all new connections at one time.
The events block should look something like this when configured:
/etc/nginx/nginx.conf
events {
    worker_connections 65536;
    use epoll;
    multi_accept on;
}

HTTP and TCP Optimizations

Keep Alive

Keepalive connections allow for fewer reconnections from the browser.
  • keepalive_timeout and keepalive_requests control the keep alive settings.
  • sendfile optimizes serving static files from the file system, like logos.
  • tcp_nodelay disables Nagle's algorithm so that Nginx sends small packets immediately rather than buffering them until a full packet accumulates.
  • tcp_nopush optimizes the amount of data sent down the wire at once by activating the TCP_CORK option within the TCP stack. TCP_CORK blocks the data until the packet reaches the MSS, which is equal to the MTU minus the 40 or 60 bytes of IP and TCP headers.
/etc/nginx/nginx.conf
keepalive_timeout 65;
keepalive_requests 100000;
sendfile on;
tcp_nopush on;
tcp_nodelay on;

Buffer Size

Making tweaks to the buffer sizes can be advantageous. If the buffer sizes are too low, then Nginx will write to a temporary file, causing excessive disk I/O.
  • client_body_buffer_size sets the buffer size for the client request body. Most large request bodies come from POST form submissions. 128k is normally a good choice for this setting.
  • client_max_body_size sets the maximum allowed size of the client request body. If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client. For reference, browsers cannot correctly display 413 errors. Setting size to 0 disables checking of client request body size.
  • client_header_buffer_size sets the buffer size for the client request header. 1k is usually a sane choice for this by default.
  • large_client_header_buffers sets the maximum number and size of buffers for large client headers. 4 buffers of 4k should be sufficient here.
  • output_buffers sets the number and size of the buffers used for reading a response from a disk.
  • postpone_output postpones the transmission of data to the client until Nginx has at least the set number of bytes to send; the zero value disables postponing data transmission.
/etc/nginx/nginx.conf
client_body_buffer_size      128k;
client_max_body_size         10m;
client_header_buffer_size    1k;
large_client_header_buffers  4 4k;
output_buffers               1 32k;
postpone_output              1460;

Connection Queue

Some directives in the /etc/sysctl.conf file can be changed to size the Linux queues for connections and TIME_WAIT buckets. net.core.somaxconn sets the size of the queue for connections waiting for acceptance by Nginx, and net.ipv4.tcp_max_tw_buckets caps the number of sockets held in the TIME_WAIT state. If there are error messages in the kernel log, increase the values until the errors stop.
/etc/sysctl.conf
net.core.somaxconn = 65536
net.ipv4.tcp_max_tw_buckets = 1440000
Packets can be buffered in the network card before being handed to the CPU by setting the maximum backlog with the net.core.netdev_max_backlog setting. Consult the network card documentation for advice on changing this value.
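An illustrative entry (this value is only an assumption for the sketch; the right number depends on your hardware and traffic):
/etc/sysctl.conf
net.core.netdev_max_backlog = 65535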

Timeouts

Timeouts can also drastically improve performance.
  • client_body_timeout sets the time the server will wait for a request body to be sent.
  • client_header_timeout sets the time the server will wait for a request header to be sent. These directives are responsible for the time a server will wait for a client body or client header to be sent after a request. If the body or header is not sent in time, the server issues a 408 (Request Timeout) error.
  • send_timeout specifies the response timeout to the client. This timeout does not apply to the entire transfer but, rather, only to the interval between two successive write operations to the client. If the client does not receive anything within this time, Nginx shuts down the connection.
/etc/nginx/nginx.conf
client_header_timeout  3m;
client_body_timeout    3m;
send_timeout           3m;

Static Asset Serving

If your site serves static assets (such as CSS/JavaScript/images), Nginx can cache the open file handles for a short period of time. Adding this within your configuration block tells Nginx to cache up to 1000 file entries for 30 seconds, evicting any files that haven't been accessed within 20 seconds, and caching only files that have been accessed 5 times or more. If you aren't deploying frequently, you can safely bump these numbers up higher.
/etc/nginx/nginx.conf
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 5;
open_file_cache_errors off;
You can also set client-side caching headers for a particular location. Caching files for a long time is beneficial, especially if the files are versioned by the build process or CMS.
/etc/nginx/nginx.conf
location ~* \.(woff|eot|ttf|svg|mp4|webm|jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}

Gzipping Content

For content that is plain text, Nginx can use gzip compression to serve back these assets compressed to the client. Modern web browsers will accept gzip compression and this will shave bytes off of each request that comes in for plain text assets. The list below is a “safe” list of compressible content types; however, you only want to enable the content types that you are utilizing within your web application.
/etc/nginx/nginx.conf
gzip on;
gzip_min_length 1000;
gzip_types text/html application/x-javascript text/css application/javascript text/javascript text/plain text/xml application/json application/vnd.ms-fontobject application/x-font-opentype application/x-font-truetype application/x-font-ttf application/xml font/eot font/opentype font/otf image/svg+xml image/vnd.microsoft.icon;
gzip_disable "MSIE [1-6]\.";

Filesystem Optimizations

These kernel settings tune how the system manages connections and memory, and can be added in /etc/sysctl.conf.

Ephemeral Ports

When Nginx is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral, port.
The IPv4 local port range defines the range of ports available for these connections. A common setting is net.ipv4.ip_local_port_range = 1024 65000.
The TCP FIN timeout controls the amount of time a port must be inactive before it can be reused for another connection. The default is often 60 seconds, but it can normally be safely reduced to 30 or even 15 seconds:
/etc/sysctl.conf
net.ipv4.tcp_fin_timeout = 15

Scale TCP Window

The TCP window scale option increases the receive window size allowed in Transmission Control Protocol above its former maximum value of 65,535 bytes. This TCP option, along with several others, is defined in IETF RFC 1323, which deals with long fat networks. It can be enabled with the net.ipv4.tcp_window_scaling = 1 setting.

Backlog Packets Before Drop

The net.ipv4.tcp_max_syn_backlog setting determines the number of half-open connection requests to keep in the backlog before the kernel starts dropping them. A sane value is net.ipv4.tcp_max_syn_backlog = 3240000.

Close Connection on Missing Client Response

reset_timedout_connection on; allows the server to close the connection after a client stops responding. This frees up socket-associated memory.
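Note that reset_timedout_connection is an Nginx directive rather than a sysctl setting; a minimal placement sketch:
/etc/nginx/nginx.conf
http {
    reset_timedout_connection on;   # reset connections to clients that stopped responding
}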

File Descriptors

File descriptors are operating system resources used to handle things such as connections and open files. Nginx can use up to two file descriptors per connection. For example, if it is proxying, there is generally one file descriptor for the client connection and another for the connection to the proxied server, though this ratio is much lower if HTTP keepalives are used. For a system serving a large number of connections, these settings may need to be adjusted.
fs.file-max defines the system-wide limit for file descriptors. nofile defines the user file descriptor limit, set in the /etc/security/limits.conf file.
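An illustrative /etc/sysctl.conf entry (the value is an assumption; size it to your expected connection volume):
/etc/sysctl.conf
fs.file-max = 70000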
/etc/security/limits.conf
# The leading * is the domain field, applying the limit to all users
* soft nofile 4096
* hard nofile 4096

Error Logs

error_log logs/error.log warn; defines the location and the different severity levels written to the error log. Setting a certain log level will cause all messages from that log level and higher to be logged. For example, the default level error will cause error, crit, alert, and emerg messages to be logged. If this parameter is omitted, then error is used by default.
  • emerg: Emergency situations where the system is in an unusable state.
  • alert: Severe situation where action is needed promptly.
  • crit: Important problems that need to be addressed.
  • error: An error has occurred. Something was unsuccessful.
  • warn: Something out of the ordinary happened, but not a cause for concern.
  • notice: Something normal, but worth noting has happened.
  • info: An informational message that might be nice to know.
  • debug: Debugging information that can be useful to pinpoint where a problem is occurring.
Access logs use the log_format directive to configure the format of logged messages, and the access_log directive to specify the location of the log and the format to use.
/etc/nginx/nginx.conf
http {
    log_format compression '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $body_bytes_sent '
                           '"$http_referer" "$http_user_agent" "$gzip_ratio"';
    server {
        gzip on;
        access_log /spool/logs/nginx-access.log compression;
    }
}

Conditional Logging

Conditional logging can be used if the system administrator only wants to log certain requests. The example below excludes both 2xx and 3xx HTTP status codes from the log:
/etc/nginx/nginx.conf
map $status $loggable {
    ~^[23]  0;
    default 1;
}
# The map only sets a variable; apply it with the if= parameter
# (the log path and format name here are illustrative):
access_log /var/log/nginx/access.log combined if=$loggable;

Turning off Logging Completely

Logging can be turned off completely if you have an alternative logging methodology or if you don’t care about logging any of the requests to the server. Turning off logging can be performed with the following server directives:
/etc/nginx/nginx.conf
server {
    listen       80;
    server_name  example.com;
    access_log off;
    # error_log off; does not disable error logging; it logs to a file named "off".
    # Direct errors to /dev/null at a high severity level instead:
    error_log /dev/null crit;
}

Activity Monitoring

One can also set up activity monitoring to see JSON status responses in real time. With a configuration like the sketch below, the webpage status.html located at /usr/share/nginx/html can be requested by the URL http://127.0.0.1/status.html.
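A minimal sketch of such a configuration, assuming the NGINX Plus status module available at the time (the open source stub_status module is a plain-text alternative):
/etc/nginx/nginx.conf
server {
    listen 127.0.0.1;
    root /usr/share/nginx/html;

    location = /status.html { }   # the dashboard page

    location /status {
        status;                   # NGINX Plus JSON status API (assumes NGINX Plus)
    }
}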
You could also utilize Linode Longview to view these statistics. Longview is a system-level statistics collection and graphing service, powered by the Longview open source software agent that can be installed onto any Linux system. The Longview agent collects system statistics and sends them to Linode, where the data is stored and presented in meaningful ways.

Example Files

Several tweaks have now been made across three files to improve Nginx performance on your system. Full snippets of the files are included below.

sysctl.conf

/etc/sysctl.conf
net.core.somaxconn = 65536
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_window_scaling = 1 
net.ipv4.tcp_max_syn_backlog = 3240000

limits.conf

/etc/security/limits.conf
* soft nofile 4096
* hard nofile 4096

nginx.conf

nginx.conf
pid /var/run/nginx.pid;
worker_processes  2;
 
events {
    worker_connections   65536;
    use epoll;
    multi_accept on;
}
 
http {
    keepalive_timeout 65;
    keepalive_requests 100000;
    sendfile         on;
    tcp_nopush       on;
    tcp_nodelay      on;
  
    client_body_buffer_size    128k;
    client_max_body_size       10m;
    client_header_buffer_size    1k;
    large_client_header_buffers  4 4k;
    output_buffers   1 32k;
    postpone_output  1460;
  
    client_header_timeout  3m;
    client_body_timeout    3m;
    send_timeout           3m;
  
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 5;
    open_file_cache_errors off;
  
    gzip on;
    gzip_min_length  1000;
    gzip_buffers     4 4k;
    gzip_types       text/html application/x-javascript text/css application/javascript text/javascript text/plain text/xml application/json application/vnd.ms-fontobject application/x-font-opentype application/x-font-truetype application/x-font-ttf application/xml font/eot font/opentype font/otf image/svg+xml image/vnd.microsoft.icon;
    gzip_disable "MSIE [1-6]\.";

    # [ debug | info | notice | warn | error | crit | alert | emerg ] 
    error_log  /var/log/nginx.error_log  warn;
  
    log_format main      '$remote_addr - $remote_user [$time_local]  '
                         '"$request" $status $bytes_sent '
                         '"$http_referer" "$http_user_agent" '
                         '"$gzip_ratio"';

    log_format download  '$remote_addr - $remote_user [$time_local]  '
                         '"$request" $status $bytes_sent '
                         '"$http_referer" "$http_user_agent" '
                         '"$http_range" "$sent_http_content_range"';
  
    map $status $loggable {
        ~^[23]  0;
        default 1;
    } 
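    # To apply the map, add the if= parameter to access_log, e.g.:
    # access_log /var/log/nginx.access_log main if=$loggable;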
  
    server {
        listen        127.0.0.1;
        server_name   127.0.0.1;
        root         /var/www/html;
        access_log   /var/log/nginx.access_log  main;
   
        location / {
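            # proxy_pass should point at your backend application server;
            # as written, 127.0.0.1 is this server's own listen address and would loop.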
            proxy_pass         http://127.0.0.1/;
            proxy_redirect     off;
            proxy_set_header   Host             $host;
            proxy_set_header   X-Real-IP        $remote_addr;
            proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_connect_timeout      90;
            proxy_send_timeout         90;
            proxy_read_timeout         90;
            proxy_buffer_size          4k;
            proxy_buffers              4 32k;
            proxy_busy_buffers_size    64k;
            proxy_temp_file_write_size 64k;
            proxy_temp_path            /etc/nginx/proxy_temp;
        }
   
        location ~* \.(woff|eot|ttf|svg|mp4|webm|jpg|jpeg|png|gif|ico|css|js)$ {
            expires 365d;
        }
    }
}
