Back up your original configs before you start reconfiguring. Open your nginx.conf at /etc/nginx/nginx.conf with your favorite editor.
# you should set worker processes based on your CPU cores; nginx does not benefit from setting more than that
worker_processes auto; # recent versions calculate it automatically

# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FDs then the OS settings will be used, which is by default 2000
worker_rlimit_nofile 100000;

# only log critical errors
error_log /var/log/nginx/error.log crit;

# provides the configuration file context in which the directives that affect connection processing are specified
events {
    # determines how many clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each worker, essential for linux -- for testing environment
    use epoll;

    # accept as many connections as possible, may flood worker connections if set too low -- for testing environment
    multi_accept on;
}
# cache information about FDs, frequently accessed files
# can boost performance, but you need to test those values
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

# to boost I/O on HDD we can disable access logs
access_log off;

# copies data between one FD and another from within the kernel
# faster than read() + write()
sendfile on;

# send headers in one piece; it's better than sending them one by one
tcp_nopush on;

# don't buffer data sent, good for small data bursts in real time
tcp_nodelay on;

# reduce the data that needs to be sent over the network
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
gzip_disable "msie6";
# allow the server to close a connection on a nonresponding client; this frees up memory
reset_timedout_connection on;

# request timed out -- default 60
client_body_timeout 10;

# if the client stops responding, free up memory -- default 60
send_timeout 2;

# server will close the connection after this time -- default 75
keepalive_timeout 30;

# number of requests a client can make over keep-alive -- for testing environment
keepalive_requests 100000;
Now save the config and run one of the following commands:
nginx -s reload
/etc/init.d/nginx start|restart
If you wish to test the config first, you can run
nginx -t
/etc/init.d/nginx configtest
Just for Security Reasons
server_tokens off;
Nginx Simple DDoS Defense
This is far from a complete DDoS defense, but it can slow down some
small DDoS attacks. These values were tuned in a test environment; you should
choose your own.
# limit the number of connections per single IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

# limit the number of requests for a given session
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

# apply the zones defined above; here we limit the whole server
server {
limit_conn conn_limit_per_ip 10;
limit_req zone=req_limit_per_ip burst=10 nodelay;
}
# if the request body size is more than the buffer size, then the entire (or partial)
# request body is written into a temporary file
client_body_buffer_size 128k;

# header buffer size for the request header from client -- for testing environment
client_header_buffer_size 3m;

# maximum number and size of buffers for large headers to read from client request
large_client_header_buffers 4 256k;

# read timeout for the request body from client -- for testing environment
client_body_timeout 3m;

# how long to wait for the client to send a request header -- for testing environment
client_header_timeout 3m;
Now test the config again
nginx -t
/etc/init.d/nginx configtest
And then reload or restart nginx
nginx -s reload
/etc/init.d/nginx restart|reload
You can test this configuration with tsung; when you are satisfied with the results, press Ctrl+C, since it can run for hours.
Using NGINX and NGINX Plus to Fight DDoS Attacks
NGINX and NGINX Plus have a number of features that – in conjunction
with the characteristics of a DDoS attack mentioned above – can make
them a valuable part of a DDoS attack mitigation solution. These
features address a DDoS attack both by regulating the incoming traffic
and by controlling the traffic as it is proxied to backend servers.
Limiting the Rate of Requests
You can limit the rate at which NGINX and NGINX Plus accept incoming
requests to a value typical for real users. For example, you might
decide that a real user accessing a login page can only make a request
every two seconds. You can configure NGINX and NGINX Plus to allow a
single client IP address to attempt to log in only once every 2 seconds
(equivalent to 30 requests per minute):
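A sketch of such a configuration, consistent with the description that follows (the zone name one and the /login.html location come from the surrounding text; the 10m zone size is an assumption):

```nginx
# one request every 2 seconds = 30 requests per minute per client IP
limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;

server {
    # ...
    location /login.html {
        # enforce the rate defined in the "one" zone
        limit_req zone=one;
        # ...
    }
}
```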
The limit_req_zone directive configures a shared memory zone called one to store the state of requests for the specified key, in this case the client IP address ($binary_remote_addr). The limit_req directive in the location block for /login.html references the shared memory zone.
Limiting the Number of Connections
You can limit the number of connections that can be opened by a
single client IP address, again to a value appropriate for real users.
For example, you can allow each client IP address to open no more than
10 connections to the /store area of your website:
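A sketch of such a configuration, matching the description that follows (the zone name addr and the limit of 10 come from the surrounding text; the 10m zone size is an assumption):

```nginx
# track connection counts per client IP in a zone named "addr"
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    # ...
    location /store/ {
        # allow at most 10 concurrent connections per client IP
        limit_conn addr 10;
        # ...
    }
}
```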
The limit_conn_zone directive configures a shared memory zone called addr to store requests for the specified key, in this case (as in the previous example) the client IP address, $binary_remote_addr. The limit_conn directive in the location block for /store references the shared memory zone and sets a maximum of 10 connections from each client IP address.
Closing Slow Connections
You can close connections that are writing data too infrequently,
which can represent an attempt to keep connections open as long as
possible (thus reducing the server’s ability to accept new connections).
Slowloris is an example of this type of attack. The client_body_timeout directive controls how long NGINX waits between writes of the client body, and the client_header_timeout directive
controls how long NGINX waits between writes of client headers. The
default for both directives is 60 seconds. This example configures NGINX
to wait no more than 5 seconds between writes from the client for
either headers or body:
server {
client_body_timeout 5s;
client_header_timeout 5s;
...
}
Blacklisting IP Addresses
If you can identify the client IP addresses being used for an attack, you can blacklist them with the deny
directive so that NGINX and NGINX Plus do not accept their connections
or requests. For example, if you have determined that the attacks are
coming from the address range 123.123.123.1 through 123.123.123.16:
location / {
deny 123.123.123.0/28;
...
}
Or if you have determined that an attack is coming from client IP addresses 123.123.123.3, 123.123.123.5, and 123.123.123.7:
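In that case you can deny each address individually, for example:

```nginx
location / {
    deny 123.123.123.3;
    deny 123.123.123.5;
    deny 123.123.123.7;
    # ...
}
```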
If access to your website or application is allowed only from one or
more specific sets or ranges of client IP addresses, you can use the allow and deny
directives together to allow only those addresses to access the site or
application. For example, you can restrict access to only addresses in a
specific local network:
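A sketch of such a configuration, assuming the local network is 192.168.1.0/24 (the address range is a placeholder):

```nginx
location / {
    # permit only clients on the local network
    allow 192.168.1.0/24;
    # block everyone else
    deny all;
}
```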
Here, the deny all directive blocks all client IP addresses that are not in the range specified by the allow directive.
Using Caching to Smooth Traffic Spikes
You can configure NGINX and NGINX Plus to absorb much of the traffic
spike that results from an attack, by enabling caching and setting
certain caching parameters to offload requests from the back end. Some
of the helpful settings are:
The updating parameter to the proxy_cache_use_stale
directive tells NGINX that when it needs to fetch an update of a stale
cached object, it should send just one request for the update, and
continue to serve the stale object to clients who request it during the
time it takes to receive the update from the backend server. When
repeated requests for a certain file are part of an attack, this
dramatically reduces the number of requests to the backend servers.
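A sketch of a caching configuration that uses this parameter (the cache path, zone name mycache, and upstream name backend are placeholder assumptions):

```nginx
# cache storage on disk and a 10 MB zone for cache keys
proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;

server {
    location / {
        proxy_cache mycache;
        # serve stale content while a single request refreshes it
        proxy_cache_use_stale updating;
        proxy_pass http://backend;
    }
}
```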
The key defined by the proxy_cache_key directive usually consists of embedded variables (the default key, $scheme$proxy_host$request_uri, has three variables). If the value includes the $query_string variable, then an attack that sends random query strings can cause excessive caching. We recommend that you don’t include the $query_string variable in the key unless you have a particular reason to do so.
Blocking Requests
You can configure NGINX or NGINX Plus to block several kinds of requests:
Requests to a specific URL that seems to be targeted
Requests in which the User-Agent header is set to a value that does not correspond to normal client traffic
Requests in which the Referer header is set to a value that can be associated with an attack
Requests in which other headers have values that can be associated with an attack
For example, if you determine that a DDoS attack is targeting the URL /foo.php you can block all requests for the page:
location /foo.php {
deny all;
}
Or if you discover that DDoS attack requests have a User-Agent header value of foo or bar, you can block those requests.
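One way to do that is to test the User-Agent header with a regular expression (the values foo and bar come from the example above):

```nginx
location / {
    # return 403 for User-Agent values matching foo or bar
    if ($http_user_agent ~* foo|bar) {
        return 403;
    }
}
```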
The $http_name family of variables references request headers; in the example above, $http_user_agent references the User-Agent header. A similar approach can be used with other headers that have values that can be used to identify an attack.
Limiting the Connections to Backend Servers
An NGINX or NGINX Plus instance can usually handle many more
simultaneous connections than the backend servers it is load balancing.
With NGINX Plus, you can limit the number of connections to each backend
server. For example, if you want to limit NGINX Plus to establishing no
more than 200 connections to each of the two backend servers in the website upstream group:
upstream website {
server 192.168.100.1:80 max_conns=200;
server 192.168.100.2:80 max_conns=200;
queue 10 timeout=30s;
}
The max_conns parameter applied to each server specifies the maximum number of connections that NGINX Plus opens to it. The queue directive
limits the number of requests queued when all the servers in the
upstream group have reached their connection limit, and the timeout parameter specifies how long to retain a request in the queue.
Dealing with Range-Based Attacks
One method of attack is to send a Range header with a
very large value, which can cause a buffer overflow. For a discussion
of how to use NGINX and NGINX Plus to mitigate this type of attack in a
sample case, see Using NGINX and NGINX Plus to Protect Against CVE-2015-1635.
Handling High Loads
DDoS attacks usually result in a high traffic load. For tips on
tuning NGINX or NGINX Plus and the operating system to allow the system
to handle higher loads, see Tuning NGINX for Performance.
Identifying a DDoS Attack
So far we have focused on what you can use NGINX and NGINX Plus to
help alleviate the effects of a DDoS attack. But how can NGINX or
NGINX Plus help you spot a DDoS attack? The NGINX Plus Status module provides
detailed metrics about the traffic that is being load balanced to
backend servers, which you can use to spot unusual traffic patterns.
NGINX Plus comes with a status dashboard web page that graphically
depicts the current state of the NGINX Plus system (see the example at demo.nginx.com).
The same metrics are also available through an API, which you can use
to feed the metrics into custom or third-party monitoring systems where
you can do historical trend analysis to spot abnormal patterns and
enable alerting.
Summary
NGINX and NGINX Plus can be used as a valuable part of a DDoS
mitigation solution, and NGINX Plus provides additional features for
protecting against DDoS attacks and helping to identify when they are
occurring.