Monday, January 14, 2019

MySQL multiple instances on CentOS 7

mkdir -p /data/db/mysql
./mysql_install_db --datadir=/data/db/mysql/mysql-instance-01
./mysql_install_db --datadir=/data/db/mysql/mysql-instance-02
./mysql_install_db --datadir=/data/db/mysql/mysql-instance-03
./mysql_install_db --datadir=/data/db/mysql/mysql-instance-04
cp /usr/share/mysql/my-medium.cnf /data/db/mysql/mysql-instance-01/my.cnf
cp /usr/share/mysql/my-medium.cnf /data/db/mysql/mysql-instance-02/my.cnf
cp /usr/share/mysql/my-medium.cnf /data/db/mysql/mysql-instance-03/my.cnf
cp /usr/share/mysql/my-medium.cnf /data/db/mysql/mysql-instance-04/my.cnf
vi /data/db/mysql/mysql-instance-01/my.cnf
[mysqld]
port = 44001
socket = /var/lib/mysql/mysql-instance-01.sock
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
innodb_file_per_table
datadir = /data/db/mysql/mysql-instance-01/
log-error = /data/db/mysql/mysql-instance-01/mysql-instance-01.log
innodb_data_home_dir = /data/db/mysql/mysql-instance-01/
innodb_log_group_home_dir = /data/db/mysql/mysql-instance-01/
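The remaining instances reuse the same template; only the port, socket, data directory and log paths change. As a sketch, instance-02 could look like this (port 44002 and the socket name are assumptions, any free port and unique socket path will do):
[mysqld]
port = 44002
socket = /var/lib/mysql/mysql-instance-02.sock
datadir = /data/db/mysql/mysql-instance-02/
log-error = /data/db/mysql/mysql-instance-02/mysql-instance-02.log
innodb_data_home_dir = /data/db/mysql/mysql-instance-02/
innodb_log_group_home_dir = /data/db/mysql/mysql-instance-02/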
chown -R mysql:mysql /data/db/mysql
vim /usr/lib/systemd/system/mariadb01.service
[Unit]
Description=MariaDB database server instance-01
After=syslog.target
After=network.target
[Service]
Type=simple
User=mysql
Group=mysql
ExecStart=/usr/bin/mysqld_safe --defaults-file=/data/db/mysql/mysql-instance-01/my.cnf --user=mysql --basedir=/usr
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=300
# Place temp files in a secure directory, not /tmp
PrivateTmp=true
[Install]
WantedBy=multi-user.target
systemctl start mariadb01
systemctl enable mariadb01
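The other instances need their own unit files, which differ only in the Description and the --defaults-file path. Assuming you create mariadb02.service through mariadb04.service the same way, start and enable them likewise:
systemctl start mariadb02
systemctl enable mariadb02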
Connect command:
mysql -P 44001 -h 127.0.0.1 -u root
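Since each instance also defines its own socket in my.cnf, you can connect through the socket instead of TCP:
mysql -S /var/lib/mysql/mysql-instance-01.sock -u root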

Monday, January 7, 2019

iptables - NAT

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [78:15453]

-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT


-A INPUT -s XXXXXX -j ACCEPT


#-A INPUT -p tcp -m state --state NEW -m tcp --dport 53714 -j ACCEPT
#-A INPUT -p tcp -m state --state NEW -m tcp --dport 3389 -j ACCEPT


-A FORWARD -i eno16777984 -o eno33557248 -p tcp -m tcp --dport 3389 --tcp-flags FIN,SYN,RST,ACK SYN -m conntrack --ctstate NEW -j ACCEPT
-A FORWARD -i eno16777984 -o eno33557248 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eno33557248 -o eno16777984 -m conntrack --ctstate NEW,RELATED,ESTABLISHED -j ACCEPT

-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited



COMMIT



*nat
:PREROUTING ACCEPT [69:4625]
:INPUT DROP [1:120]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -i eno16777984 -p tcp -m tcp --dport 3389 -j DNAT --to-destination 10.10.10.10
-A POSTROUTING ! -d 10.10.10.0/24 -o eno16777984 -j SNAT --to-source YYYYYYY
-A POSTROUTING ! -d 10.10.10.0/24 -o eno16777984 -j MASQUERADE

-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 53714 -j ACCEPT



COMMIT
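None of the DNAT/FORWARD rules will pass traffic unless IP forwarding is enabled on the gateway. A minimal sketch for CentOS 7, assuming the ruleset above is saved as /etc/sysconfig/iptables (the default location used by the iptables-services package):
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
iptables-restore < /etc/sysconfig/iptables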

Wednesday, January 2, 2019

FreeBSD TCP Performance Tuning

To enable RFC 1323 Window Scaling and increase the TCP window size to 1 MB on FreeBSD, add the following lines to /etc/sysctl.conf and reboot.
net.inet.tcp.rfc1323=1
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendspace=1048576
net.inet.tcp.recvspace=1048576
You can make these changes on the fly via the sysctl command. As always, the '$' represents the shell prompt and should not be typed.
$ sudo sysctl net.inet.tcp.rfc1323=1
$ sudo sysctl kern.ipc.maxsockbuf=16777216
$ sudo sysctl net.inet.tcp.sendspace=1048576
$ sudo sysctl net.inet.tcp.recvspace=1048576
In addition, FreeBSD may have a low number of network memory buffers (mbufs) by default. You can view the current mbuf configuration by running netstat -m. If your mbuf value is too low, it may cause your system to become unresponsive to the network. Increase the number of mbufs by adding the line below to /boot/loader.conf and rebooting.
kern.ipc.nmbclusters="16384"
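To check the configured limit and the current usage before and after the change (the exact netstat -m output format varies between FreeBSD releases):
$ sysctl kern.ipc.nmbclusters
$ netstat -m | grep 'mbuf clusters'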

Tuning FreeBSD to serve 100-200 thousand connections

I also use nginx as a reverse proxy and load balancer in my project.

mbuf clusters

FreeBSD stores network data in mbuf clusters of 2 KB each, but only about 1500 bytes of each cluster are actually used (the size of an Ethernet frame).
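As a rough back-of-the-envelope check: each 2 KB cluster carries about 500 bytes of overhead per full-size packet (2048 - 1500), and the 16384 clusters suggested in the previous section amount to only about 32 MB of packet buffer (16384 x 2 KB), so a box expected to hold 100-200 thousand connections will usually need kern.ipc.nmbclusters raised well beyond that.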


FreeBSD - How to reduce TIME_WAIT connections

Routinely, I would run "netstat -an" on a FreeBSD box, a DNS server, and the screen would be showered with hundreds of "TIME_WAIT" connections. It seems some malware-infected clients are querying the server, leaving terminated TCP sockets waiting to be shut down, but not fast enough to be efficient. Fortunately, the number of accumulated TIME_WAIT sockets is insignificant.

To reduce the number of waiting sockets, tune the system value

net.inet.tcp.msl

to a shorter time. By default, connections in the TIME_WAIT state wait at least 60 seconds (if no reply from the destination confirms that the connection can be terminated) before the connection is closed. This value is based on RFC 793, but that RFC was drafted in 1981; IMHO, the equipment and bandwidth of that era were nowhere near as fast as today's, which makes 60 seconds an unnecessarily long wait.

The TIME_WAIT duration is twice the net.inet.tcp.msl value, which is expressed in milliseconds: the default of 30000 therefore means 60000 ms (2 x 30000), i.e. 60 seconds. To reduce the wait to 15 seconds, set net.inet.tcp.msl to 7500.
E.g.
sysctl net.inet.tcp.msl=7500

This will cause TIME_WAIT sockets to be terminated after waiting 15 seconds, if no reply from the destination confirms that the connection can be terminated.

For more info, refer to RFC 793 (search for "Maximum Segment Lifetime").
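To make the change persistent across reboots and to check how many TIME_WAIT sockets are currently held, the following works on FreeBSD:
echo 'net.inet.tcp.msl=7500' >> /etc/sysctl.conf
netstat -an -p tcp | grep -c TIME_WAIT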

TCP states

Most of the 11 TCP states are pretty easy to understand and most programmers know what they mean:
  • CLOSED: There is no connection.
  • LISTEN: The local end-point is waiting for a connection request from a remote end-point i.e. a passive open was performed.
  • SYN-SENT: The first step of the three-way connection handshake was performed. A connection request has been sent to a remote end-point i.e. an active open was performed.
  • SYN-RECEIVED: The second step of the three-way connection handshake was performed. An acknowledgement for the received connection request as well as a connection request has been sent to the remote end-point.
  • ESTABLISHED: The third step of the three-way connection handshake was performed. The connection is open.
  • FIN-WAIT-1: The first step of an active close (four-way handshake) was performed. The local end-point has sent a connection termination request to the remote end-point.
  • CLOSE-WAIT: The local end-point has received a connection termination request and acknowledged it e.g. a passive close has been performed and the local end-point needs to perform an active close to leave this state.
  • FIN-WAIT-2: The remote end-point has sent an acknowledgement for the previously sent connection termination request. The local end-point waits for an active connection termination request from the remote end-point.
  • LAST-ACK: The local end-point has performed a passive close and has initiated an active close by sending a connection termination request to the remote end-point.
  • CLOSING: The local end-point is waiting for an acknowledgement for a connection termination request before going to the TIME-WAIT state.
  • TIME-WAIT: The local end-point waits for twice the maximum segment lifetime (MSL) to pass before going to CLOSED to be sure that the remote end-point received the acknowledgement.
Most people working with high-level programming languages actually only really know the states CLOSED, LISTEN and ESTABLISHED. Using netstat, the chances are that you will not see connections in the SYN_SENT, SYN_RECV, FIN_WAIT_1, LAST_ACK or CLOSING states. A TCP end-point usually stays in these states for only a very short period of time, and if many connections get stuck in these states for a longer time, something really bad has happened.
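A quick way to see how many connections sit in each state is to let netstat count them (a rough sketch; on FreeBSD use netstat -an -p tcp, on Linux netstat -ant; the state is the last column in both cases):
netstat -an | grep ^tcp | awk '{print $NF}' | sort | uniq -c | sort -rn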
FIN_WAIT_2, TIME_WAIT and CLOSE_WAIT are more common. They are all related to the connection termination four-way handshake. Here is a short overview of the states involved:


[Diagram: TCP connection termination states on both end-points]
The upper part shows the states on the end-point initiating the termination; the lower part shows the states on the other end-point.
So the initiating end-point (i.e. the client) sends a termination request to the server and waits for an acknowledgement in state FIN-WAIT-1. The server sends an acknowledgement and goes in state CLOSE_WAIT. The client goes into FIN-WAIT-2 when the acknowledgement is received and waits for an active close. When the server actively sends its own termination request, it goes into LAST-ACK and waits for an acknowledgement from the client. When the client receives the termination request from the server, it sends an acknowledgement and goes into TIME_WAIT and after some time into CLOSED. The server goes into CLOSED state once it receives the acknowledgement from the client.

FIN_WAIT_2

If many sockets which were connected to a specific remote application end up stuck in this state, it usually indicates that the remote application either always dies unexpectedly when in the CLOSE_WAIT state or just fails to perform an active close after the passive close.
The timeout for sockets in the FIN-WAIT-2 state is defined with the parameter tcp_fin_timeout (net.ipv4.tcp_fin_timeout on Linux). You should set it to a value high enough that, if the remote end-point is going to perform an active close, it has time to do so. On the other hand, sockets in this state do use some memory (even though not much), and too many sockets stuck in this state for too long can lead to memory exhaustion.
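For example, on Linux the current value can be inspected and lowered at runtime (60 seconds is the usual default):
sysctl net.ipv4.tcp_fin_timeout
sysctl -w net.ipv4.tcp_fin_timeout=30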

TIME_WAIT

The TIME-WAIT state means that, from the local end-point's point of view, the connection is closed but we're still waiting before accepting a new connection, in order to prevent delayed duplicate packets from the previous connection from being accepted by the new one.
In this state, TCP blocks any second connection between these address/port pairs until the TIME_WAIT state is exited after waiting for twice the maximum segment lifetime (MSL).

In most cases, seeing many TIME_WAIT connections doesn't indicate any issue. You only have to start worrying when the number of TIME_WAIT connections causes performance problems or memory exhaustion.

CLOSE_WAIT

If you see that connections related to a given process tend to always end up in the CLOSE_WAIT state, it means that this process does not perform an active close after the passive close. When you write a program communicating over TCP, you should detect when the connection was closed by the remote host and close the socket appropriately. If you fail to do this, the socket will stay in CLOSE_WAIT until the process itself disappears.
So basically, CLOSE_WAIT means the operating system knows that the remote application has closed the connection and is waiting for the local application to do so as well. You shouldn't try to tune the TCP parameters to solve this; instead, check the application owning the connection on the local host. Since there is no CLOSE_WAIT timeout, a connection can stay in this state forever (or at least until the program eventually closes the connection, or the process exits or is killed).
If you cannot fix the application or have it fixed, the solution is to kill the process holding the connection open. Of course, there is still a risk of losing data, since the local end-point may still have buffered data to send. Also, if many applications run in the same process (as is the case for Java Enterprise applications), killing the owning process is not always an option.
I haven't ever tried to force-close a CLOSE_WAIT connection using tcpkill, killcx or cutter, but if you can't kill or restart the process holding the connection, it might be an option.
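If you first need to identify which process owns the CLOSE_WAIT sockets, on Linux lsof can filter by TCP state, and on FreeBSD sockstat -c lists the owning processes of connected sockets (though without showing the TCP state):
lsof -nP -iTCP -sTCP:CLOSE_WAIT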