Sunday, December 18, 2016

Analyzing a DDoS Attack with Nginx Logs

In this short post I would like to show a few useful commands to use if you are experiencing a DDoS attack. In my case, Nginx is the front-end server. The access log format looks like this:

log_format main '$remote_addr - $remote_user [$time_local] "$host" "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" -> $upstream_response_time';

In the log file we’ll see something like this:

188.142.8.61 - - [14/Sep/2014:22:51:03 +0400] "www.mysite.com" "GET / HTTP/1.1" 200 519 "kiloccnp.com.vn/" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.191602; .NET CLR 3.5.191602; .NET CLR 3.0.191602)" "-" -> 0.003
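With this format, a few quick one-liners are usually enough to see where the traffic is coming from. This is only a sketch; access.log stands in for your actual Nginx access log, and the field numbers assume the log_format shown above:

# top client IPs by number of requests
awk '{print $1}' access.log | sort | uniq -c | sort -nr | head -20

# most requested URLs
awk -F'"' '{print $4}' access.log | sort | uniq -c | sort -nr | head -20

# requests per second, to spot when the flood started
awk '{print $4}' access.log | cut -d: -f2-4 | sort | uniq -c | sort -nr | head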

Friday, December 16, 2016

Using Apache Logs to Analyze a DDoS Attack

Source: http://pastebin.com/raw/MLHtJ7fQ

[root@kiloccnp~]# cat kilo.txt  | cut -d ' ' -f 9 | sort | uniq -c | sort -nr
    698 404
    691 HTTP/1.1"
    168 HTTP/1.0"
     27 403
[root@kiloccnp~]# grep " 404 " kilo.txt  | cut -d ' ' -f 7 | sort | uniq -c | sort -nr
    674 /
    672 "POST
     23 //
      1 /balancer?&data=
[root@kiloccnp~]# grep " 404 " kilo.txt   | cut -d '"' -f 6 | sort | uniq -c | sort -nr
    136 Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; SV1; .NET CLR 2.0.50727; InfoPath.2)
    132 Mozilla/5.0 (Windows; U; Windows NT 6.1; en; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)
    131 Mozilla/4.0 (compatible; MSIE 6.1; Windows XP)
    118 Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1
    117 Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)
    113 Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51
    109 Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)
    106 Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)
    102 Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/20090913 Firefox/3.5.3
     98 Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)
     94 Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)
     92 Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1
     21 Mozilla/5.0 (KHTML, like Gecko) Safari/537.36
      1 MyClient/1.0
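A natural next step is to see which client IPs are generating the traffic, and optionally block the worst offenders. A rough sketch, using the same kilo.txt log file as above (review the list carefully before dropping anything):

cut -d ' ' -f 1 kilo.txt | sort | uniq -c | sort -nr | head -20

# drop IPs with more than 500 requests (threshold chosen arbitrarily)
cut -d ' ' -f 1 kilo.txt | sort | uniq -c | sort -nr | awk '$1 > 500 {print $2}' | \
  while read ip; do iptables -A INPUT -s "$ip" -j DROP; done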

Wednesday, December 14, 2016

Install MySQL Enterprise Monitor

1. Install the MySQL Enterprise Monitor Service Manager

[root@Kilo-MySQL-Monitor ~]# unzip V790192-01-MySQL_Enterprise_Monitor_Service_Manager_3.3.1.zip
Archive:  V790192-01-MySQL_Enterprise_Monitor_Service_Manager_3.3.1.zip
 inflating: mysqlmonitor-3.3.1.1112-linux-x86_64-installer.bin  
 inflating: mysqlmonitor-3.3.1.1112-linux-x86_64-update-installer.bin  
 inflating: README_en.txt           
 inflating: READ_ME_ja.txt         
[root@Kilo-MySQL-Monitor ~]# chmod +x mysqlmonitor-3.3.1.1112-linux-x86_64-installer.bin
[root@Kilo-MySQL-Monitor ~]# ./mysqlmonitor-3.3.1.1112-linux-x86_64-installer.bin --mode text
Language Selection

MySQL Enterprise Backup & Restore - Full and Incremental

1. Create a bash script


#!/bin/bash

# creates and maintains MySQL Enterprise Backup (MEB) backups

# version string printed by usage(); placeholder value, adjust as needed
VERSION="meb-backup 1.0"

# prints usage
usage()
{
    echo "$VERSION"
    echo "
Usage: `basename $0` [command] [MEB options]

Commands:
    full                                make full backup
    incremental                         make incremental backup
    incremental-with-redo-log-only      make incremental backup with redo log only
    verify-to-tape                      verify backup images, then copy to tape
    prepare                             prepare backups
    remove-old                          remove old backups
    "
}

Tuesday, December 13, 2016

Full Backup using MySQL Enterprise Backup

Install the software

Before we can configure backups and the like, you’ll need to install the MySQL Enterprise Backup software:
$ tar xvzf meb-3.9.0-linux2.6-x86-64bit.tar.gz 
meb-3.9.0-linux2.6-x86-64bit/
meb-3.9.0-linux2.6-x86-64bit/bin/
meb-3.9.0-linux2.6-x86-64bit/bin/mysqlbackup
meb-3.9.0-linux2.6-x86-64bit/README.txt
meb-3.9.0-linux2.6-x86-64bit/LICENSE.mysql
meb-3.9.0-linux2.6-x86-64bit/manual.html
meb-3.9.0-linux2.6-x86-64bit/mysql-html.css
I then placed the mysqlbackup binary in my MySQL "bin" directory:
$ cp meb-3.9.0-linux2.6-x86-64bit/bin/mysqlbackup /usr/local/mysql/bin/
$ which mysqlbackup 
/usr/local/mysql/bin/mysqlbackup
Now that we've installed the software, we can go on and prepare our database for backup.

How to Back Up MySQL Server using MySQL Enterprise Backup (MEB)

 In order to be able to use MySQL Enterprise Backup to perform a backup of a MySQL Server instance, the following MySQL options must be specified in the configuration file of the server instance, under the [mysqld] section (substitute ... with any valid values):

datadir=...
innodb_data_home_dir=...
innodb_data_file_path=...
innodb_log_group_home_dir=...
innodb_log_files_in_group=...
innodb_log_file_size=...


Alternatively, you can use a separate configuration file that contains just the options above and pass it to MySQL Enterprise Backup.
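For reference, a minimal full-backup invocation looks roughly like this; the backup directory and credentials below are placeholders, not values from this setup:

mysqlbackup --defaults-file=/etc/my.cnf --user=root --password \
            --backup-dir=/backup/full backup-and-apply-log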

My.cnf examples

[root@kiloccnp ~]# cat /etc/my.cnf
# On Linux you can copy this file to /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options
# (@localstatedir@ for this installation) or to
# ~/.my.cnf to set user-specific options.

[mysqld]
datadir=/usr/local/mysql/data

#tmpdir=/var/log/mysqld/
tmpdir=/db/mytmp
log-error=/var/log/mysqld/mysqld.err

# as of MySQL 5.1.29, log-slow-queries is deprecated, use the 2 options below
#log-slow-queries=/var/log/mysqld/mysqld-slow.log
slow-query-log=1
slow-query-log-file=/var/log/mysqld/mysqld-slow.log
performance_schema_consumer_events_statements_history_long = ON

How to Automate Backups on Linux/UNIX Using MySQL Enterprise Backup (MEB)


Goal

Backups should be made regularly.  The easiest way to do so on Linux/UNIX is to write a cron job. In this article MEB users will find two crontab templates: one for weekly full backups and another for daily incremental backups.

Solution

Although MEB allows for the creation of incremental backups, it is still best practice to run full backups periodically.   Below you will find a schedule which makes weekly full backups and daily incremental backups.

A weekly full backup can be done using crontab entries like the following:
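As a rough sketch (the two script names are placeholders for wrappers around mysqlbackup; adjust paths and times to your environment):

# weekly full backup, every Sunday at 01:00
0 1 * * 0 /usr/local/bin/meb-full-backup.sh >> /var/log/meb-full.log 2>&1

# daily incremental backup, Monday through Saturday at 01:00
0 1 * * 1-6 /usr/local/bin/meb-incremental-backup.sh >> /var/log/meb-incr.log 2>&1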

Create example data in MySQL

1. Create tables 

DROP TABLE IF EXISTS `City`;
CREATE TABLE `City` (
  `ID` int(11) NOT NULL auto_increment,
  `Name` char(35) NOT NULL default '',
  `CountryCode` char(3) NOT NULL default '',
  `District` char(20) NOT NULL default '',
  `Population` int(11) NOT NULL default '0',
  PRIMARY KEY  (`ID`)
) ENGINE=NDBCLUSTER DEFAULT CHARSET=latin1;

INSERT INTO `City` VALUES (1,'Kabul','AFG','Kabol',1780000);
INSERT INTO `City` VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO `City` VALUES (3,'Herat','AFG','Herat',186800);

2. Shell script to insert rows
#!/bin/bash
max=1000
for (( i=11; i <= $max; ++i ))
do
    /usr/bin/mysql -uroot -p123456 -hA.A.A.B -e "use test; INSERT INTO City VALUES ($i,'Kabul','AFG','Kabol',1780000);"

done

Monday, December 12, 2016

Installing PHP and the Oracle Instant Client

PHP OCI8 is the PHP extension for connecting PHP to Oracle Database. OCI8 is open source and included with PHP. The name is derived from Oracle's C "call interface" API first introduced in version 8 of Oracle Database. PHP OCI8 links with Oracle C client libraries, which are included in the Oracle Database software and also in Oracle Instant Client.
Oracle Instant Client is a free set of easily installed libraries that allow programs to connect to local or remote Oracle Database instances. To use Instant Client an existing database is needed because Instant Client does not include one. Typically the database will be on another machine. If the database is local then Instant Client, although convenient and still usable, is generally not needed because PHP OCI8 can be built using the database libraries.

Install HP ArcSight


[root@Kilo~]# useradd kilo_arc

[root@Kilo~]# vim /etc/security/limits.d/90-nproc.conf


    # Default limit for number of user's processes to prevent
    # accidental fork bombs.
    # See rhbz #432903 for reasoning.
    * soft nproc 1024
    root soft nproc unlimited
    * soft nproc 10240

Sunday, December 11, 2016

Installing PDO_OCI and OCI8 PHP extensions on CentOS

I am currently working on a PHP project that requires using an Oracle server as the database.
This tutorial assumes that you have already installed PHP and the other packages (e.g. php-pdo) you normally need. It was also tested with an installation of Oracle 11g Express; I'm not sure if this will work for higher versions.

Dependencies

Development packages

$ sudo yum install php-pear php-devel zlib zlib-devel bc libaio glibc
$ sudo yum groupinstall "Development Tools"

InstantClient

Apache Web Server Security and Hardening Tips

1. How to hide Apache Version and OS Identity from Errors

When you install Apache from source or with a package installer like yum, error pages display the version of the Apache web server installed on your server along with the name of the operating system. They can also reveal information about the Apache modules installed on your server.
Show Apache Version
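The usual fix is two directives in the main Apache configuration file (httpd.conf); these are standard Apache directives rather than anything specific to this setup:

ServerSignature Off
ServerTokens Prod

Restart Apache afterwards, e.g. service httpd restart.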

Friday, December 9, 2016

Troubleshoot with Apache Logs

$ cat access.log | cut -d ' ' -f 9 | sort | uniq -c | sort -nr
     941 200
     292 500
     290 404
     50 401
     20 400

$ grep " 404 " access.log | cut -d ' ' -f 7 | sort | uniq -c | sort -nr
     17 /manager/html
      3 /robots.txt
      3 /phpMyAdmin/scripts/setup.php
      3 //myadmin/scripts/setup.php
      3 /favicon.ico

$ cat access.log.1 | cut -d '"' -f 6 | sort | uniq -c | sort -nr

Install PHP & Apache from source code

[root@kiloccnp httpd-2.4.23]# yum install libxml2-static.x86_64 libxml2.x86_64 libxml2-devel.x86_64
[root@kiloccnp php-5.6.28]# yum install gsm.x86_64 gsm-devel.x86_64 gsm-tools.x86_64 libxslt.x86_64
[root@kiloccnp php-5.6.28]# yum install icu.x86_64 libicu.x86_64 libicu-devel.x86_64 libicu-doc.noarch
[root@kiloccnp php-5.6.28]# yum install unixODBC.x86_64 unixODBC-devel.x86_64
[root@kiloccnp curl]# yum install libcurl-devel.x86_64 libcurl.x86_64
[root@kiloccnp gd]# yum install dvipng.x86_64 libpng.x86_64 libpng-devel.x86_64 libpng-static.x86_64 gd.x86_64
http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html  (download for oci8 )
[root@kiloccnp ~]# rpm -Uvh oracle-instantclient11.2-basic-11.2.0.4.0-1.x86_64.rpm
[root@kiloccnp ~]# rpm -Uvh oracle-instantclient11.2-devel-11.2.0.4.0-1.x86_64.rpm
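The packages above only cover the dependencies; the configure and build steps themselves look roughly like this. The prefixes and the Instant Client library path are assumptions (and Apache 2.4 also expects the apr/apr-util development packages to be present):

# Apache httpd
cd httpd-2.4.23
./configure --prefix=/usr/local/apache2 --enable-so
make && make install

# PHP with OCI8 against Oracle Instant Client
cd ../php-5.6.28
./configure --prefix=/usr/local/php \
            --with-apxs2=/usr/local/apache2/bin/apxs \
            --with-oci8=instantclient,/usr/lib/oracle/11.2/client64/lib
make && make install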

Monday, December 5, 2016

CEPH STORAGE TECHNOLOGY

Ceph is an open-source solution for building distributed storage infrastructure that is stable, reliable, high-performance, and easy to scale. As software-defined storage, Ceph provides object, block, and file storage in a single unified platform. Ceph runs on cloud platforms with stable, modern hardware, helping to save costs, and is easy to use with the Linux kernel.

1. What is Ceph storage?

How to Configure a Proxmox VE 4 Multiple Node Cluster

Prerequisites
  • 3 Proxmox server
    pve1
        IP          : 192.168.1.114
        FQDN     : pve1.myproxmox.co
        SSH port: 22

    pve2
        IP          : 192.168.1.115
        FQDN     : pve2.myproxmox.co
        SSH port: 22

    pve3
        IP           : 192.168.1.116
        FQDN      : pve3.myproxmox.co
        SSH port : 22
  • 1 CentOS 7 server as NFS storage with IP 192.168.1.101
  • Date and time mus be synchronized on each Proxmox server.

General Iptables Firewall Rules

1. Delete all existing rules
# iptables -F

2. Set default chain policies
# iptables -P INPUT DROP
# iptables -P FORWARD DROP
# iptables -P OUTPUT DROP

3. Block a specific ip-address
BLOCK_THIS_IP="x.x.x.x"
# iptables -A INPUT -s "$BLOCK_THIS_IP" -j DROP

Apache Optimization: KeepAlive On or Off

Apache is the most widely used web server on the Internet. Knowing how to get the most out of Apache is very important for a systems administrator. Optimizing Apache is always a balancing act. It’s a case of sacrificing one resource in order to obtain savings in another.

What is KeepAlive

HTTP is a sessionless protocol. A connection is made to transfer a single file and closed once the transfer is complete. This keeps things simple, but it's not very efficient.
To improve efficiency something called KeepAlive was introduced. With KeepAlive the web browser and the web server agree to reuse the same connection to transfer multiple files.
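Enabling it is a matter of a few directives in httpd.conf; the values below are common starting points, not recommendations tuned for any particular server:

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5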

How to rotate apache logs

/var/log/httpd/access_log.* {
compress
copytruncate
create 644 root root
rotate 30
size 100K
}
/var/log/httpd/dummy-host.example.com-access_log.* {
compress
copytruncate
create 644 root root
rotate 30
size 10M
}

MySQL Engines: InnoDB vs. MyISAM

The 2 major types of table storage engines for MySQL databases are InnoDB and MyISAM. To summarize the differences of features and performance,

1. InnoDB is newer while MyISAM is older.
2. InnoDB is more complex while MyISAM is simpler.
3. InnoDB is more strict in data integrity while MyISAM is loose.
4. InnoDB implements row-level lock for inserting and updating while MyISAM implements table-level lock.
5. InnoDB has transactions while MyISAM does not.
6. InnoDB has foreign keys and relationship constraints while MyISAM does not.
7. InnoDB has better crash recovery, while MyISAM is poor at recovering data integrity after a system crash.
8. MyISAM has a full-text search index while InnoDB does not.

How to Reset MySQL Root Password

Method 1. How to Change MySQL Root Password Using mysqladmin Command?
You can change the MySQL root password using mysqladmin command as shown below. Please note that there is no space between -p and currentpassword.
# mysqladmin -u root -pCURRENTPASSWORD password 'NEWPASSWORD'
Once you’ve changed it make sure you can login with your new password successfully as shown below.
# mysql -u root -pnewpassword
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.5.13-rc-community MySQL Community Server (GPL)
mysql>

How to Shrink or reduce the size of LVM partitions in RHEL/CentOS

1. Check disk partitions size
# df -hT
Filesystem    Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 ext3 34G 6.6G 26G 21% /
/dev/mapper/VolGroup00-LogVol03 ext3 34G 29G 27G 52% /usr
/dev/mapper/VolGroup00-LogVol04 ext3 97G 89G 2.7G 98% /var
/dev/sda1 ext3 99M 27M 67M 29% /boot
tmpfs tmpfs 56G 0 56G 0% /dev/shm
/dev/mapper/VolGroup00-LogVol02 ext3 56G 9.1G   65G 13% /backup
2. Unmount the partition that you want to shrink or reduce
# umount /dev/mapper/VolGroup00-LogVol02
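The shrink itself then follows; a rough outline using /dev/VolGroup00/LogVol02 and a new size of 40G purely as example values. Run the filesystem check first and shrink the filesystem before the logical volume:

# e2fsck -f /dev/VolGroup00/LogVol02
# resize2fs /dev/VolGroup00/LogVol02 40G
# lvreduce -L 40G /dev/VolGroup00/LogVol02
# mount /dev/VolGroup00/LogVol02 /backup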

Difference between RAID and LVM

 1.  RAID: RAID is used for redundancy.
     LVM: LVM is a way in which you partition the hard disk logically, and it has its own advantages.
 2.  RAID: A RAID device is a physical grouping of disk devices in order to create a logical presentation of one device to an operating system, for redundancy, performance, or a combination of the two.
     LVM: LVM is a logical layer that can be manipulated in order to create and/or expand a logical presentation of a disk device to an operating system.
 3.  RAID: RAID is a way to create a redundant or striped block device, with redundancy, out of other physical block devices.
     LVM: LVM usually sits on top of RAID blocks or even standard block devices to accomplish the same result as partitioning, but it is much more flexible than partitions. You can create multiple volumes crossing multiple physical devices, remove physical devices without losing data, resize the volumes, create snapshots, etc.
 4.  RAID: RAID is either a software or a hardware technique to create data storage redundancy across multiple block devices, based on the required RAID levels.
     LVM: LVM is a software tool to manage a large pool of storage devices, making them appear as a single manageable pool of storage. LVM can be used to manage a large pool of what we call Just-a-Bunch-Of-Disks (JBOD), presenting them as a single logical volume, and thereby create various partitions for software RAID.
 5.  RAID: RAID is NOT any kind of data backup solution. It is a solution to prevent one of the SPOFs (Single Points of Failure), i.e. disk failure. By configuring RAID you are just providing an emergency substitute for the primary disk; it never means that you have configured a data backup.
     LVM: LVM is a disk management approach that allows us to create, extend, reduce, delete, or resize volume groups or logical volumes.

Managing Disk Space with LVM -04

Adding A Hard Drive And Removing Another One

We haven't used /dev/sdf until now. We will now create the partition /dev/sdf1 (25GB) and add that to our fileserver volume group.
kilo:~# fdisk /dev/sdf
kilo:~# pvcreate /dev/sdf1
  Physical volume "/dev/sdf1" successfully created
Add /dev/sdf1 to our fileserver volume group:
kilo:~# vgextend fileserver /dev/sdf1
kilo:~# vgdisplay
That's it. /dev/sdf1 has been added to the fileserver volume group. Now let's remove /dev/sdb1. Before we do this, we must copy all data on it to /dev/sdf1:
kilo:~# pvmove /dev/sdb1 /dev/sdf1
  /dev/sdb1: Moved: 1.9%
  . . .
  /dev/sdb1: Moved: 100.0%

Next we remove /dev/sdb1 from the fileserver volume group:
kilo:~# vgreduce fileserver /dev/sdb1
  Removed "/dev/sdb1" from volume group "fileserver"
kilo:~# vgdisplay
kilo:~#  pvremove /dev/sdb1 
kilo:~#  pvdisplay

You could now remove /dev/sdb from the system (if this was a real system and not a virtual machine).

Managing Disk Space with LVM -03

In this chapter we will learn how to resize our logical volume share, which has an ext3 filesystem. (I will show how to resize logical volumes with xfs and reiserfs filesystems further down in this tutorial.)
First we must unmount it:
kilo:~# umount /var/share
kilo:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              19G  665M   17G   4% /
tmpfs                  78M     0   78M   0% /lib/init/rw
udev                   10M   88K   10M   1% /dev
tmpfs                  78M     0   78M   0% /dev/shm
/dev/sda1             137M   17M  114M  13% /boot
/dev/mapper/fileserver-backup
                      5.0G  144K  5.0G   1% /var/backup
/dev/mapper/fileserver-media
                      1.0G   33M  992M   4% /var/media

Now let's enlarge share from 40GB to 50GB:
kilo:~# lvextend -L50G /dev/fileserver/share
  Extending logical volume share to 50.00 GB
  Logical volume share successfully resized

Until now we have enlarged only share, but not the ext3 filesystem on share. This is what we do now:
kilo:~# e2fsck -f /dev/fileserver/share
e2fsck 1.40-WIP (14-Nov-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/fileserver/share: 11/5242880 files (9.1% non-contiguous), 209588/10485760 blocks


Make a note of the total amount of blocks (10485760) because we need it when we shrink share later on.
kilo:~# resize2fs /dev/fileserver/share
resize2fs 1.40-WIP (14-Nov-2006)
Resizing the filesystem on /dev/fileserver/share to 13107200 (4k) blocks.
The filesystem on /dev/fileserver/share is now 13107200 blocks long.
Let's mount share:
kilo:~# mount /dev/fileserver/share /var/share
kilo:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              19G  665M   17G   4% /
tmpfs                  78M     0   78M   0% /lib/init/rw
udev                   10M   88K   10M   1% /dev
tmpfs                  78M     0   78M   0% /dev/shm
/dev/sda1             137M   17M  114M  13% /boot
/dev/mapper/fileserver-backup
                      5.0G  144K  5.0G   1% /var/backup
/dev/mapper/fileserver-media
                      1.0G   33M  992M   4% /var/media
/dev/mapper/fileserver-share
                       50G  180M   47G   1% /var/share


Managing Disk Space with LVM -02

Basically LVM looks like this:



You have one or more physical volumes (/dev/sdb1 - /dev/sde1 in our example), and on these physical volumes you create one or more volume groups (e.g. fileserver), and in each volume group you can create one or more logical volumes. If you use multiple physical volumes, each logical volume can be bigger than one of the underlying physical volumes (but of course the sum of the logical volumes cannot exceed the total space offered by the physical volumes).
It is a good practice to not allocate the full space to logical volumes, but leave some space unused. That way you can enlarge one or more logical volumes later on if you feel the need for it.
In this example we will create a volume group called fileserver, and we will also create the logical volumes /dev/fileserver/share, /dev/fileserver/backup, and /dev/fileserver/media (which will use only half of the space offered by our physical volumes for now - that way we can switch to RAID1 later on (also described in this tutorial)).

Managing Disk Space with LVM -01

The Linux Logical Volume Manager (LVM) is a mechanism for virtualizing disks. It can create "virtual" disk partitions out of one or more physical hard drives, allowing you to grow, shrink, or move those partitions from drive to drive as your needs change. It also allows you to create larger partitions than you could achieve with a single drive.

Traditional uses of LVM have included databases and company file servers, but even home users may want large partitions for music or video collections, or for storing online backups. LVM and RAID 1 can also be convenient ways to gain redundancy without sacrificing flexibility.

This article looks first at a basic file server, then explains some variations on that theme, including adding redundancy with RAID 1 and some things to consider when using LVM for desktop machines.

Sunday, December 4, 2016

Setting up a large (2TB+) hard disk drive on Linux

1. Physically install the hard drive.

After installing the drive, check that the BIOS detected your new drive with the correct size.
When I try to partition a drive like this with fdisk, I get an error similar to: "WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted."

2. Identify the device

Also check that the device has been detected by Linux and identify the device name.
root@turtle:~# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda5  /dev/sdb
This shows a hard drive 'sda' with existing partitions sda1, sda2 and sda5. It also shows a hard drive 'sdb' with no partitions detected. The new hard drive will have no partitions, so we can identify it as device 'sdb'.
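Because fdisk cannot handle disks over 2TB, the new disk needs a GPT label and must be partitioned with parted. A minimal sketch, assuming the new disk really is /dev/sdb and you want one partition spanning the whole disk:

# parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
(parted) quit
# mkfs.ext4 /dev/sdb1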


Bypassing Bad fstab Failure When Booting Linux

If the /etc/fstab file was created with errors, or the hardware configuration changes (such as adding hard disks), Linux may boot into a failure state. We can bypass the fstab failure by adding boot parameters to Linux. We can do this in two ways:

Method 1: Boot to single user mode 

When booting into single user mode, Linux does not mount anything from fstab. We can then remount / in read/write mode and edit /etc/fstab.
1. Boot Linux into single user mode
Press ESC in the GRUB menu and press 'e' to edit the GRUB entry. Add 'single' to the kernel parameters like this:
kernel /vmlinuz-2.6.32.21-166.fc12.i686 ro root=/dev/mapper/VolGroup-LogVol00 vga=792 single
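After booting into single user mode, the root filesystem is mounted read-only, so remount it before editing:

# mount -o remount,rw /
# vi /etc/fstab      # fix or comment out the broken entry
# reboot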

How To Reset Root Password On CentOS 7

The way to reset the root password on CentOS 7 is totally different from CentOS 6. Let me show you how to reset the root password in CentOS 7.
1 – In the GRUB boot menu, select the entry you want to boot.
2 – Press 'e' to edit the selected entry.

Reset Your Forgotten Root Password On RHEL 7

Sometimes you forget stuff like meetings, seminars, passwords, etc. I do. But forgetting a password to a server with no easy way to reset it while locked out is worse, and Red Hat servers are one such system. If you forget the root password to your RHEL 7 server, it can seem almost impossible to reset it while you're locked out.

At the boot menu, press e to edit the existing kernel. Then go to the kernel line (the line starting with linux16).
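From there, the usual RHEL 7 procedure is to break into the initramfs and reset the password; a condensed sketch (the exact kernel line differs per system):

1. Append rd.break to the end of the linux16 line, then press Ctrl+x to boot.
2. At the switch_root prompt:
   mount -o remount,rw /sysroot
   chroot /sysroot
   passwd root
   touch /.autorelabel     # required when SELinux is enabled
   exit
   exit
3. The system continues booting with the new root password.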

How To Setup MariaDB Galera Cluster

MariaDB is a relational database management system (RDBMS) and  MariaDB Galera Cluster is a synchronous multi-master cluster for MariaDB. It is available on Linux only, and only supports the XtraDB/InnoDB storage engines. This article explains how to setup MariaDB Galera Cluster 10.0 with 2 nodes running on CentOS 6.5 x86_64 resulting in a HA (high-availability) database cluster.

How to Deploy and Configure MaxScale for SQL Load Balancing with Read-Write Split

There are two models of load balancing: transport and application layer. HAProxy is a great TCP load balancer, but its lack of SQL awareness effectively limits its ability to address certain scaling issues in distributed database environments. In the open source world, there have been a few SQL-aware load balancers, namely MySQL Proxy, ProxySQL and MaxScale, but they all seemed to be in beta status and unfit for production use. So we were pretty excited when the MariaDB team released a GA version of MaxScale earlier this year. In this blog, we'll have a look at MaxScale and see how it compares with HAProxy.

Proxmox VE- One Public IP

Proxmox VE - One Public IP
This guide will show you how to set up Proxmox with only one public IP. We will configure an extra bridge interface and make sure VM traffic is NATed. I have a few dedicated servers, some of which run Proxmox. Most of them, however, have only a few IPs, so the VMs in Proxmox cannot all have a public IP. For most of them that is not a problem; if needed, I run a proxy or set up iptables to forward ports to the VMs.
This guide was tested on a machine running Proxmox version 3.2.

Proxmox VE_part 1

Proxmox VE (virtual environment) is a distribution based on Debian Etch (x86_64); it provides an open-source virtualization platform for running virtual machines (OpenVZ and KVM) and comes with a powerful, web-based control panel (it includes a web-based graphical console that you can use to connect to the virtual machines). With Proxmox VE, you can even create a cluster of virtualization hosts and create/control virtual machines on remote hosts from the control panel. Proxmox VE also supports live migration of virtual machines from one host to the other. This guide shows how you can use Proxmox VE to control KVM and OpenVZ virtual machines and how to create a small computing cloud with it.

Wednesday, November 16, 2016

Creating a Local Yum Repository Using an ISO Image

Sometimes our internal servers cannot access the Internet. We can still create a local repository so that the yum command can be used.

1. Transfer the removable storage to the system on which you want to create a local yum repository, and copy the DVD image to a directory in a local file system.

2. Create a suitable mount point, for example /mnt/ISO.

3. Use the mount command to mount the ISO:
[root@kilo ~]# mount -t iso9660 -o loop V77197-01-OracleLinux-6U7.iso /mnt/ISO/   

4. In the /etc/yum.repos.d directory, create or edit a repository file with an entry like this:

[OL67]
name=Oracle Linux
baseurl=file:///mnt/ISO
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
gpgcheck=1
enabled=1


5. Clean up the yum cache
 yum clean all
 
6. Test that you can use yum to access the repository. 
 
yum repolist 



Monday, November 14, 2016

Blocking users from accessing websites with Squid Proxy

Add the two lines below to the squid.conf file:
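A minimal sketch of the usual ACL approach (the domain is only an example):

acl blocked_sites dstdomain .facebook.com
http_access deny blocked_sites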

After saving squid.conf, restart the squid service:
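For example:

# service squid restart

or, to reload the configuration without dropping client connections: squid -k reconfigure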

How to start / stop / restart / reload iptables on CentOS 7 / RHEL 7

Step 1 : Install iptables-services

yum install iptables-services

Step 2 : Manage iptables with systemctl

Use the syntax given below:
systemctl [stop|start|restart|reload] iptables
Examples:
To start iptables
systemctl start iptables
 
To stop iptables
systemctl stop iptables

Friday, November 11, 2016

Some basic commands to check a server under a DDoS attack

A distributed denial-of-service attack (DDoS – Distributed Denial of Service) is a type of attack that overloads a computer system or network so that it can no longer provide its service or has to stop operating. In a DDoS attack, the target server is flooded with requests from an enormous number of connections.
When the number of requests becomes too large, the server is overloaded and can no longer process them. As a result, users cannot reach the services on the websites under attack.
I would like to reshare an article from BKNS that introduces some basic commands to check a server in this situation.
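A small sketch of the kind of checks typically used in this situation (the port is only an example):

# number of connections per client IP, highest first
netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head

# connections to port 80 only
netstat -an | grep ':80 ' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head

# number of half-open (SYN_RECV) connections, a hint of a SYN flood
netstat -an | grep -c SYN_RECV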

Configuring Nginx redirects

Redirect non-WWW to WWW

Rather than editing the main Nginx configuration file (nginx.conf), edit the configuration file for each domain in the /etc/nginx/conf.d/ folder.

Single domain

server {
        server_name example.com;
        return 301 $scheme://www.example.com$request_uri;
}

All domains
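A sketch of the usual catch-all server block for redirecting every non-www domain to its www equivalent; the regex follows common Nginx examples and should be verified against your own setup:

server {
        server_name ~^(?!www\.)(?<domain>.+)$;
        return 301 $scheme://www.$domain$request_uri;
}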

Protecting a directory in Nginx

When using Apache, we usually protect a directory with .htaccess and .htpasswd files. However, Nginx does not support .htaccess. Follow the Basic HTTP Authentication steps below to protect a directory in Nginx.
Goal
Protect the directory http://example.com/test/, whose path on the server is /home/example.com/public_html/test/, using the Nginx configuration file /etc/nginx/conf.d/example.com.conf.

1. Create the password file
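A minimal sketch of the remaining steps, assuming the paths from the goal above and the htpasswd tool from the httpd-tools package:

# yum install httpd-tools
# htpasswd -c /home/example.com/.htpasswd kilo

Then reference the password file in a location block of /etc/nginx/conf.d/example.com.conf and reload Nginx (nginx -s reload):

location /test/ {
    auth_basic "Restricted";
    auth_basic_user_file /home/example.com/.htpasswd;
}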

Tuning NGINX Configuration

The following are some NGINX directives that can impact performance. As stated above, we only discuss directives that are safe for you to adjust on your own. We recommend that you not change the settings of other directives without direction from the NGINX team.

Worker Processes

NGINX can run multiple worker processes, each capable of processing a large number of simultaneous connections. You can control the number of worker processes and how they handle connections with the following directives:
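A sketch of those directives with typical values (generic starting points, not tuned recommendations):

worker_processes auto;

events {
    worker_connections 1024;
    multi_accept on;
}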

Creating NGINX Rewrite Rules

Comparing the return, rewrite, and try_files Directives

The two directives for general‑purpose NGINX rewrite are return and rewrite, and the try_files directive is a handy way to direct requests to application servers. Let’s review what the directives do and how they differ.

The return Directive

The return directive is the simpler of the two general‑purpose directives and for that reason we recommend using it instead of rewrite when possible (more later about the why and when). You enclose the return in a server or location context that specifies the URLs to be rewritten, and it defines the corrected (rewritten) URL for the client to use in future requests for the resource.
Here’s a very simple example that redirects clients to a new domain name:
server {
    listen 80;
    listen 443 ssl;
    server_name www.old-name.com;
    return 301 $scheme://www.new-name.com$request_uri;
}
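For comparison, the rewrite directive matches the request URI against a regular expression and can change it internally or issue a redirect. A small illustrative example with made-up paths:

location /download/ {
    rewrite ^/download/(.*)/media/(.*)\..*$ /download/$1/mp3/$2.mp3 last;
}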

NGINX Tuning 1

Back up your original configs, and then you can start reconfiguring them. You will need to open nginx.conf at /etc/nginx/nginx.conf with your favorite editor.


# you must set worker processes based on your CPU cores; nginx does not benefit from setting more than that
worker_processes auto; # recent versions calculate it automatically

# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FDs then the OS setting will be used, which is 2000 by default
worker_rlimit_nofile 100000;

# only log critical errors
error_log /var/log/nginx/error.log crit;

# provides the configuration file context in which the directives that affect connection processing are specified.
events {
    # determines how much clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

Auto login root user at system start in Kali linux

And here are the simple steps to do it. Open and edit the file called /etc/gdm3/daemon.conf.
root@kali:~# leafpad /etc/gdm3/daemon.conf
In the daemon section, uncomment the 2 lines for automatic login. It should finally look like this:
[daemon]
# Enabling automatic login
  AutomaticLoginEnable = true
  AutomaticLogin = root
Done. Now reboot and enjoy.

Linux ss command to monitor network connections

ss - socket statistics

In a previous tutorial we saw how to use the netstat command to get statistics on network/socket connections. However the netstat command has long been deprecated and replaced by the ss command from the iproute suite of tools.
The ss command is capable of showing more information than netstat and is faster. The netstat command reads various /proc files to gather information; however, this approach becomes slow when there are lots of connections to display.
The ss command gets its information directly from kernel space. The options used with the ss commands are very similar to netstat making it an easy replacement.
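A few quick examples of the ss equivalents most people reach for (standard ss options):

# all TCP and UDP sockets, numeric, including listening ones (like netstat -tuna)
ss -tuna

# listening TCP sockets with the owning process
ss -ltnp

# established connections involving port 80
ss -tn state established '( sport = :80 or dport = :80 )'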

Linux netstat command

1. List out all connections

The first and most simple command is to list out all the current connections. Simply run the netstat command with the a option.
$ netstat -a

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 enlightened:domain      *:*                     LISTEN     
tcp        0      0 localhost:ipp           *:*                     LISTEN     
tcp        0      0 enlightened.local:54750 li240-5.members.li:http ESTABLISHED
tcp        0      0 enlightened.local:49980 del01s07-in-f14.1:https ESTABLISHED
tcp6       0      0 ip6-localhost:ipp       [::]:*                  LISTEN     
udp        0      0 enlightened:domain      *:*                                
udp        0      0 *:bootpc                *:*                                
udp        0      0 enlightened.local:ntp   *:*                                
udp        0      0 localhost:ntp           *:*     

TOP command examples on Linux to monitor processes

Linux TOP command

One of the most basic commands for monitoring processes on Linux is the top command. As the name suggests, it shows the top processes based on certain criteria like CPU usage or memory usage.
The processes are listed out in a list with multiple columns for details like process name, pid, user, cpu usage, memory usage.
Apart from the list of processes, the top command also shows brief stats about average system load, cpu usage and ram usage on the top.
This post shows you some very simple examples of how to use the top command to monitor processes on your linux machine or server.

Note your "top" command variant

18 commands to monitor network bandwidth on Linux server

Network monitoring on Linux

This post mentions some linux command line tools that can be used to monitor the network usage. These tools monitor the traffic flowing through network interfaces and measure the speed at which data is currently being transferred. Incoming and outgoing traffic is shown separately.
Some of the commands show the bandwidth used by individual processes. This makes it easy to detect a process that is overusing network bandwidth.
The tools have different mechanisms of generating the traffic report. Some of the tools like nload read the "/proc/net/dev" file to get traffic stats, whereas some tools use the pcap library to capture all packets and then calculate the total size to estimate the traffic load.
Here is a list of the commands, sorted by their features.
1. Overall bandwidth - nload, bmon, slurm, bwm-ng, cbm, speedometer, netload

2. Overall bandwidth (batch style output) - vnstat, ifstat, dstat, collectl

3. Bandwidth per socket connection - iftop, iptraf, tcptrack, pktstat, netwatch, trafshow

4. Bandwidth per process - nethogs

Lynis - Security auditing tool

Lynis is a security auditing tool for UNIX derivatives like Linux, macOS, BSD, and others. It performs an in-depth security scan and runs on the system itself. The primary goal is to test security defenses and provide tips for further system hardening. It will also scan for general system information, vulnerable software packages, and possible configuration issues. Lynis is commonly used by people on the "blue team" to assess the security defenses of their systems. Nowadays, penetration testers also have Lynis in their toolkit.



https://github.com/CISOfy/lynis


What is Lynis

How to protect from port scanning and smurf attack in Linux Server by iptables


In this post I will share an iptables script with which we can protect a Linux server from port scanning and smurf attacks.
Features of the script:
(1) When an attacker tries to port scan your server: first, because of iptables, the attacker will not get any information about which ports are open. Second, the attacking IP address will be blacklisted for 24 hours (you can change this in the script). Third, after that the attacker will not be able to access anything at all; for example, the attacker will not see any website running on the server via a web browser, and will not be able to SSH or telnet either. In other words, completely restricted.
(2) Protects from smurf attacks
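The core of this kind of port-scan protection is usually built on the iptables recent module; a rough sketch under that assumption (86400 seconds corresponds to the 24-hour blacklist mentioned above, and port 139 is just an example trap port):

# drop anyone already on the portscan blacklist for 24 hours
iptables -A INPUT -m recent --name portscan --rcheck --seconds 86400 -j DROP

# anyone probing the trap port gets added to the blacklist
iptables -A INPUT -p tcp --dport 139 -m recent --name portscan --set -j DROP

# basic smurf protection: drop broadcast ICMP and rate-limit echo requests
iptables -A INPUT -p icmp -m pkttype --pkt-type broadcast -j DROP
iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/s -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP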

IPtables DDoS Protection: The Best Rules to Mitigate DDoS Attacks

There are different ways of building your own anti-DDoS rules for iptables. We will be discussing the most effective iptables DDoS protection methods in this comprehensive tutorial.

This guide will teach you how to:

  1. Select the best iptables table and chain to stop DDoS attacks
  2. Tweak your kernel settings to mitigate the effects of DDoS attacks
  3. Use iptables to block most TCP-based DDoS attacks
  4. Use iptables SYNPROXY to block SYN floods

IPTables Configuration for DDoS Protection

The following IPTables configuration will assist with traffic that the DDoS filters cannot fully mitigate.
Note: This is a generic ruleset and should be expanded further to suit your specific application.
### IP Tables DDOS Protection Rules ###

### 1: Drop invalid packets ###
/sbin/iptables -t mangle -A PREROUTING -m conntrack --ctstate INVALID -j DROP

### 2: Drop TCP packets that are new and are not SYN ###
/sbin/iptables -t mangle -A PREROUTING -p tcp ! --syn -m conntrack --ctstate NEW -j DROP
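Another rule commonly recommended alongside these two blocks new TCP packets with bogus MSS values, which legitimate clients essentially never send; like the rules above, it is a generic example rather than something tailored to a specific application:

### 3: Drop TCP packets with bogus MSS values ###
/sbin/iptables -t mangle -A PREROUTING -p tcp -m conntrack --ctstate NEW -m tcpmss ! --mss 536:65535 -j DROP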

Linux Iptables To Block Different Attacks


Iptables is a Linux kernel based packet filter firewall. The iptables modules are present in the kernel itself; there is no separate daemon for it. This makes the firewall very fast and effective. Iptables rules control the incoming and outgoing traffic on a network device. In this article, we will discuss some of the common network attacks and how we can block them using iptables. Some of the common network attacks are the SYN flood attack, the smurf attack, the land attack, attacks using malformed ICMP packets, and some other forms of DoS attack. Before going into the details of these attacks, let's have an overview of iptables and how to use this command.

Anti DDoS with iptables and ipt_recent


In the last few days I've been attacked with a SYN flood plus a flood of GET requests.
There were ~1600 different IPs in the botnet that was attacking, so I wrote some iptables rules in order to keep the attack under control.
Below you can find the entire micro script I've made, and after that a line-by-line explanation of what the rules do.
Clear all existing rules on the firewall:
iptables -F
iptables -X

Issues related to Nginx sessions

Nginx is quite a powerful web server, well known for handling the C10k problem. I have looked through a great many of its configurations. Most sysadmins do not study it carefully and end up using parameters that are quite dangerous for the system. And the question of how to handle sessions always leaves sysadmins confused; sometimes they simply leave it to the developers downstream to solve.

Before going into the details of how to solve this problem, let's take a quick look at the load-balancing algorithms of Nginx.

  • round-robin — as clients connect to the server, Nginx forwards their requests to the backends behind it in turn.
  • least-connected — the client's next request is assigned to the backend with the fewest active connections.
  • ip-hash — a hash function used to determine which server should handle a request, based on the client's address.
By default Nginx uses the round-robin algorithm with a configuration like this:

http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}
To use the least-connected algorithm, you only need to declare:

upstream myapp1 {
        least_conn;
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }
With these two algorithms Nginx does not preserve sessions between the client and the backend server at all; this depends on the application keeping client sessions itself, for example with memcached or Redis...

The ip_hash algorithm, also known as "session persistence" or "sticky sessions", keeps the connections from one client going to one specific backend in the upstream. With ip-hash, the client's IP address is used as a hashing key to determine which server should handle the request. This method guarantees that requests from a given IP are always served by the same server.
upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

But session persistence offers up to three methods for maintaining sessions:

1. Session Persistence Method: Cookie Insertion

upstream backend {
    server webserver1;
    server webserver2;

    sticky cookie srv_id expires=1h domain=.example.com path=/;
}

2. Session Persistence Method: Learn

upstream backend {
   server webserver1;
   server webserver2;

   sticky learn create=$upstream_cookie_sessionid
       lookup=$cookie_sessionid
       zone=client_sessions:1m
       timeout=1h;
}

3. Session Persistence Method: Sticky Routes

upstream backend {
   server webserver1 route=a;
   server webserver2 route=b;

   # $var1 and $var2 are run-time variables, calculated for each request
   sticky route $var1 $var2;
}