Sunday, December 4, 2016

Proxmox VE - Part 1

Proxmox VE (Virtual Environment) is a Debian-based (x86_64) distribution that provides an open-source virtualization platform for running virtual machines (KVM) and containers (LXC), and comes with a powerful web-based control panel (including a web-based graphical console that you can use to connect to the virtual machines). With Proxmox VE you can even create a cluster of virtualization hosts and create/control virtual machines on remote hosts from the control panel. Proxmox VE also supports live migration of virtual machines from one host to another. This guide shows how you can use Proxmox VE to manage virtual machines and how to create a small computing cloud with it.

Proxmox VE is a platform to run virtual machines and containers. For maximum flexibility, it implements two virtualization technologies: Kernel-based Virtual Machine (KVM) and container-based virtualization (LXC). One main design goal was to make administration as easy as possible. You can use Proxmox VE on a single node, or assemble a cluster of many nodes. All management tasks can be done through the web-based management interface, and even a novice user can set up and install Proxmox VE within minutes.

The source code of Proxmox VE is released under the GNU Affero General Public License, version 3. This means that you are free to inspect the source code at any time or contribute to the project yourself. The project started in 2007, followed by a first stable version in 2008. At that time it used OpenVZ for containers and KVM for virtual machines. The clustering features were limited, and the user interface was simple.

Minimum requirements (for evaluation)
  • CPU: 64-bit (Intel EM64T or AMD64)
  • RAM: 1 GB
  • Hard drive
  • One NIC
Recommended system requirements
  • CPU: 64-bit (Intel EM64T or AMD64), multi-core CPU recommended
  • RAM: 8 GB is good, more is better
  • Hardware RAID with battery-protected write cache (BBU) or flash-based protection
  • Fast hard drives; best results with 15k rpm SAS, RAID 10
  • At least two NICs; depending on the storage technology used, you may need more

Proxmox VE is an x86_64 distribution, so you cannot install it on an i386 system. Also, if you want to use KVM, your CPU must support hardware virtualization (Intel VT or AMD-V); this is not needed if you just want to use containers. In this tutorial I will create a small cluster of two machines: the Proxmox master (kilo.com.vn, IP 10.8.1.17) and a slave (kilo-01.com.vn, IP 10.8.1.18).
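If you are unsure whether your CPU supports hardware virtualization, you can check before installing. A minimal sketch (Intel advertises the vmx flag, AMD the svm flag in /proc/cpuinfo):

```shell
# Count CPU flag lines indicating hardware virtualization support:
# vmx = Intel VT-x, svm = AMD-V. A count of 0 means KVM guests will not run.
count=0
if [ -r /proc/cpuinfo ]; then
    count=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
fi
echo "virtualization-capable flag lines: $count"
```

On a capable host the count equals the number of CPU threads; check the BIOS/UEFI settings if it is 0 despite a VT-capable CPU.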
  1. Download the latest Proxmox VE ISO image from http://pve.proxmox.com/wiki/Downloads, burn it onto a CD, and boot your system from it.
  2. Alternatively, install Proxmox VE on top of an existing Debian (jessie) system, using APT as the package management tool:
    1. Add the Proxmox repository to /etc/apt/sources.list:
      deb https://enterprise.proxmox.com/debian jessie pve-enterprise
    2. Then update and install:
      apt-get update
      apt-get install proxmox-ve
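Note that the pve-enterprise repository requires a valid subscription key; without one, apt-get update will fail with an authentication error. A commonly used alternative for test setups like this one is the public no-subscription repository (a sketch, for the jessie-era release used here):

```shell
# The enterprise repo needs a paid subscription; use the public repo instead
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

apt-get update
apt-get install proxmox-ve
```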

  1. The installer will automatically partition your hard drive using LVM - that's why there is no partition dialogue in the installer. Proxmox uses LVM because it allows creating snapshot backups of virtual machines.
  2. After the server has rebooted, you can open a browser and go to https://10.8.1.17:8006 or https://10.8.1.18:8006.
You can create a cluster or computing cloud by adding one or more slave servers to the Proxmox master (kilo.com.vn). Such a cloud allows you to create and manage virtual machines on remote hosts from the Proxmox control panel, and you can even do live migration of virtual machines from one host to another. I will now show you how to add a second host, kilo-01.com.vn, and create a cluster. Install Proxmox VE on it just as you did on the master; when you come to the networking section, fill in the hostname kilo-01.com.vn and make sure you use a different IP address (10.8.1.18).
Requirements:
  • All nodes must be in the same network, as corosync uses IP multicast to communicate between nodes (see Corosync Cluster Engine). Corosync uses UDP ports 5404 and 5405 for cluster communication.
  • Date and time have to be synchronized.
  • An SSH tunnel on TCP port 22 between the nodes is used.
  • All nodes should have the same Proxmox VE version.
  • A dedicated NIC for the cluster traffic is recommended, especially if you use shared storage.
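Before creating the cluster it is worth verifying the multicast and time requirements above. A sketch (omping is available from the repositories; run it on both nodes at the same time, listing every cluster IP):

```shell
# Multicast check: low packet loss on both nodes means IP multicast works
apt-get install omping
omping -c 600 -i 1 -q 10.8.1.17 10.8.1.18

# Time check: the date printed on all nodes should agree
date
```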
  1. Create the Cluster
root@kilo:~# pvecm create cluster1
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
root@kilo:~# pvecm status
Quorum information
------------------
Date:             Tue Nov 29 11:44:02 2016
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1/4
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
   Nodeid      Votes Name
0x00000001          1 10.8.1.17 (local)
root@kilo:~#
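Scripts and monitoring checks typically key off the Quorate flag in this output. A minimal sketch that parses it with awk (the sample here-document stands in for a live pvecm status call):

```shell
# Extract the "Quorate:" field from pvecm status output.
# On a live node you would run:
#   pvecm status | awk -F': *' '/^Quorate/ {print $2}'
quorate=$(awk -F': *' '/^Quorate/ {print $2}' <<'EOF'
Quorum information
------------------
Quorate:          Yes
EOF
)
echo "quorate=$quorate"
```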


  2. Adding Nodes to the Cluster
root@kilo-02:~# pvecm add 10.8.1.17
The authenticity of host '10.8.1.17 (10.8.1.17)' can't be established.
ECDSA key fingerprint is 2c:de:76:34:7c:04:1c:7d:c3:a7:a1:4c:54:75:f4:76.
Are you sure you want to continue connecting (yes/no)? yes
root@10.8.1.17's password:
copy corosync auth key
stopping pve-cluster service
backup old database
waiting for quorum...OK
generating node certificates
merge known_hosts file
restart services
successfully added node 'kilo-02' to cluster.
root@kilo-02:~#

root@kilo-02:~# pvecm status
Quorum information
------------------
Date:             Tue Nov 29 12:15:46 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1/8
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
   Nodeid      Votes Name
0x00000001          1 10.8.1.17
0x00000002          1 10.8.1.18 (local)
root@kilo-02:~#
  3. List the nodes in the cluster
root@kilo:~# pvecm nodes

Membership information
----------------------
   Nodeid      Votes Name
        1          1 kilo (local)
        2          1 kilo-02
root@kilo:~#


root@kilo-02:~# pvecm nodes

Membership information
----------------------
   Nodeid      Votes Name
        1          1 kilo
        2          1 kilo-02 (local)
root@kilo-02:~#
  4. Restart the PVE services (or reboot)
# systemctl start pve-cluster
# systemctl restart pvedaemon
# systemctl restart pveproxy
# systemctl restart pvestatd

  5. You can now see detailed information about the hosts in the web interface.
  6. Click Datacenter, add a new storage, and choose the type you want to add; here I use NFS (NFS_DATA). Attach the NFS storage to both nodes. We will upload the ISO file to this storage: click "Upload".
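The ISO can also be copied from the shell instead of the web Upload button. A sketch (PVE mounts a storage named NFS_DATA under /mnt/pve/NFS_DATA; the ISO filename here is just an example):

```shell
# Copy an installer ISO into the iso template directory of the NFS storage
scp debian-8.6.0-amd64-netinst.iso \
    root@10.8.1.17:/mnt/pve/NFS_DATA/template/iso/

# The image should now appear in the web UI; it can also be listed with:
pvesm list NFS_DATA
```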
  7. Click "Create VM" to create a new VM.
  8. Click "Start".
  9. When the VM starts, you may run into an error. Edit the KVM hardware virtualization option for the VM:
  10. You should then see the link Open VNC console. A browser-based console opens from which you can control the virtual machine (this is especially useful for desktop machines; if the virtual machine is a server, you can just as well connect to it using SSH, e.g. with PuTTY). At this point you can use the VM as a server.
  11. If you use a local file on each server to store the VM disks, you cannot migrate online. I have Virtual Machine 100 ('vms-kilo-01') on node 'kilo', with its data file stored locally on server kilo.com.vn.


You have to migrate offline, which means your VM is stopped. PVE will create a new logical volume on the target node and use rsync to transfer the data:

root@kilo-02:~# lvdisplay
 --- Logical volume ---
 LV Path                /dev/pve/vm-100-disk-1
 LV Name                vm-100-disk-1
 VG Name                pve
 LV UUID                i2PFCS-1YV3-s2gf-RM9y-Bm8G-HkOe-9wwRf4
 LV Write Access        read/write
 LV Creation host, time kilo-02, 2016-11-30 11:24:28 +0700
 LV Pool name           data
 LV Status              available
 # open                 1
 LV Size                8.00 GiB
 Mapped size            9.34%
 Current LE             2048
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           251:6

root@kilo-02:~#
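The same offline migration can also be triggered from the command line (a sketch; 100 is the VM ID of 'vms-kilo-01' above):

```shell
# The VM must be stopped before an offline migration with local disks
qm stop 100
qm migrate 100 kilo-02
```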
  12. If you use shared storage, you can manually migrate online; the VM will not stop.
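From the command line, a live migration on shared storage looks like this (a sketch, same VM ID 100 as above):

```shell
# Live migration: possible because the disk lives on the shared NFS storage
qm migrate 100 kilo-02 --online
```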
I will write about HA, clustering, auto migration, networking, firewall and more later. To be continued... Follow my blog: http://kilo-tech.blogspot.com/

