Change Ganeti's Network Configuration

Ganeti is a cluster virtual server management software tool built on top of existing virtualization technologies such as Xen or KVM and other open source software.

This is how I changed the secondary network configuration using Ganeti command line tools.

1, First, to change the network from one CIDR to another, I will remove all NICs from all instances. The following command removes the last NIC from an instance:

gnt-instance modify --net remove <instance>
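Step 1 has to be repeated for every instance; a small loop over the instance list can do that (a sketch, assuming each instance's last NIC is the one on the network being removed):

```shell
# Strip the last NIC from every instance in the cluster.
# Assumes the last NIC of each instance is the one to remove.
for inst in $(gnt-instance list --no-headers -o name); do
  gnt-instance modify --net remove "$inst"
done
```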

2, After all NICs have been removed from all instances, the network can be disconnected:

gnt-network disconnect <network>

3, Next, remove the network CIDR from the cluster:

 gnt-network remove <network>

4, Re-add the network with the new CIDR:

gnt-network add --network= <network>

5, Re-connect the network to the cluster:

gnt-network connect <network> bridged <bridge>

6, Re-add the NIC from the new network to every instance:

 gnt-instance modify --net add:network=<network>,ip=pool <instance>

7, The new NIC won’t take effect until the instance is rebooted by Ganeti:

 gnt-instance reboot <instance>

8, I am not sure whether there is a way to pick up the IP automatically; at least I can assign the IP manually by editing /etc/network/interfaces inside the instance with the new IP.

9, Execute `ifup` to bring up the NIC. That’s it!  🙂
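For steps 8 and 9, the manual assignment inside the instance can look like the fragment below (the addresses are illustrative placeholders, not from the original setup):

```shell
# /etc/network/interfaces inside the instance (illustrative addresses)
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
```

Then `ifup eth0` brings the interface up with the new address.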

Notes: Tuning a MySQL Server Inside a Xen VM

I have long been unhappy with our company's database server running in a Xen VM, because it is just too slow. But I dare not touch the hundreds of GB of business data, so I first verified my idea on a test server. The test environment:

  • Dom0: Debian 6 Xen Hypervisor 64-bit, Xen 4.0
  • DomU: Debian 6  64-bit
  • MySQL server 5.1, innodb_file_per_table, pool=1GB, log=256MB
  • Plain 7200RPM SATA disks; the VM uses an LVM partition

Then I ran a small program I wrote earlier to do batch updates on 32K records. With the default configuration the run took as long as 24 minutes, while after tuning it took only 27 seconds. Roughly 60x?? I could hardly believe it. The corresponding configurations and test numbers are below. MySQL was restarted after every configuration change, so caching is unlikely to be the explanation.

Updating 32606 records (client table), InnoDB table, autocommit=true, file_per_table, pool=1GB, log=256MB

(Xen) Default configuration
real 24m46.195s
TPS 21.9

(Xen) With innodb_flush_method=O_DIRECT
real 24m45.024s
TPS 21.95

(Xen) With innodb_flush_method=O_DIRECT, innodb_flush_log_at_trx_commit=0
real 0m37.873s
TPS 860.9

(Xen) With innodb_flush_method=O_DSYNC, innodb_flush_log_at_trx_commit=0
real 0m27.352s
TPS 1192

So innodb_flush_log_at_trx_commit is the key. According to the documentation, setting it to 0 flushes the log once per second instead of on every transaction.
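Collected in one place, the fastest configuration from the runs above would look roughly like this in my.cnf (the pool and log sizes are the ones listed in the test environment):

```ini
# my.cnf fragment matching the fastest run above
[mysqld]
innodb_file_per_table          = 1
innodb_buffer_pool_size        = 1G
innodb_log_file_size           = 256M
innodb_flush_method            = O_DSYNC
# 0 = flush the log once per second instead of at every commit;
# trades up to ~1s of committed transactions on a crash for throughput
innodb_flush_log_at_trx_commit = 0
```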

The experiment basically confirms my guess: Xen's overhead on disk IO is really not small. When I get the chance, I should run the database directly on real physical hardware.


To Duplicate/Backup a Xen VM in a Logical Volume

Technical notes. Feel free to skip 🙂

0, If you are duplicating, create the VMs on the destination server first, just to generate the configuration files and logical volumes and to hold a place for the source VMs:

#xen-create-image --hostname [HOSTNAME] --ip [IP] --vcpus 2 --pygrub --dist squeeze

1, Create an LVM snapshot for the VM’s logical volume.
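The snapshot command itself was lost from this step; a standard way to take one is the following sketch (the 2G snapshot size and the names are placeholders, following the bracket convention used below):

```shell
#lvcreate -s -L 2G -n [SNAPSHOTNAME] /dev/[VOLUMEGROUP]/[LOGICALVOLUME]
```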


2, Shut down the VM if possible (optional, but recommended for a consistent copy). Log in to the VM and shut it down, or:

#xm shutdown [VM]

3, Copy the snapshot over to the destination server. Log in to the source server and run:

#dd bs=4M if=/dev/[VOLUMEGROUP]/[SNAPSHOTNAME] | gzip -1 - | ssh [USER]@[DESTINATION] dd bs=4M of=[/PATH]/tmp.gz

3.1, If step 3 is successful, you can optionally remove the snapshot.


4, Restore the volume image to the target logical volume on the destination server. It is better to make the target volume bigger than the source volume.

#dd bs=4M if=[/PATH]/tmp.gz | gunzip -c - | dd bs=4M of=/dev/[TARGETVOLUMEGROUP]/[TARGETLOGICALVOLUME]
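The dd-over-gzip pipelines in steps 3 and 4 can be sanity-checked locally with plain files before touching real volumes. This miniature (the paths under /tmp are my own choice) round-trips a small image and compares the result:

```shell
# Create a 4 MB test "image", round-trip it through gzip, and verify.
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 2>/dev/null
dd if=/tmp/src.img bs=1M 2>/dev/null | gzip -1 > /tmp/img.gz
dd if=/tmp/img.gz bs=1M 2>/dev/null | gunzip -c - | dd of=/tmp/dst.img bs=1M 2>/dev/null
cmp /tmp/src.img /tmp/dst.img && echo "round trip OK"
```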

4.1, If the target volume is bigger than the source volume (which is recommended), check and resize the filesystem:
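The check-and-resize commands were dropped from this step; for an ext3/ext4 filesystem sitting directly on the volume, they would typically be (a sketch, using the same bracket placeholders):

```shell
#e2fsck -f /dev/[TARGETVOLUMEGROUP]/[TARGETLOGICALVOLUME]
#resize2fs /dev/[TARGETVOLUMEGROUP]/[TARGETLOGICALVOLUME]
```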


5, Mount the target volume to make modifications such as the IP address, and unmount it afterwards.
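A sketch of step 5, assuming the guest's root filesystem sits directly on the logical volume (the mount point /mnt is my choice):

```shell
#mount /dev/[TARGETVOLUMEGROUP]/[TARGETLOGICALVOLUME] /mnt
# ... edit /mnt/etc/network/interfaces, /mnt/etc/hostname, etc. ...
#umount /mnt
```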


6, Compare the source and destination VM configuration files in case the booting methods differ.

7, Boot up the cloned VMs and test.

#xm create [HOSTNAME].cfg
#xm top


Xen 4.0 Hypervisor with LVM on Debian 6 Squeeze

It makes little difference whether this post is in Chinese or English, so I will simply use English. Xen is one kind of VM (virtual machine) technology; many commercial VPS providers use Xen, or OpenVZ, to run multiple VMs on a single physical server.

1, Install Debian 6 Squeeze

This step is the easiest. Just remember to use LVM when partitioning, and to leave enough unused (un-partitioned) disk space for later use. If your system has >= 4GB of memory, choose the AMD64 architecture.


2, Install Xen

All commands below require root privileges, so instead of prefixing everything with “sudo”, just log in as root or use “su”.

 apt-get install xen-linux-system xen-qemu-dm xen-tools

“xen-linux-system” is a meta package rather than an actual package; it will match something like “xen-linux-system-2.6.32-5-xen-686”.

3, Modify grub to boot Xen by default

Open and edit “/etc/default/grub”, and change the default boot entry.




The number 4 means the 5th item in the grub boot menu.
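The exact line was lost above; judging from the note about the number 4, the edit is presumably the GRUB_DEFAULT setting:

```shell
# /etc/default/grub (sketch): boot the 5th menu entry (the Xen one) by default
GRUB_DEFAULT=4
```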

To prevent further change to the boot menu, also add the following to “/etc/default/grub”:


And then update grub with:

update-grub

4, Build your own Debian network bridge. First, take down your eth0:

ifdown eth0

Open and edit “/etc/network/interfaces” and change it to:

auto lo
iface lo inet loopback

auto br0
iface br0 inet static
address x.x.x.x
netmask x.x.x.x
network x.x.x.x
broadcast x.x.x.x
gateway x.x.x.x
bridge_ports eth0
bridge_stp on
bridge_maxwait 0

Then “br0” will be your new network bridge, and eth0 is your physical Ethernet interface. You can now try bringing your bridge online:

ifup br0

Now add the following settings to “/etc/sysctl.conf”:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

To make these effective, run the command:

sysctl -p /etc/sysctl.conf


5, To change the default settings for new VM images, open and edit “/etc/xen-tools/xen-tools.conf” and revise the following lines:

size = 8GB #disk size
memory = 512MB
swap = 512MB

gateway = x.x.x.x
netmask = x.x.x.x

passwd = 1

pygrub = 1

6, Create VM images with xen-create-image

xen-create-image --hostname HOSTNAME --ip IP --vcpus 2 --pygrub --dist squeeze

Note: by default the debootstrap method is used, so an Internet connection is required to download packages from the Debian mirrors. On success, two new logical volumes (LVs 🙂) will be created, along with a configuration file called /etc/xen/HOSTNAME.cfg.

7, Run VM images

xm create /etc/xen/HOSTNAME.cfg

8, Make VM images auto-start when system boots

mkdir /etc/xen/auto
ln -s /etc/xen/HOSTNAME.cfg /etc/xen/auto


At this point, you should be able to log in to the VM in two ways:

xm console HOSTNAME


ssh root@IP