Friday, February 27, 2009

VCB basic usage - VM full backup with Samba

In the previous article about VMware VCB, I wrote about full backups to NFS shares. For completeness, I decided to write another one dedicated to backups to Samba or Windows shares.

The idea of the backup is the same. Let's have a Samba server available on the network; the exported directory for backups is backup-smb, and the user with write access to this share is backup.

Before we can continue, we need to allow smbclient traffic through the ESX firewall. You can do this from the VI client or directly from the ESX service console via the esxcfg-firewall command. First, let's check whether smbclient is allowed:
esxcfg-firewall -q smbClient
By default, the output of the command should be:
Service smbClient is blocked.
To reconfigure the ESX firewall to allow smbclient access, use:
esxcfg-firewall -e smbClient
Now, you should be able to browse the server (substitute your server's name or IP; the command asks for the user's password first):
smbclient -L <samba-server> -U backup
The example command output follows (Samba server on SLES10):
Domain=[NAS] OS=[Unix] Server=[Samba 3.0.28-0.2-1625-SUSE-CODE10]
Sharename       Type      Comment
---------       ----      -------
profiles        Disk      Network Profiles Service
backup-smb      Disk
IPC$            IPC       IPC Service (Samba 3.0.28-0.2-1625-SUSE-CODE10)
Domain=[NAS] OS=[Unix] Server=[Samba 3.0.28-0.2-1625-SUSE-CODE10]
Now, we are ready to create a simple backup script (the variable values at the top are examples - adjust them to your environment):

#!/bin/sh
# Backup target settings - example values, adjust to your environment
BACKUP_SERVER="nas"              # Samba server name or IP
SMB_SHARE="backup-smb"
BACKUP_USER="backup"
BACKUP_PASS="password"
MOUNT_DIR="/vmimages/backup-smb"

[ -d "$MOUNT_DIR" ] || mkdir -p "$MOUNT_DIR" || exit 1

# list the names of all virtual machines known to this host
VM_BACKUP="`vcbVmName -s any: | grep name: | cut -d':' -f2`"

if [ ! -z "$VM_BACKUP" ]; then
    smbmount //${BACKUP_SERVER}/$SMB_SHARE $MOUNT_DIR \
        -o username=${BACKUP_USER},password=$BACKUP_PASS || exit 1

    for VM in $VM_BACKUP; do
        vcbMounter -a name:$VM -r $MOUNT_DIR/$VM
    done

    umount $MOUNT_DIR
fi

exit 0
It is simple, isn't it? The code is almost the same as for backups over NFS. We added variables defining the Samba user and password, and the mount command was replaced with smbmount, the Samba command-line mount utility. If you insist on using the mount command, replace the line mounting the backup-smb share with:
mount -t smbfs //${BACKUP_SERVER}/$SMB_SHARE $MOUNT_DIR \
-o username=${BACKUP_USER},password=$BACKUP_PASS || exit 1
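Whichever mount variant you use, it may be worth verifying that the share is really mounted before vcbMounter starts writing, so a failed mount doesn't silently fill the local disk. A minimal sketch of such a guard (my own addition, not part of the original script), parsing /proc/mounts:

```shell
# is_mounted DIR - succeed if DIR is an active mount point.
# On Linux, field 2 of /proc/mounts is the mount point.
is_mounted() {
    awk -v d="$1" '$2 == d { found = 1 } END { exit !found }' /proc/mounts
}

# Example guard before running the backup loop:
# is_mounted "$MOUNT_DIR" || { echo "share not mounted" >&2; exit 1; }
```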
That's all. In such simple backup scenarios I prefer NFS, because it is simpler to set up and provides higher throughput than the SMB protocol. On the other hand, SMB provides a basic authentication mechanism (unless you disable it).

Monday, February 23, 2009

XenServer is free

It's unbelievable! A few hours ago, Citrix decided to release their XEN based hypervisor and complete virtualization solution, XenServer, for free (the official announcement is here). Until recently, the product was available in four editions - Express, Standard, Enterprise and Platinum. The differences are outlined in the following table:

The Express edition was already free of charge, but it was missing some fundamental enterprise features like resource pools, live migration and the central management console XenCenter. These features are paid. Or better said, they used to be.

From now on, there is only one edition of XenServer, including the features of the Enterprise edition. Everything is free and you can download it. Cool! You no longer have to spend any money on virtual machine live migration, resource pools or central management. What happens if we compare it with VMware ESXi? In my opinion, the king might be dead. And a new king might be coming.

What do you think of it? What will VMware's answer be? I think it is a smart way to show us that XEN based hypervisors are enterprise ready and to spread them further. Given the current economic situation, they have a real chance to succeed.

Let me ask a final question. Who will need Microsoft Hyper-V now? If XenServer is free, and since it is more mature and robust than Hyper-V, what will Hyper-V's new position be? Today, the winner is Citrix. Tomorrow, the opponents might surprise us. But don't miss the opportunity today. Download XenServer and spread it!

Wednesday, February 18, 2009

VMware vCenter Converter 4.0 was released

The previous version of Converter stayed at 3.0.3 for a long time. The new standalone version is very similar to the one included in Virtual Infrastructure 3.5 (VI 3.5).

Previously, two editions were available - Starter and Enterprise, the latter being part of VI 3.5. The Enterprise edition adds the following over Starter:
  • it supports multiple migration jobs
  • it supports cold migration
  • it is part of VI 3.5 only (specifically, of the VirtualCenter server)
What does the latest revision bring? It is free of charge, it supports a larger set of source operating systems, and it allows you to select the target virtual disks. It can now migrate sources running Red Hat, SUSE or Ubuntu Linux. Furthermore, it is able to power off the source machine after the migration finishes. A more comprehensive comparison of version 4.0 with the version included in VI 3.5 is presented in this picture.

Monday, February 9, 2009

Aligning VMFS partition

Proper alignment of a filesystem on a disk partition may bring some I/O performance improvement. Typically, the reason is the RAID device underneath the accessed disk, which stripes data in chunks of some defined size; the typical chunk size is 64KB. As you know, no partition is placed at the very beginning of the disk, because metadata like the MBR and partition table must be written there. It follows that the default alignment may result in increased latency and therefore lower throughput.

The same holds for the VMFS filesystem, in both versions 2 and 3. The general rule is to align a VMFS partition on a 64KB boundary. The problem is the default partition alignment chosen by the VMware ESX installer (or Red Hat Anaconda): it doesn't take alignment into account and lays out the disk partitions one after another. If you create a VMFS filesystem from the VirtualCenter client, however, it starts at 64KB. The output of fdisk -lu on a test system follows:
Disk /dev/sda: 146.6 GB, 146685296640 bytes
255 heads, 63 sectors/track, 17833 cylinders, total 286494720 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot      Start        End     Blocks  Id System
/dev/sda1   *       63     208844     104391  83 Linux
/dev/sda2       208845   10442249   5116702+  83 Linux
/dev/sda3     10442250  281105369  135331560  fb Unknown
/dev/sda4    281105370  286487144   2690887+   f Win95 Ext'd (LBA)
/dev/sda5    281105433  282213854     554211  82 Linux swap
/dev/sda6    282213918  286294364   2040223+  83 Linux
/dev/sda7    286294428  286487144     96358+  fc Unknown

Disk /dev/sdb: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders, total 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot      Start        End     Blocks  Id System
/dev/sdb1          128  251658224  125829048+ fb Unknown
The first disk, /dev/sda, is the internal one and was partitioned by the ESX installer. The VMFS partition has ID fb. The second disk was initialized from VirtualCenter; it belongs to an external disk array. Its starting sector is 128, so it is aligned to 128 x 512B (the sector size) = 64KB. The VMFS partition on /dev/sda (/dev/sda3) is not aligned, because its starting sector 10442250 divided by 128 doesn't give an integer.
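The divisibility check is easy to script. This small helper (my own illustration, not from the original article) reports whether a partition's starting sector is 64KB-aligned, assuming 512-byte sectors:

```shell
# check_alignment START_SECTOR - report 64KB alignment of a partition start.
# 64KB / 512B per sector = 128, so the start must be divisible by 128.
check_alignment() {
    if [ $(( $1 % 128 )) -eq 0 ]; then
        echo "sector $1: aligned"
    else
        echo "sector $1: NOT aligned"
    fi
}

check_alignment 128        # /dev/sdb1 above -> aligned
check_alignment 10442250   # /dev/sda3 above -> NOT aligned
```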

There is no non-destructive way to realign badly aligned VMFS partitions; you need to recreate them from scratch. That means backing up the ESX system and the VMFS filesystems, realigning the partitions, and restoring the backup.

It is not guaranteed that every disk or disk array has its alignment boundary at 64KB; check the documentation of your storage system. But 64KB is a good starting point and the most common value. The question is whether it is worthwhile at all, because the average performance benefit is around 10%.
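If your array stripes in a different chunk size, the required start sector can be computed the same way. A hypothetical helper (chunk size in KB, optional sector size in bytes - both depend on your hardware):

```shell
# aligned_start CHUNK_KB [SECTOR_BYTES] - first sector boundary matching
# the given RAID chunk size (defaults to 512-byte sectors).
aligned_start() {
    echo $(( $1 * 1024 / ${2:-512} ))
}

aligned_start 64     # -> 128, the value VirtualCenter uses
aligned_start 128    # -> 256 for an array striping in 128KB chunks
```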

I drew on a more comprehensive guide about the topic. It contains details about the test environment, guest filesystem alignment, and the steps to lay out partitions with fdisk, so read it if you are interested.

Tuesday, February 3, 2009

Sun xVM Server postponed

While the management system Ops Center 2.0 was released recently, it seems Sun has some issues with their XEN based hypervisor. According to this article published by the French magazine LeMagIT, it is going to be released in the second quarter of 2009.

Licensing open source

I was hesitating to write this article for a while, because it doesn't fit any type of article I have published before, and it isn't my primary business to discuss open source licensing here. The thing is, it is useful to understand the role of licenses, but it is often quite difficult to grasp what they actually want to say. Sometimes I have a feeling you need a law degree to understand them.

You know the obvious questions, like "why does it have to be GPLed?", "why is this license not compatible with that one?" or "why can't it be part of the Linux kernel?". You know that an open source license ensures the availability of source code, which you can modify and redistribute. The true pitfalls appear when you want to integrate two products available under two different licenses. To make things clearer, I borrowed two comprehensive schemes from chandanlog at Sun blogs. The first one presents the general attitude of open source licenses and a classical EULA towards source code. The second one explains the differences between open source licenses; they are quite minor but may have unforeseen consequences.
Let's try to apply the licensing rules to the problem of releasing the ZFS filesystem with the Linux kernel. What's the problem? First, Sun owns some patent rights which prohibit such a step. Second, as the Linux kernel is GPL licensed, anything included in it has to be GPL licensed as well. ZFS is covered by the CDDL license, which requires itself to be preserved - here I see the main reason for the incompatibility. But when I realize there are binary-only modules, like video drivers from ATI or NVIDIA, which are linked with the kernel via some sort of GPL licensed open source wrapper, why can't we do the same with ZFS? The question is whether it would be legal.

The two practical schemes helped me understand the topic more deeply. The ZFS example complicated the situation, and I needed to find something showing me that it is not so complicated after all. I hope you will find these graphical explanations as useful as I do. And check out chandanlog, who created them!