Monday, October 12, 2009

Second edition of VMware Site Recovery Manager is out

The second edition of VMware SRM, officially named VMware vCenter Site Recovery Manager 4, was released recently. The product automates disaster recovery of complex virtual environments. The new version is fully compatible with the VMware vSphere platform and provides these important new features:
  • many-to-one failover - a single recovery site is able to handle failovers from multiple protected sites
  • expanded storage vendor support - 12 vendors in total now provide certified storage replication solutions over FC, iSCSI or NFS, among them DELL, IBM, HP, EMC, LSI and others
For more information on VMware SRM 4 visit the release notes and product home page.

Tuesday, September 29, 2009

VMware Server 1.0.x library dependency problem

At the beginning of the year, I wrote this article about problems between the older VMware Server 1.0.x and newer Linux distributions. The problem is related to the vmware kernel modules, whose source code is not compatible with newer Linux kernels.

One thing surprised me. When I upgraded VMware Server from version 1.0.8 to 1.0.9, the VMware Server console stopped working. The new version was installed on the same system (openSUSE 11.1) as the old one, so I don't understand the reason. The important thing is that I found a solution. The new version started producing these error messages when I tried to run the vmware command:
/usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1: version `GCC_4.2.0' not found (required by /usr/lib/libstdc++.so.6)
/usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1: version `GCC_4.2.0' not found (required by /usr/lib/libstdc++.so.6)
/usr/lib/vmware/bin/vmware: symbol lookup error: /usr/lib/libgio-2.0.so.0: undefined symbol: g_thread_gettime
The fix was to unset the environment variable that influences the behavior of GTK2 applications:
unset GTK2_RC_FILES
Normally, the variable references the gtkrc files defining the user's GTK2 environment. Try it and I hope it will help.
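If unsetting the variable helps, you can make the workaround permanent with a small wrapper. This is only a minimal sketch, assuming vmware lives in /usr/bin and that clearing GTK2_RC_FILES alone is enough on your system; the wrapper path is just a placeholder:
#!/bin/sh
# hypothetical wrapper, e.g. /usr/local/bin/vmware-console (don't forget chmod +x)
unset GTK2_RC_FILES      # drop the user's gtkrc references before starting the console
exec /usr/bin/vmware "$@"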

Wednesday, September 2, 2009

Red Hat Enterprise Linux 5.4 released

Today, the next minor version of Red Hat's flagship Linux distribution, RHEL 5.4, was released. Here is a brief summary of the new features and updates:
  • KVM hypervisor - Full support of the Kernel-based Virtual Machine is now included. Xen support is included as well, but you can't use both Xen and KVM at the same time because each hypervisor requires a different kernel. You need a 64-bit machine to run KVM. It supports RHEL 3/4/5 and Windows XP/2003/2008 as guests.
  • KVM paravirtualized drivers - They are available for Windows XP/2003/2008 in the virtio-win package.
  • FUSE support - The new version includes modules for Filesystem in Userspace (FUSE) and related utilities. Support for XFS was added as well, and the CIFS and ext4 filesystems were updated.
  • InfiniBand drivers - It contains portions of the upcoming OpenFabrics Enterprise Distribution (OFED) 1.4.1.
The new RHEL release contains many other updates and enhancements which aren't mentioned here. For more details, read the official RHEL 5.4 release notes.

Thursday, August 20, 2009

VMware vSphere hotplug

Hotplug of virtual hardware is an attractive feature of VMware ESX 3.x/4.x. In ESX 3.x it is limited to hot-adding a virtual disk to a running virtual machine. With the next generation VMware vSphere hypervisor you can also hot-add memory or CPUs to a machine, provided the guest operating system supports it.
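For memory and CPU hot-add, the virtual machine also has to be prepared for it while it is powered off. A minimal sketch of the relevant .vmx options follows; I'm assuming the commonly documented mem.hotadd and vcpu.hotadd parameters here, so verify them against your vSphere documentation before relying on them:
mem.hotadd = "TRUE"
vcpu.hotadd = "TRUE"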

During the vSphere evaluation I was surprised by how well it works. I used to hot-add virtual disks to my machines quite often. But when I upgraded the 3.5 infrastructure to the new vSphere 4.0, I was disappointed because it stopped working.

The reason is simple. The hot-add feature is available from the Advanced edition upwards and I was upgrading to the Standard edition, which doesn't include a license for it. You can check it in my previous post VMware vSphere 4.0 editions. Below is the error message complaining about the missing license:

I don't think it was the right decision to shift the feature to the higher editions. It would have been better to leave things where they were, because people are used to using them. I hope that VMware will bring back at least virtual disk hot-add to the lower editions in some future release of vSphere.

Wednesday, August 19, 2009

Linux rc.local script

Sometimes you need to run a few commands during your Linux server's startup and you don't want to waste time preparing a proper init script. Common tasks are loading a kernel module, changing the speed of a network interface and so on.

Red Hat distributions provide the rc.local script for this task. You can find it in the /etc/rc.d directory. The script is executed after all the other init scripts, which is ensured by the START symlinks pointing to /etc/rc.d/rc.local:

/etc/rc.d/rc2.d/S99local
/etc/rc.d/rc4.d/S99local
/etc/rc.d/rc3.d/S99local
/etc/rc.d/rc5.d/S99local
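A hedged example of a couple of lines you might append to /etc/rc.d/rc.local follows; the module name and interface settings are just placeholders for whatever your server actually needs:
modprobe e1000                                        # load a kernel module (placeholder)
ethtool -s eth0 speed 100 duplex full autoneg off     # force the NIC speed (placeholder)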

SUSE distros like SLES or openSUSE provide a similar mechanism with two scripts. The before.local script should contain everything you want to run before a runlevel is entered. The after.local script works like Red Hat's rc.local and contains the commands to be executed after the runlevel is reached. The scripts don't exist by default; you need to create them first in the /etc/init.d directory. They don't have to be marked executable.

One more difference: Red Hat's rc.local script is executed only in runlevels 2, 3, 4 and 5 and is ignored in single user mode, while SUSE's before.local and after.local are interpreted in all runlevels including runlevel 1.

Wednesday, June 10, 2009

Solaris 10 updates summary

The seventh update of Solaris 10 was released in May. It contains support for Intel Nehalem CPUs and some ZFS enhancements. I added it to the summary of Solaris updates. Here it is:
  1. Solaris 10 1/06 (u1) - GRUB bootloader, iSCSI initiator, fcinfo command
  2. Solaris 10 6/06 (u2) - ZFS filesystem
  3. Solaris 10 11/06 (u3) - Solaris Trusted Extensions, LDoms
  4. Solaris 10 8/07 (u4) - full TCP/IP stack in zones, iSCSI target, branded zones (Linux in Solaris container), Samba AD, enhanced rcapd
  5. Solaris 10 5/08 (u5) - Intel SpeedStep, AMD PowerNow!, Solaris 8/9 P2V (to Solaris 10 zones), CPU capping
  6. Solaris 10 10/08 (u6) - ZFS boot support, many ZFS filesystem enhancements
  7. Solaris 10 5/09 (u7) - performance and power management support for Intel Nehalem CPUs, support of ZFS clones when cloning zones, IPsec SMF services, SunVTS 7.0 update
For more details, click the particular release to read the official release notes.

Tuesday, June 9, 2009

VMware or Citrix?

Citrix released their virtualization solution XenServer for free (from version 5 on, see the article XenServer is free), but only time will tell whether it was the right decision. At first glance it seems like a marvelous thing, but there are some facts which should be investigated first. Together with XenServer, the central management solution XenCenter was released.

Let's have a look at their rival VMware (vSphere 4). XenServer is fully comparable to VMware ESX or ESXi. But what about XenCenter management? It's something more than the VMware vSphere client but not as capable as VMware vCenter Server. XenCenter is not the right counterpart to compare with vCenter; the right one is Citrix Essentials, but that one is not free. The main differences between Citrix XenCenter and Essentials are:
  • XenCenter lacks alerting capabilities such as sending an email when "CPU usage is too high" or when an error condition like "virtual machine power on failure" appears
  • XenCenter lacks high availability support
  • XenCenter cannot show performance data older than one day for physical or virtual servers
Now, let's propose a simple high availability (HA) solution based on Citrix and VMware products and compare the prices. Suppose we have 2 (or 3) entry-level servers, each with 2 CPUs of at most 6 cores per CPU (6 CPUs total). The servers are connected to shared disk storage. CPU speed and memory capacity are not important now, and we require an HA solution to protect our virtual machines from hardware failure. The analysis follows:

- Citrix Essentials Enterprise (1 license = 1 server):
  • XenServer - 2 licenses = 0$ (3 lic = 0$)
  • Essentials Enterprise - 2 lic = 5500$ (3 lic = 8250$)
  • Essentials Preferred Support (optional) - 1 lic = 1500$
  • Total cost = 7000$ or 9750$ for 3 servers
  • Total cost without support = 5500$ or 8250$ for 3 servers
- VMware vSphere 4 Standard Edition (1 lic = 1 CPU):
  • vSphere 4 Standard - 4 licenses = 3180$ (6 lic = 4770$)
  • vSphere 4 Standard 1y Gold Support - 4 lic = 1092$ (6 lic = 1638$)
  • vCenter 4 Foundation - 1 lic = 1495$
  • vCenter 4 1y Gold Support - 1 lic = 545$
  • Total cost = 6312$ or 8448$ for 3 servers
  • Support is mandatory
- VMware vSphere 4 Essentials Plus Bundle (1 lic = 1 CPU)
  • Licenses for 3 hosts plus vCenter Server for Essentials plus 1y Gold Support = 3624$
  • Total cost = 3624$ for 2 or 3 servers

The prices of the proposed solutions are quite different. In my opinion, the best value is the solution based on the new VMware product line vSphere 4 Essentials.

There are rumors that VMware is the most expensive solution. Looking at the numbers above, I don't think so. Citrix's solution without support is cheaper than VMware's solution with support, but only for 2 servers. If I wanted to add a third server I would have to pay for another license in the Citrix case, while with VMware I still have one spare license I can use. At first glance XenServer seems to be free of charge, but the price of the value added by Citrix Essentials doesn't scale as well as VMware vSphere 4 Standard Edition or vSphere 4 Essentials Plus. What is your opinion on the topic?

Thursday, May 28, 2009

VMware vSphere - OVF support

OVF, or Open Virtualization Format, is an open DMTF standard intended for packaging and distributing virtual machines or virtual appliances across various hypervisors, independently of hypervisor and CPU architecture.

VMware supports the OVF format and actively participates in its development. It is supported on ESX 3.5 and VirtualCenter 2.5, but that implementation doesn't cover the full OVF feature set (draft standard, version 0.9). VMware vSphere 4.0 has full native support of OVF version 1.0. Besides that, there is the standalone VMware OVF Tool 1.0 which brings OVF support to products like VMware Workstation or VMware Server.
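A minimal sketch of the OVF Tool usage follows; the file names are just placeholders and the exact options may differ per version, so check ovftool --help first:
# convert an existing Workstation/Server VM to an OVF package (paths are placeholders)
ovftool /vms/myvm/myvm.vmx /export/myvm.ovf
# and back: deploy an OVF package as a new VMX-based virtual machine
ovftool /export/myvm.ovf /vms/restored/restored.vmx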

OVF is a packaging format for software appliances. For example, it may contain a tested LAMP stack prepared for simple deployment into production. It is a portable way to transport virtual machine templates. An OVF package may contain one or more virtual machines which must be installed (deployed) before they can be run; it is not a run-time virtual machine format like VMDK. Further, it provides content verification and integrity checking.

Compared to the VMDK format, OVF defines a complete virtual machine - the virtual hardware configuration including CPU, memory, storage, networking and virtual disks. VMDK, on the other hand, covers virtual disks only.

Are any OVF packages available? Yes, for example there is an OVF of VMware vCenter 2.5 for Linux or the vCenter Admin Portal, and many others at the VMware Virtual Appliance Marketplace.




Thursday, May 21, 2009

VMware vSphere - Fault Tolerance

VMware High Availability provides protection against failures of the physical servers running ESX hypervisors. If one host in an HA cluster fails, the failed virtual machines are restarted on another surviving host in the cluster, and HA ensures that host has enough resources to satisfy the requirements of the newly booted virtual machines. It can also monitor virtual machine activity by checking its heartbeat and restart the machine if it fails.

The next logical step is a fault tolerant virtual environment. VMware vSphere 4 can do it. It provides zero downtime and preserves data integrity of virtual machines in case of a physical server failure.

When you configure a virtual machine to be fault tolerant, a secondary duplicate machine is created on a different host. Any operation performed on the primary machine is then recorded and replayed on its duplicate. If the primary fails, the secondary takes over and continues running without interruption. However, the current version is not able to monitor applications running inside virtual machines; that should become available in the future.

VMware Fault Tolerance, or VMware FT as it is denoted, is a cool and must-have feature, but implementing it means meeting these requirements:
  1. the VM (virtual machine) must be in an HA cluster
  2. ESX host SSL certificate checking has to be enabled
  3. the VM has to be stored on shared storage
  4. the VM's virtual disks have to be in thick format, thin provisioning is not supported yet
  5. three 1 Gb VMkernel ports are required, one for VMotion and two for FT
  6. FT doesn't support Virtual SMP, only single processor VMs are supported!
  7. the physical hosts have to support hardware assisted virtualisation, which is no problem with recent servers
Most of the requirements are common ones, but points 4 and 6, and for older servers point 7 as well, are considerable limitations. It's not so simple to implement VMware FT, but I hope it will get better in the next releases.
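Regarding point 7, if you have a Linux system (or live CD) running on the candidate host, you can quickly check for the hardware virtualisation extensions from the shell; this is a generic check, not a VMware-specific one:
# a non-zero count means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo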

Finally, vSphere documentation is available at vmware.com.

Tuesday, May 19, 2009

RHEL 4.8 released

Yesterday, the next minor version of Red Hat Enterprise Linux 4 was released. The new version 4.8 contains the following updates and enhancements:
  • optimized drivers for RHEL 4 guests running on the KVM hypervisor
  • a Samba update for better interoperability with the Windows world
  • new kernel tunables for better performance
For details, see the official release notes published at redhat.com.

Wednesday, April 29, 2009

VMware vSphere 4.0 editions

Yesterday, VMware unveiled the new pricing and licensing model of the vSphere 4.0 platform. In my opinion, VMware is trying to strictly split the virtualization market into two parts - SMB and enterprise. Looking at the table of features below, I have a feeling that a gap is growing between them. Competitors should seize the chance to fill it.
  • SMB editions - ESXi Single Server, Essentials and Essentials Plus
  • Enterprise editions - Standard, Advanced, Enterprise and Enterprise Plus
Here is the mentioned table of features:

Thursday, April 16, 2009

Linux kernel crash dumps with kdump

Kdump is the official GNU/Linux kernel crash dumping mechanism and is part of the vanilla kernel. Before it, there were projects like LKCD for performing such things, but they weren't part of the mainline kernel, so you needed to patch the kernel or rely on the Linux distribution to include them. In the case of LKCD it was also difficult to configure, especially choosing which device to use for dumping.

The first mention of kexec (read what it is useful for and how to use it) in the GNU/Linux kernel was in the changelog of version 2.6.7. The kexec tool is a prerequisite for the kdump mechanism. Kdump was first mentioned in the changelog of version 2.6.13.

How does it work? When the kernel crashes, a new so-called capture kernel is booted via kexec. The memory of the previously crashed kernel is left intact and the capture kernel is able to capture it. In detail, the first kernel needs to reserve some memory for the capture kernel, which the capture kernel uses for booting. The consequence is that the total system memory is lowered by the reserved memory size.

When the capture kernel is booted, the old memory is captured from the following virtual /proc files:
  • /proc/vmcore - memory content in ELF format
  • /proc/oldmem - really raw memory image!

Next, we will check how to initialize the kdump mechanism, how to configure it and how to invoke it for testing purposes.
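As a teaser, here is a rough sketch of those steps on a RHEL 5-style system; the memory reservation value is just an example and the details belong to the follow-up article:
# 1. reserve memory for the capture kernel on the kernel command line in grub.conf, e.g.:
#      kernel /vmlinuz-... ro root=... crashkernel=128M@16M
# 2. enable and start the kdump service, then reboot so the reservation takes effect
chkconfig kdump on
service kdump start
# 3. trigger a test crash (needs sysrq enabled) - the capture kernel should boot and save the dump
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger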

Wednesday, April 15, 2009

New Sun Fire servers with Xeon 5500

Sun has released a new line of servers and blade modules based on Intel Xeon 5500-series processors. The new pieces are:
  • Sun Fire X2270 (1RU, 1 or 2 CPUs)
  • Sun Fire X4170 (1RU, 1 or 2 CPUs)
  • Sun Fire X4270 (2RU, 1 or 2 CPUs, 16 2.5" disks)
  • Sun Fire X4275 (2RU, 1 or 2 CPUs, 12 3.5" disks)
  • Sun Blade X6270 (1 or 2 CPUs)
  • Sun Blade X6275 (4 CPUs)
The official announcement of new servers with additional details is published at www.sun.com.

Wednesday, April 8, 2009

Sun VirtualBox 2.2 released

The next version of Sun's desktop hypervisor VirtualBox was released. The new version 2.2 brings the following important changes:
  • OpenGL 3D acceleration for Linux/OpenSolaris guests
  • OVF appliance import/export
  • USB and shared folder support for OpenSolaris
  • host-only networking mode
More details are in official changelog.
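OVF import and export are also available from the command line via VBoxManage. A minimal sketch, assuming an appliance file named appliance.ovf and a VM named myvm (both placeholders):
# import an OVF appliance into VirtualBox
VBoxManage import appliance.ovf
# export an existing VM as an OVF appliance
VBoxManage export myvm -o exported.ovf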

Wednesday, April 1, 2009

VMware ESX and SATA controllers

For a long time, the VMware ESX hypervisor supported only internal SCSI drives. The third update of the ESX hypervisor introduced support for some SATA controllers such as the Intel ICH-7. The newest fourth update adds support for the ICH-9 and ICH-10 chipsets as well. The same holds for the ESXi platform.

The big difference is which SATA mode is supported. For example, the ICH-7 chipset is supported in IDE/ATA mode only, so you can't use connected hard drives, only connected optical drives. The other chipsets are supported in AHCI (Advanced Host Controller Interface) mode, in which you can access internal SATA drives.

When IDE/PATA mode is used, you will be able to see the internal SATA (or emulated PATA) drives but you can't use them as VMFS storage. A VMFS filesystem can be created on SCSI-based disks only.

There is a nice knowledge base article about the topic. To make it easier to understand, I borrowed a quite self-explanatory image from the article:


VMware ESX/ESXi 3.5 update 4 released

The fourth update of the VMware ESX platform was released. It contains many hardware enhancements, such as support for the new Intel Xeon 5500 processors, additional SATA controllers and network interface cards. It supports new guests as well, such as the recently released SLES 11. The official release notes provide more comprehensive information.

Tuesday, March 24, 2009

SLES 11 released

Good news for SLES fans: the next major release of the product came out today. Together with SLES 11, the enterprise-ready desktop SLED 11 was released. Two other new products were announced as well:
  • SUSE Linux Enterprise High Availability Extension - the product integrates the OCFS2 cluster filesystem, the cluster-aware volume manager cLVM2, the distributed replicated block device DRBD and the Pacemaker cluster stack with the OpenAIS messaging and membership layer. The included DRBD version 8 supports active-active replication.
  • SUSE Linux Enterprise Mono Extension - the product provides an open-source, cross-platform .NET framework.
What other benefits does SLES 11 bring? As you can see above, it is more modular and some features were split into separate products. More highlights follow:
  • it is based on GNU/Linux kernel 2.6.27
  • in addition to AppArmor, it is SELinux ready
  • it provides OFED 1.4 (more about it here)
  • package management is based on fast update stack ZYpp
  • SLES 11 is greener - it supports tickless idle, which can leave the CPU in a power-saving state longer, and it provides more granular power profiles
  • it supports swapping over NFS for diskless clients
  • it supports partitioning a multiprocessor machine with CPUsets
  • virtualization layer is based on Xen 3.3
  • it is optimised for hypervisors VMware ESX, MS Hyper-V and Xen
  • default filesystem is EXT3
  • it supports kexec, kdump or SystemTap
  • it contains many other enhancements in asynchronous I/O, MPIO, NFS and iSCSI
The official product documentation isn't available yet. The release notes are here.

Quickly - SLES 10 reactivation

If you need to assign an already registered SLES 10 system to a new or different subscription, the quickest way to do it is to use the suse_register command from the console:
suse_register -i -f
The -f switch forces registration and -i runs the registration interactively. The registration form will be shown in the lynx text web browser. Have your activation code ready and finish the registration.

Thursday, March 12, 2009

Running Linux kexec

The generic form of the kexec command looks like
kexec -l kernel_image --initrd=kernel_initrd --append=command_line_options
The command has many other options, but the presented ones are the most important. To start the kernel reset, run
kexec -e
How does it work? The Linux kernel is placed in memory at a defined address offset; on the x86 architecture it begins at 0x100000. Kexec is capable of calling and running another kernel in the context of the current kernel. It copies the new kernel somewhere into memory, moves it into kernel dynamic memory and finally copies it to the final destination at that offset and runs it - the kernel is exchanged and the reset is performed. An example of how to reset a running SLES 10.x kernel follows
kversion=`uname -r`
kexec -l /boot/vmlinuz-$kversion --initrd=/boot/initrd-$kversion --append="`cat /proc/cmdline`"
kexec -e
The example for RHEL 5.x is slightly different; only the initrd file name changes:
kexec -l /boot/vmlinuz-$kversion --initrd=/boot/initrd-${kversion}.img --append="`cat /proc/cmdline`"

Does it have any drawbacks? As I said, there may be some buggy devices which won't work after a kernel reset. Typically, there are troubles with VGA adapters and their video memory initialization, which results in a garbled console after the reset. The recommendation is to use normal video mode for the console. You can change it by setting the vga parameter to zero and passing it as a kernel option (e.g. SLES 10 uses a video framebuffer by default):
vga=0
Also, earlier versions of kexec had stability issues on platforms other than x86. Today, kexec is supported on x86, x86_64, ppc64 and ia64.

Tuesday, March 10, 2009

Fast linux reboot with kexec

Kexec is a GNU/Linux kernel feature which allows kernel reboots to be performed faster. The time savings of up to a few minutes come from skipping the BIOS procedures and hardware reinitialization (each hardware component - like SCSI/FC HBAs - may have its own BIOS and POST which take some time to finish). Just as we have a cold or warm reset, we can now say we have a kernel reset.

The GNU/Linux boot process consists of several stages. The hardware stage, firmware stage and bootloader stage are kernel independent and run in a defined order. The hardware stage performs basic tasks such as device initialization and testing. The firmware stage, known on PCs as the BIOS, is in charge of hardware detection. The bootloader can be split into two parts: the first-level bootloader, like the master boot record on PCs, calls the second-level bootloader, which is able to boot the Linux kernel. The final stage is the kernel stage.

Kexec is a smart thing. It can bypass all the listed stages up to the kernel stage, meaning it skips everything connected with the hardware and jumps to the kernel stage directly. The risk is possible unreliability of devices that were never reinitialized, typically VGA adapters or some buggy cards.

What are the requirements to try it? The kernel has to be kexec-capable and you have to have the kexec tools installed. That is not a problem in today's Linux distributions. Both RHEL 5.x and SLES 10.x contain the kexec-tools package which you have to install, and their production kernels are capable of doing kernel resets. On SLES 10, you can check the running kernel configuration for the CONFIG_KEXEC variable:
zgrep CONFIG_KEXEC /proc/config.gz
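RHEL doesn't ship /proc/config.gz, but you can check the installed kernel configuration file instead:
grep CONFIG_KEXEC /boot/config-$(uname -r)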


Kexec is controlled with the command line program kexec. The command takes the kernel to be booted, its initrd and the kernel parameters, and starts the kernel reset.

Friday, March 6, 2009

VMware ESX 4.0 aka vSphere 4.0 platform

The next major release of the VMware ESX platform is being prepared. The platform, newly called vSphere 4.0, is going to be based on six building blocks which provide:
  1. vCompute - virtualization layer, hypervisor, live migration
  2. vStorage - storage management, replication
  3. vNetwork - network management, distributed switch, Cisco Nexus switch
  4. Availability - clustering, data protection
  5. Security - VMsafe APIs, vShield Zones
  6. Scalability - dynamic resource management, distributed power management
Furthermore, the new platform will support virtual machines with 8 virtual CPUs and 256 GB of virtual memory.

The second most important part of a virtual environment is centralized management. Today we know it as VMware VirtualCenter Server; in the future it should be called vCenter Suite. The good news is it will be available for Linux servers as well, so no more Windows licenses will be required.

Thursday, March 5, 2009

XenServer 5 license key

As you know, XenServer Enterprise Edition was released for free. The license key for the enterprise features and the installation media are available from the download page. The new free XenServer will be released on March 25. The provided license also covers the high availability and StorageLink features from the upcoming Citrix Essentials for XenServer. Updating to the new version will be possible.

Wednesday, March 4, 2009

VCB, vcbMounter, vcbRestore ... updated

I have added another article dedicated to VMware VCB, this time about backups over Samba or Windows shares. Here is the updated list:
  1. VM identification - how to identify a virtual machine you intend to back up? The vcbVmName command is the answer.
  2. VM full backup - how to perform a full backup of the chosen virtual machine? The vcbMounter command can do it.
  3. VM full backup data access - how to retrieve data from the virtual machine's full backup? It is possible to mount the backup image with the mountvm command.
  4. VM file level backup - the vcbMounter command is able to perform file-level backups as well.
  5. VM backup over NFS - this article describes a simple scenario of backing up a virtual machine over the NFS protocol.
  6. VM backup restore - it is important to know the process of restoring a virtual machine from the backup. You can use vcbRestore.
  7. VM backup with Samba or Windows share - the other approach to backing up virtual machines is to use Samba or Windows shares instead of an NFS server.

Friday, February 27, 2009

VCB basic usage - VM full backup with Samba

In the previous article about VMware VCB, I wrote about full backups to NFS shares. For completeness, I decided to write another one dedicated to backups to Samba or Windows shares.

The idea of the backup is the same. Let's have a Samba server available at IP address 192.168.1.1. The exported share for backups is backup-smb and the user with write access to it is backup.

Before we can continue, we need to allow smbclient to access the Samba server. You can do this from the VI client or directly from the ESX service console via the esxcfg-firewall command. First, let's check whether smbclient is allowed:
esxcfg-firewall -q smbClient
By default, the output of the command should be:
Service smbClient is blocked.
To reconfigure the ESX firewall to allow smbclient access, use the following command:
esxcfg-firewall -e smbClient
Now you should be able to browse the server (the command asks for the user's password first):
smbclient -L 192.168.1.1 -U backup
The example command output follows (Samba server on SLES10):
Domain=[NAS] OS=[Unix] Server=[Samba 3.0.28-0.2-1625-SUSE-CODE10]
Sharename       Type      Comment
---------       ----      -------
profiles        Disk      Network Profiles Service
backup-smb      Disk
IPC$            IPC       IPC Service (Samba 3.0.28-0.2-1625-SUSE-CODE10)
Domain=[NAS] OS=[Unix] Server=[Samba 3.0.28-0.2-1625-SUSE-CODE10]
Now, we are ready to create a simple backup script:
#!/bin/sh

BACKUP_SERVER="192.168.1.1"
BACKUP_USER="backup"
BACKUP_PASS="backup"
SMB_SHARE="backup-smb"
MOUNT_DIR="/backup"

[ -d $MOUNT_DIR ] || mkdir -p "$MOUNT_DIR" || exit 1

VM_BACKUP="`vcbVmName -s any: | grep name: | cut -d':' -f2`"

if [ ! -z "$VM_BACKUP" ]; then
    smbmount //${BACKUP_SERVER}/$SMB_SHARE $MOUNT_DIR \
        -o username=${BACKUP_USER},password=$BACKUP_PASS || exit 1

    for VM in $VM_BACKUP; do
        vcbMounter -a name:$VM -r $MOUNT_DIR/$VM
    done

    umount $MOUNT_DIR
fi

exit 0
It is simple, isn't it? The code is almost the same as for backups over NFS. We added variables defining our Samba user and password, and the mount command was exchanged for smbmount, the command line Samba client. If you insist on using the mount command, replace the line mounting the backup-smb share with:
mount -t smbfs //${BACKUP_SERVER}/$SMB_SHARE $MOUNT_DIR \
-o username=${BACKUP_USER},password=$BACKUP_PASS || exit 1
That's all. In such simple backup scenarios I prefer NFS because it is simple to set up and provides higher throughput than the SMB protocol. On the other hand, the SMB protocol provides a basic authentication mechanism (if you don't disable it).

Monday, February 23, 2009

XenServer is free

It's unbelievable! A few hours ago Citrix decided to release their Xen based hypervisor and complete virtualization solution XenServer for free (the official announcement is here). Until recently the product was available in four editions - Express, Standard, Enterprise and Platinum. The differences are outlined in the following table:

The Express edition has been free of charge so far, but it was missing some fundamental enterprise features like resource pools, live migration or the central management console XenCenter. These features are paid. Or rather, they were paid.

From now on, there is only one edition of XenServer, including the features of the Enterprise edition. Everything is free and you can download it. Cool! You don't have to spend any money on virtual machine live migration, resource pools or central management. What happens if we compare it with VMware ESXi? In my opinion, it seems the king might be dead. And a new king might be coming.

What do you think of it? What will VMware's answer be? I think it is a smart way to show us that Xen based hypervisors are enterprise ready and to spread them further. Given the current economic situation, they have a real chance to succeed.

Let me ask a final question. Who will need Microsoft Hyper-V now? If XenServer is free, and it is more mature and robust than Hyper-V, what will Hyper-V's new position be? Today, the winner is Citrix. Tomorrow, the opponents might surprise us. But don't miss the opportunity today. Download XenServer and spread it!

Wednesday, February 18, 2009

VMware vCenter Converter 4.0 was released

The previous version of Converter sat at 3.0.3 for a long time. The new standalone version is very similar to the one included in Virtual Infrastructure 3.5 (VI 3.5).

Before it, two editions were available - Starter and Enterprise, the second being part of VI 3.5. Here are the additional features of the Enterprise edition compared to Starter:
  • it supports multiple migration jobs
  • it supports cold migration
  • it is part of VI3.5 only (particularly VirtualCenter server)
What does the latest revision bring? It is free of charge, it has a larger set of supported source operating systems and it allows you to select the target virtual disks. It can newly migrate sources running Red Hat, SUSE or Ubuntu Linux. Furthermore, it is able to power off the source after the migration finishes. A more comprehensive comparison of version 4.0 and the version included in VI 3.5 is presented in this picture.

Monday, February 9, 2009

Aligning VMFS partition

Proper alignment of a filesystem on a disk partition may bring some I/O performance improvement. The typical reason is a RAID device underneath the accessed disk, which stripes data in chunks of a defined size, most commonly 64KB. As you know, no partition is placed at the very beginning of a disk because metadata like the MBR or the partition table has to be written there. It is clear that the default alignment may result in increased latency and thus lower throughput.

The same holds for the VMFS filesystem, both versions 2 and 3. The general rule is to align the VMFS partition on a 64KB boundary. The problem is the default partitioning done by the VMware ESX installer (or Red Hat Anaconda): it doesn't take alignment into account and simply lays out the disk partitions one after another. If you create the VMFS filesystem from the VirtualCenter client, it starts at 64KB. The output of the fdisk -lu command from a testing system follows:
Disk /dev/sda: 146.6 GB, 146685296640 bytes
255 heads, 63 sectors/track, 17833 cylinders, total 286494720 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start        End      Blocks  Id  System
/dev/sda1   *          63     208844      104391  83  Linux
/dev/sda2          208845   10442249    5116702+  83  Linux
/dev/sda3        10442250  281105369   135331560  fb  Unknown
/dev/sda4       281105370  286487144    2690887+   f  Win95 Ext'd (LBA)
/dev/sda5       281105433  282213854      554211  82  Linux swap
/dev/sda6       282213918  286294364    2040223+  83  Linux
/dev/sda7       286294428  286487144      96358+  fc  Unknown

Disk /dev/sdb: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders, total 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start        End      Blocks  Id  System
/dev/sdb1             128  251658224  125829048+  fb  Unknown
The first disk, /dev/sda, is the internal one and it was partitioned by the ESX installer. The VMFS partition has ID fb. The second disk was initialized from VirtualCenter and belongs to an external disk array. Its starting sector is 128, so it is aligned to 128 x 512B (sector size) = 64KB. The VMFS partition on /dev/sda is not aligned because 10442250 divided by 128 doesn't give an integer.

There is no non-destructive way to realign sub-optimally aligned VMFS partitions. You need to recreate the partitions from scratch, which requires backing up the ESX system and the VMFS filesystems, realigning the partitions and restoring the backup.

It is not given that every disk or disk array has its alignment boundary at 64KB; you should check the storage documentation. But 64KB is a good starting point and the most common value. The question is whether it is worthwhile to do at all, because the average performance benefit is around 10%.

I drew on a more comprehensive guide about the topic published at www.vmware.com. It contains details about the test environment, guest filesystem alignment and the steps to lay out partitions with fdisk, so read it if you are interested.
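For reference, a rough sketch of manually creating an aligned VMFS partition with fdisk follows. This is an interactive session on a hypothetical empty disk /dev/sdc; double-check it against the VMware guide before trying it on real storage:
fdisk /dev/sdc
# n  - create a new primary partition spanning the disk
# t  - change its type to fb (VMFS)
# x  - switch to the expert menu
# b  - move the beginning of data of partition 1 to sector 128 (128 x 512B = 64KB)
# w  - write the partition table and exit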


Tuesday, February 3, 2009

Sun xVM Server postponed

While the management system Ops Center 2.0 was released recently, it seems Sun has some issues with their Xen based hypervisor. According to this article published by the French magazine LeMagIT, it is going to be released in the second quarter of 2009.

Licensing open source

I was considering writing this article for a while because it doesn't fit any type of article I have published before, and it isn't my primary business to discuss various open source licenses here. The thing is, it is useful to understand their role, but it is often quite difficult to grasp what they actually want to say. Sometimes I have a feeling you need a law degree to understand them.

You know the obvious questions like "why does it have to be GPLed?", "why is this license not compatible with that one?" or "why can't it be part of the Linux kernel?". You know that an open source license ensures the availability of source code which you can modify and redistribute. The true pitfalls begin to appear when you would like to integrate two products available under two different licenses. To make things clearer I borrowed these two comprehensive schemes from chandanlog at Sun blogs. The first one presents the general attitude of open source licenses and a classical EULA towards source code. The second one explains the differences between open source licenses; they are quite minor but may have consequences that are easy to overlook.
Let's try to apply the licensing rules to the problem of releasing the ZFS filesystem with the Linux kernel. What's the problem? First, Sun owns some patent rights which prohibit such an action. Second, as the Linux kernel is GPLed, anything included in it has to be GPLed as well, while ZFS is covered by the CDDL license which requires that license to be preserved. That is where I see the main reason for the incompatibility. But when I realize there are other binary-only modules, like video drivers from ATI or NVIDIA, which are linked with the kernel via some sort of GPLed open source wrapper, why can't we do the same with ZFS?!? The question is whether it is legal.

The two practical schemes helped me understand the topic more deeply. The example with ZFS made the situation look complicated and I needed something to show me that it is not. I hope you will find these graphical explanations as useful as I did. And check out chandanlog, who created them!

Wednesday, January 28, 2009

SLES 10 update - Service Pack 2

Our final step is to move the system from SP1 to SP2. I have to mention that from now on you can also use the zypper tool; its command syntax and parameters are almost the same. Nevertheless, I'm going to continue with rug.

The same holds for SLES 10 SP2: it is a separate product with a separate update source tree. Let's begin.
  1. Again, subscribe to the SP2 update and install sources and to the available catalogs
    rug sa ftp://suse.mydom.com/update/SLES10-SP2-Updates/sles-10-i586 --type yum update-sp2
    rug sa ftp://suse.mydom.com/update/SLES10-SP2-Online/sles-10-i586 --type yum install-sp2
    rug sl

    # | Status | Type | Name | URI
    --+--------+------+------------+------------------------------------------------
    1 | Active | ZYPP | online | ftp://suse.mydom.com/install/i386/sles10
    2 | Active | YUM | update | ftp://suse.mydom.com/update/SLES10-Updates...
    3 | Active | YUM | update-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Updates...
    4 | Active | YUM | online-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Online...
    5 | Active | YUM | update-sp2 | ftp://suse.mydom.com/update/SLES10-SP2-Updates...
    6 | Active | YUM | online-sp2 | ftp://suse.mydom.com/update/SLES10-SP2-Online...


    rug sub online-sp2
    rug sub update-sp2
    rug ca

    Sub'd? | Name | Service
    -------+------------+-----------
    Yes | online | online
    Yes | update | update
    Yes | update-sp1 | update-sp1
    Yes | online-sp1 | online-sp1
    Yes | update-sp2 | update-sp2
    Yes | online-sp2 | online-sp2
  2. Perform update
    rug up -y

    Resolving Dependencies...

    The following packages will be installed:
    aaa_base 10-12.47 (ftp://suse.mydom.com/update/SLES10-SP2-Updates/sles-10-i586)
    ...
    ...
    Downloading Packages...
    ...
    Transaction...
    ...
    Finishing...
    Transaction Finished
  3. Move to SP2 product
    rug in -y -t patch product-sles10-sp2

  4. Verify the new version
    SPident

    CONCLUSION: System is up-to-date!
    found SLE-10-i386-SP2 + "online updates"
If you are not using a local update server but the official ones like nu.novell.com, you can still follow the same steps. It is actually simpler because you don't have to add the new update and install sources by hand; just use the switch-update-server and move-to-sles10-sp1 or move-to-sles10-sp2 patches, which prepare the current system for the transition from GA to SP1 and from SP1 to SP2.
  1. Before you start the update, install the switch-update-server patch and prepare the system
    rug in -y -t patch switch-update-server
    /usr/bin/switch-update-server

    rug sub SLES10-Updates
    rug in -y -t patch move-to-sles10-sp1
  2. Perform similar steps for SP2
  3. Continue with the update the same way as shown in the article
Perhaps it would be interesting to compare the whole process with updating another enterprise distribution like RHEL - how difficult it is and so on.

Finally, I would like to mention the main sources of information this article is based on. The official documentation for updating from SLES 10 GA to SP1 and from SP1 to SP2 is published at www.novell.com:
  1. How to update to SLES/SLED 10 SP1
  2. How to update to SLES/SLED 10 SP2

Monday, January 26, 2009

VirtualCenter for Linux

It seems a VirtualCenter Server for GNU/Linux is being prepared and might be released with the next version of Virtual Infrastructure, or rather its successor called VMware vSphere 4.0. It's going to be presented at the upcoming virtualisation event VMworld Europe 2009. The official abstract of the technical session covering this topic is published at the VMworld Europe 2009 website.

Thursday, January 22, 2009

New RHEL 5.3

The next minor update of Red Hat Enterprise Linux was released recently. I wrote about its predecessor - RHEL 5.2 - here a few months ago.

So what does it bring? Let's have a look at some of the news:
  • it's mainly an update release - there are updated packages providing auditd, NetworkManager and sudo
  • it contains many virtualization enhancements - the number of supported physical CPUs and the maximum memory are increased, and support for new Intel x86-64 CPUs is included
  • it is the first release with the OpenJDK Java implementation!
  • it contains an enhanced SystemTap (aka DTrace for Linux)
For more details, see the official release notes and the article from Red Hat News.

Tuesday, January 20, 2009

SLES 10 update - Service Pack 1

Keep in mind that SLES 10 GA and SLES 10 SP1 are treated as separate products. We need to subscribe to the new installation and update sources and repeat the previous steps with some small additions. My SLES 10 SP1 installation source is part of the update server, which is synchronized from the official Novell update server with a YUP proxy.
  1. Subscribe to the SLES 10 SP1 installation and update sources, then subscribe to the catalogs
    rug sa ftp://suse.mydom.com/update/SLES10-SP1-Updates/sles-10-i586 --type yum update-sp1
    rug sa ftp://suse.mydom.com/update/SLES10-SP1-Online/sles-10-i586 --type yum install-sp1
    rug sl

    # | Status | Type | Name | URI
    --+--------+------+------------+------------------------------------------------
    1 | Active | ZYPP | online | ftp://suse.mydom.com/install/i386/sles10
    2 | Active | YUM | update | ftp://suse.mydom.com/update/SLES10-Updates...
    3 | Active | YUM | update-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Updates...
    4 | Active | YUM | online-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Online...

    rug sub online-sp1
    rug sub update-sp1
    rug ca

    Sub'd? | Name | Service
    -------+------------+-----------
    Yes | online | online
    Yes | update | update
    Yes | update-sp1 | update-sp1
    Yes | online-sp1 | online-sp1


  2. First, install the required ZENworks Management Agent patch, otherwise there is a risk that rug won't work properly
    rug in -y -t patch slesp1o-liby2util-devel
  3. Restart the zmd service and perform the update
    rczmd restart
    rug up -y

    Resolving Dependencies...

    The following packages will be installed:
    aaa_base 10-12.33.3 (ftp://suse.mydom.com/update/SLES10-SP1-Online/sles-10-i586)
    ...
    Downloading Packages...
    ...
    Transaction...
    ...
    Finishing...
    Transaction Finished
  4. Finally, move the system to SP1 version
    rug in -y -t patch product-sles10-sp1
  5. Check the system version
    SPident

    CONCLUSION: System is up-to-date!
    found SLE-10-i386-SP1 + "online updates"
The result is a system with SLES 10 SP1 and all required updates applied. It is recommended to reboot the system to apply all the included changes (especially the new kernel).

Thursday, January 15, 2009

Sun xVM Server 1.0 delayed?

Sun released their unified multiplatform management system for physical and virtual servers a few months ago, but they still lack their own hypervisor, called xVM Server. Its release is planned for the first quarter of 2009 and it seems to be delayed now. Nevertheless, we can already draw some conclusions about the product:
  • it supports MS Windows, Linux and Solaris guests
  • it is VMware compatible, so you can directly use available VMware appliances
  • it has built-in web-based management
  • it supports virtual SMP (2 virtual CPUs)
  • it supports live migration
  • it supports resource pools
  • it should be released under the GPLv3 licence at no cost (not a surprise, as it is based on Xen)
I believe it will be released as soon as possible. Today, only the source code of xVM is available; binaries will come with the official release.

Wednesday, January 14, 2009

SLES 10 update - GA update

Regular updates are among the basic tasks of Linux system administration. Each distribution has its own way of performing them. Updating SLES 10 is not as straightforward as many of us expect, so I decided to write a summary of the procedure.

I'll be doing it with the rug command, not via the graphical YaST. From SLES 10 SP1 on you can use the zypper command, which is much faster than rug and fully independent of the Novell ZENworks Linux Management Agent. If you don't use Novell ZENworks for managing your Linux systems, you can afford to disable the zmd service and use the zypper tool only. To make the update faster I'll be using a local update server at the URL ftp://suse.mydom.com (you can deploy your own with YUP - Yum Update Proxy).
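If you do decide to drop the agent, disabling it could look roughly like this (a sketch only; rczmd is the zmd init script wrapper used later in this article, and the service name may differ slightly between releases):
rczmd stop          # stop the ZENworks management daemon
chkconfig zmd off   # keep it from starting at boot (service may be named novell-zmd on some systems)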

Let's begin with an initial installation of SLES 10 GA (the i386 platform in particular, but it's the same for the others).
  1. First, identify the current system
    SPident

    CONCLUSION: System is up-to-date!
    found SLES-10-i386-current
  2. Subscribe to SLES 10 GA installation source (it may be required for dependencies during update)
    rug sa ftp://suse.mydom.com/install/i386/sles10 --type zypp online
  3. Subscribe to SLES 10 update source
    rug sa ftp://suse.mydom.com/update/SLES10-Updates/sles-10-i586 --type yum update
  4. Check the subscriptions
    rug sl

    # | Status | Type | Name | URI
    --+--------+------+--------+----------------------------------------------------
    1 | Active | ZYPP | online | ftp://suse.mydom.com/install/i386/sles10
    2 | Active | YUM | update | ftp://suse.mydom.com/update/SLES10-Updates...
  5. Check available catalogs
    rug ca

    Sub'd? | Name | Service
    -------+--------+--------
    | update | update
    | online | online
  6. Subscribe to the catalogs and check them
    rug sub update
    rug sub online
    rug ca

    Sub'd? | Name | Service
    -------+------------+-----------
    Yes | update | update
    Yes | online | online
  7. Update SLES 10 GA system
    rug up -y

    Resolving Dependencies...

    The following packages will be installed:
    apache2 2.2.3-16.2 (ftp://suse.mydom.com/update/SLES10-Updates/sles-10-i586)
    ...
    ...
    Downloading Packages...
    ...
    Transaction...
    ...
    Finishing...
    Transaction Finished
  8. Check the version after update finished successfully
    SPident

    CONCLUSION: System is up-to-date!
    found SLES-10-i386-current + "online updates"
Now our SLES 10 system is ready for the transition to Service Pack 1. Reboot the system before we proceed.

Thursday, January 8, 2009

VMware Server 1.0.8 on openSUSE 11.1

I decided to upgrade my laptop from the almost "prehistoric" openSUSE 10.1 to the newest version 11.1. It was quite successful, but I had to resolve an issue with VMware Server 1.0.8, which I use a lot in my work.

The whole configuration process crashed on the compilation of the vmware kernel modules. The kernel version in the new openSUSE is 2.6.27.7 and, as there are no precompiled modules for it in version 1.0.8, they need to be recompiled first. Don't forget to install the kernel-source, make, gcc and patch packages. Secondly, you need to configure the installed kernel sources with make cloneconfig so they correspond to the running kernel and platform. Finally, configure the VMware Server installation. Everything follows here:
zypper in -y kernel-source make gcc patch
cd /usr/src/linux
make mrproper; make cloneconfig
vmware-config.pl
But the last command produces these errors:
Building the vmmon module.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config2/vmmon-only'
make -C /lib/modules/2.6.27.7-9-pae/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-2.6.27.7-9-obj/i386/pae'
make -C ../../../linux-2.6.27.7-9 O=/usr/src/linux-2.6.27.7-9-obj/i386/pae/. modules
CC [M] /tmp/vmware-config2/vmmon-only/linux/driver.o
In file included from /tmp/vmware-config2/vmmon-only/./include/x86.h:20,
from /tmp/vmware-config2/vmmon-only/./include/machine.h:24,
from /tmp/vmware-config2/vmmon-only/linux/driver.h:15,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:49:
/tmp/vmware-config2/vmmon-only/./include/x86apic.h:79:1: warning: "APIC_BASE_MSR" redefined
In file included from include2/asm/fixmap_32.h:29,
from include2/asm/fixmap.h:5,
from include2/asm/apic.h:9,
from include2/asm/smp.h:13,
from /usr/src/linux-2.6.27.7-9/include/linux/smp.h:28,
from /usr/src/linux-2.6.27.7-9/include/linux/topology.h:33,
from /usr/src/linux-2.6.27.7-9/include/linux/mmzone.h:687,
from /usr/src/linux-2.6.27.7-9/include/linux/gfp.h:4,
from /usr/src/linux-2.6.27.7-9/include/linux/kmod.h:22,
from /usr/src/linux-2.6.27.7-9/include/linux/module.h:13,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:12:
include2/asm/apicdef.h:134:1: warning: this is the location of the previous definition
In file included from /tmp/vmware-config2/vmmon-only/./include/machine.h:24,
from /tmp/vmware-config2/vmmon-only/linux/driver.h:15,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:49:
/tmp/vmware-config2/vmmon-only/./include/x86.h:830:1: warning: "PTE_PFN_MASK" redefined
In file included from include2/asm/paravirt.h:7,
from include2/asm/irqflags.h:55,
from /usr/src/linux-2.6.27.7-9/include/linux/irqflags.h:57,
from include2/asm/system.h:11,
from include2/asm/processor.h:17,
from /usr/src/linux-2.6.27.7-9/include/linux/prefetch.h:14,
from /usr/src/linux-2.6.27.7-9/include/linux/list.h:6,
from /usr/src/linux-2.6.27.7-9/include/linux/module.h:9,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:12:
include2/asm/page.h:22:1: warning: this is the location of the previous definition
In file included from /tmp/vmware-config2/vmmon-only/linux/vmhost.h:13,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:71:
/tmp/vmware-config2/vmmon-only/./include/compat_semaphore.h:5:27: error: asm/semaphore.h: No such file or directory
/tmp/vmware-config2/vmmon-only/linux/driver.c:146: error: unknown field 'nopage' specified in initializer
/tmp/vmware-config2/vmmon-only/linux/driver.c:147: warning: initialization from incompatible pointer type
/tmp/vmware-config2/vmmon-only/linux/driver.c:150: error: unknown field 'nopage' specified in initializer
/tmp/vmware-config2/vmmon-only/linux/driver.c:151: warning: initialization from incompatible pointer type
/tmp/vmware-config2/vmmon-only/linux/driver.c: In function 'LinuxDriver_Ioctl':
/tmp/vmware-config2/vmmon-only/linux/driver.c:1670: error: too many arguments to function 'smp_call_function'
make[4]: *** [/tmp/vmware-config2/vmmon-only/linux/driver.o] Error 1
make[3]: *** [_module_/tmp/vmware-config2/vmmon-only] Error 2
make[2]: *** [sub-make] Error 2
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/linux-2.6.27.7-9-obj/i386/pae'
make: *** [vmmon.ko] Error 2
make: Leaving directory `/tmp/vmware-config2/vmmon-only'
Unable to build the vmmon module.
The compilation of the vmmon module crashed because of an incompatibility between the kernel version and the available vmmon module sources. The solution is to download the updated modules package vmware-update-2.6.27-5.5.7-2 and apply it:
wget http://www.insecure.ws/warehouse/vmware-update-2.6.27-5.5.7-2.tar.gz
tar zxfv vmware-update-2.6.27-5.5.7-2.tar.gz
cd vmware-update-2.6.27-5.5.7-2
./runme.pl
This updates all the required modules and the vmware-config.pl configuration script. After that, the compilation of the vmmon module succeeds and you can finish the configuration. I hope it helps.

Tuesday, January 6, 2009

Running NTPD inside XEN domU or not?

There is a question of how to configure the ntpd time synchronization daemon inside a Linux domU. Is it better to guarantee the correct time in dom0 via ntpd and rely on the automatic time synchronization between domU and dom0? Or is it preferable to make the domU clock independent of dom0?

I'm not sure about the right answer. I'm used to configuring the ntpd daemon of every Linux system the same way, which means one configuration suits almost every system, so I would rather use the second approach. First, you need to tell the system to make the domU clock independent via the Linux sysctl interface:
echo "xen.independent_wallclock = 1" >> /etc/sysctl.conf
sysctl -p
After that, you can configure ntpd as you wish. To be sure, check the set value with
sysctl xen.independent_wallclock