Showing posts with label linux. Show all posts

Tuesday, May 17, 2011

CentOS 6?

I'm a big fan of the CentOS project. I use it in production and recommend it to others as an enterprise-ready Linux distro. I have to admit I was quite disappointed that the project developers weren't able to tell the community why the upcoming releases were, and still are, so overdue. I was used to downloading CentOS images one or two months after the corresponding RHEL release was announced. The situation changed with RHEL 5.6, which has been available since January 2011, while the corresponding CentOS release didn't appear until April 2011. It took about three months to release it instead of the usual one or two. By the way, the main news in RHEL 5.6 is:
  • full support for the EXT4 filesystem (included in previous releases as a technical preview)
  • a new version 9.7 of the BIND nameserver, supporting NSEC3 resource records in DNSSEC and new cryptographic algorithms in DNSSEC and TSIG
  • a new version 5.3 of the PHP language
  • the SSSD daemon, centralizing identity management and authentication
More details on RHEL 5.6 are officially available here.

The situation around the release date of CentOS 6 was similar, or perhaps worse. As you know, RHEL 6 has been available since November 2010. I considered CentOS 6 almost dead after I read about transitions to Scientific Linux, or about purchasing support from Red Hat and migrating CentOS installations to RHEL. But according to this schedule, the people around CentOS seem to be working hard again, and CentOS 6 should be available at the end of May. I hope the project will continue, as I don't know of a better alternative to RHEL (a RHEL clone) than CentOS. The question is how this whole, in my opinion unnecessary, situation will influence the reputation of the project.

Tuesday, May 3, 2011

Quickly - persistent modules loading on RHEL

The kernel modules required to boot the system are part of an initial ramdisk which is automatically loaded into memory by the boot loader. The ramdisk contains enough modules to mount the root filesystem and to initialize essential devices like the keyboard, console, or various expansion cards. The boot process then continues by running the init process.

During the next phase, the other modules referenced by the operating system are loaded automatically. The modules are called by their aliases set in the /etc/modprobe.conf configuration file. A typical alias is e.g. eth0 for a network interface card or usb-controller for a USB controller.

If we need to load a specific module during system boot and there is no way to reference it, we have a few choices:
  • Place the particular modprobe command in the /etc/rc.d/rc.local script, which is called at the end of the whole boot process. But that may already be too late at this phase.
  • Or better, place the command in the /etc/rc.modules file, which is read and executed by the /etc/rc.d/rc.sysinit initialization script during the system initialization phase. This way, the modules are loaded as soon as possible.
The /etc/rc.modules file does not exist by default, so first create it and make it executable. I think the first method is commonly used by many of us, but the second one is, in my opinion, more systematic.
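A minimal sketch of that second approach follows (the ipmi module names are just examples, not from this article); it writes under a scratch root so it is safe to try, while on a real system the file would be created directly under /etc:

```shell
# Create a minimal /etc/rc.modules (the ipmi modules below are only
# examples). Demonstrated against a scratch root so the sketch is safe
# to run; on a real system, drop the $ROOT prefix.
ROOT="${ROOT:-/tmp/demo-root}"
mkdir -p "$ROOT/etc"

cat > "$ROOT/etc/rc.modules" <<'EOF'
#!/bin/sh
# Loaded early by /etc/rc.d/rc.sysinit - keep it short and simple
modprobe ipmi_msghandler
modprobe ipmi_devintf
EOF

# rc.sysinit only runs the file if it is executable
chmod +x "$ROOT/etc/rc.modules"
```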

Tuesday, February 8, 2011

vMA missing libraries

If you are using vMA (vSphere Management Assistant) for some specific management tasks like UPS monitoring  or running a scheduled backup script from cron daemon, you may experience an error similar to this one:
Can't load '/usr/lib/perl5/site_perl/5.8.8/libvmatargetlib_perl.so'
for module vmatargetlib_perl: libtypes.so: cannot open shared object
file: No such file or directory at
/usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DynaLoader.pm line 230.
at /usr/lib/perl5/site_perl/5.8.8/VMware/VmaTargetLib.pm line 10
Compilation failed in require at /usr/lib/perl5/site_perl/5.8.8/VMware/VIFPLib.pm line 10.
Such behaviour is typically caused by a misunderstanding of how the shell environment in vMA is configured. The most common mistake is testing the affected script with sudo, which strips out some environment variables (especially LD_LIBRARY_PATH) due to security restrictions. Otherwise, the error shouldn't appear, because /etc/bashrc exports the VMware SDK library path implicitly:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/vmware/vma/lib64:/opt/vmware/vma/lib
So, in the case of sudo or other scenarios throwing the presented error, try creating a wrapper script which explicitly exports the list of directories to search for ELF libraries again:
#!/bin/bash

LD_LIBRARY_PATH=/opt/vmware/vma/lib64:/opt/vmware/vma/lib  
export LD_LIBRARY_PATH

/path/to/original-script "$@"

exit $?

Tuesday, January 18, 2011

YUM download only mode

How many times have I been in a situation where I needed to update a server running RHEL, but I wasn't on site and had no reliable way to reboot the server after installing a new kernel or glibc package on it? Yes, I have a test environment and I test updates on it, but many installations are too critical to just run yum update -y and then shutdown -r now. On top of that, there are the well-known Murphy's laws, which can do more damage than we are able to imagine.

Instead of remotely figuring out why a server is suddenly unresponsive, I try to prepare an offline update archive in advance (assuming there is no update server available, but that is another situation) and then apply it during a site visit.

As I'm talking about RHEL, I'm using YUM, the Yellowdog Updater, Modified. On a RHEL 5.x system, this tool is able to download updates locally without installing them. It only requires the download plugin, which is part of the yum-downloadonly package. Install it with:

yum install yum-downloadonly

The next lines contain common commands that I use for downloading updates:

yum install PACKAGE_NAME -y --downloadonly
yum update -y --downloadonly

If we have a RHEL 4.x server, this package isn't available and we need to install another package called yum-utils, which contains a similar tool, yumdownloader.

yum install yum-utils -y

Here is how to use the tool:

yumdownloader PACKAGE_NAME

If we want to download all the available updates with yumdownloader, we need to get a list of all packages with yum check-update and then pass it to yumdownloader. You can do it from the shell with sed, cut, awk, or whatever you prefer:

for PKG in `yum check-update | cut -d' ' -f1`; do
    yumdownloader $PKG
done
For a more detailed description of the tools and their parameters, have a look at their man pages.
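One caveat with the loop above: yum check-update also prints a heading and blank lines before the package list, which end up in $PKG as well. A slightly more defensive sketch filters those out with awk; it is demonstrated here against canned sample output (the package names are made up), while in practice you would pipe `yum -q check-update` in instead:

```shell
# Canned sample of what `yum check-update` prints (hypothetical packages)
sample='
kernel.x86_64        2.6.18-194.el5       rhel-updates
glibc.i686           2.5-49.el5           rhel-updates
'

# Keep only lines that look like "name version repo" and print the name
echo "$sample" | awk 'NF == 3 {print $1}'
```

The same `awk 'NF == 3 {print $1}'` filter can replace the cut in the loop above.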

Tuesday, August 31, 2010

Red Hat Enterprise Linux 5.5 - what's new?

It's been a few months since RHEL 5.5 was released (March 2010). Despite this, I would like to point out the major changes and additions compared to the previous release, RHEL 5.4. So, what's new:
  • Kickstart installation - it is possible to exclude package groups in the same way as single packages.
  • KVM guests and Cluster Suite - management of KVM-based virtual guests with Cluster Suite is supported.
  • SPICE - RHEL 5.5 includes components of the Simple Protocol for Independent Computing Environments, which is a competitor to VMware's PCoIP or Citrix's HDX.
  • PCI passthrough - physical PCI devices attached to virtual guests now work better.
  • Huge page support - it has been extended to virtual guests with libvirt.
  • Windows 7 support - new samba3x packages supporting Windows 7 are included.
For more details read the RHEL 5.5 official release notes.

Thursday, June 24, 2010

SLES 11 SP1 released

I have decided to write a brief summary of the new features and enhancements available with the first service pack of Novell's SLES 11. I need to know the major differences between the GA and SP1 releases in my everyday work, and perhaps it will help you in the same way. The original post about SLES 11 GA is here. So, what's new?
  • it is based on GNU/Linux kernel 2.6.32
  • it provides web based YaST for remote management called WebYaST
  • UEFI booting (useful with disks larger than 2 TB) is now supported on AMD64/Intel64
  • it includes many driver updates (e.g., QLogic/Emulex HBAs, Broadcom NICs)
  • it includes new XEN 4.0
  • KVM hypervisor is now fully supported, good news
  • it fully supports the latest enterprise Intel Xeon 5600 and 5700 processors, which can greatly improve the performance of the XEN hypervisor by decreasing VM latency
  • finally, it contains all the latest security and bug fixes available since the release of GA
It seems there weren't that many changes in SP1, but I think XEN 4.0 or KVM support from Novell is reason enough to move our GA installations to SP1. With XEN 4.0, we can benefit from these nifty features:
  • fault tolerance
  • memory overcommitment
  • USB paravirtualization and VGA pass-through
  • live snapshots and clones
  • 64 vCPUs per VM
  • 1TB RAM per XEN host
Now, it depends on how fast and how successfully Novell can implement and integrate these features into their management tools like YaST.

For more comprehensive coverage of SLES 11 SP1, check the official release notes at www.novell.com.

Tuesday, September 29, 2009

VMware Server 1.0.x library dependency problem

At the beginning of the year, I wrote this article about problems between the older VMware Server 1.0.x and newer Linux distributions. The problem is related to the VMware kernel modules, whose source code is not compatible with newer Linux kernels.

I was surprised by one thing. When I upgraded VMware Server from version 1.0.8 to 1.0.9, the VMware Server console stopped working. The new version was installed on the same system (openSUSE 11.1) as the old one, so I don't understand the reason. The important thing is that I found a solution. The new version began producing these error messages after running the vmware command:
/usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1: version `GCC_4.2.0' not found (required by /usr/lib/libstdc++.so.6)
/usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1: version `GCC_4.2.0' not found (required by /usr/lib/libstdc++.so.6)
/usr/lib/vmware/bin/vmware: symbol lookup error: /usr/lib/libgio-2.0.so.0: undefined symbol: g_thread_gettime
The fix that worked for me was to unset this environment variable, which influences the behaviour of GTK2 applications:
unset GTK2_RC_FILES
Normally, the variable references the gtkrc files defining the user's GTK2 environment. Try it, and I hope it helps.

Wednesday, September 2, 2009

Red Hat Enterprise Linux 5.4 released

Today, the next minor version of Red Hat's flagship Linux distribution, RHEL 5.4, was released. Here is a brief summary of the new features and updates:
  • KVM hypervisor - Full support for the Kernel-based Virtual Machine is now included. XEN support is included as well, but you can't use both XEN and KVM at the same time; each hypervisor requires a different kernel. You need a 64-bit machine to run KVM. It supports RHEL 3/4/5 or Windows XP/2003/2008 as guests.
  • KVM paravirtualized drivers - They are available for Windows XP/2003/2008 in the package virtio-win.
  • FUSE support - The new version includes modules for Filesystem in Userspace (FUSE) and related utilities. Support for XFS was added as well, and it includes updates to the CIFS and EXT4 filesystems.
  • Infiniband drivers - It contains some portions of the upcoming Open Fabrics Enterprise Distribution (OFED) 1.4.1.
New release of RHEL contains many other updates and enhancements which aren't mentioned here. For more details read the RHEL 5.4 official release notes.

Wednesday, August 19, 2009

Linux rc.local script

Sometimes you need to run some commands during your Linux server's startup, and you don't want to waste time preparing a proper init script. Common tasks are loading a kernel module, changing the speed of a network interface, and so on.

Red Hat distributions provide the rc.local script for this task. You can find it in the directory /etc/rc.d. The script is executed after all the other init scripts. This is ensured by the proper start symlinks pointing to the /etc/rc.d/rc.local script:

/etc/rc.d/rc2.d/S99local
/etc/rc.d/rc4.d/S99local
/etc/rc.d/rc3.d/S99local
/etc/rc.d/rc5.d/S99local

SUSE distros like SLES or openSUSE provide a similar mechanism. You have two scripts available. The before.local script should contain everything you want to run before a runlevel is entered. The after.local script works like Red Hat's rc.local script: it contains everything that should be executed after the runlevel is reached. The scripts don't exist by default; you need to create them first in the directory /etc/init.d. They don't have to be made executable.

Besides this, Red Hat's rc.local script is executed only in runlevels 2, 3, 4, and 5; it is ignored in single-user mode. SUSE's after.local and before.local are interpreted in all runlevels, including runlevel 1.
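As an illustration, a typical rc.local addition might look like the following (the bonding module and the NIC settings are purely hypothetical examples); the sketch writes the content to a scratch file so it can be syntax-checked without touching a real system:

```shell
# A hypothetical rc.local body - written to a scratch file here so the
# sketch can be syntax-checked safely; on a real Red Hat system this
# content would go into /etc/rc.d/rc.local.
cat > /tmp/rc.local.example <<'EOF'
#!/bin/sh
# Executed after all other init scripts (S99local in runlevels 2-5)
touch /var/lock/subsys/local

/sbin/modprobe bonding                  # example: load an extra module
/sbin/ethtool -s eth0 speed 100 duplex full autoneg off  # example NIC tuning
EOF

sh -n /tmp/rc.local.example && echo "syntax OK"
```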

Tuesday, May 19, 2009

RHEL 4.8 released

Yesterday, the next minor version of Red Hat Enterprise Linux 4 was released. The new version, 4.8, contains the following updates and enhancements:
  • optimized drivers for RHEL 4 guests running on the KVM hypervisor
  • a SAMBA update for better interoperability with the Windows world
  • new kernel tunables for better performance
For details, there are official release notes published at redhat.com.

Thursday, April 16, 2009

Linux kernel crash dumps with kdump

Kdump is the official GNU/Linux kernel crash dumping mechanism and is part of the vanilla kernel. Before it, there were projects like LKCD for performing such tasks, but they weren't part of the mainline kernel, so you needed to patch the kernel or rely on your Linux distribution to include them. In the case of LKCD, it was difficult to configure, especially choosing which device to use for dumping.

The first mention of kexec (read what it is useful for and how to use it) in the GNU/Linux kernel was in the changelog of version 2.6.7. The kexec tool is a prerequisite for the kdump mechanism. Kdump was first mentioned in the changelog of version 2.6.13.

How does it work? When the kernel crashes, a new, so-called capture kernel is booted via the kexec tool. The memory of the crashed kernel is left intact, and the capture kernel is able to capture it. In detail, the first kernel needs to reserve some memory for the capture kernel, which the capture kernel uses for booting. The consequence is that the total system memory is lowered by the reserved memory size.
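The memory reservation described above is requested with the crashkernel= boot parameter. As a sketch (the kernel version, size, and offset below are illustrative values, not taken from this article), a grub entry on a RHEL-like system might contain:

```shell
# /boot/grub/menu.lst (illustrative values -- size and offset vary by distro)
kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 crashkernel=128M@16M
```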

When the capture kernel is booted, the old memory is captured from the following virtual /proc files:
  • /proc/vmcore - memory content in ELF format
  • /proc/oldmem - really raw memory image!

Next, we will look at how to initialize the kdump mechanism, how to configure it, and how to invoke it for testing purposes.

Tuesday, March 24, 2009

SLES 11 released

Good news for SLES fans. The next major release of the product came out today. Together with SLES 11, the enterprise-ready desktop SLED 11 was released. Two other new products were announced as well:
  • SUSE Linux Enterprise High Availability Extension - the product integrates the OCFS2 cluster filesystem, the cluster-aware volume manager cLVM2, the distributed replicated block device DRBD, and the Pacemaker cluster stack with OpenAIS messaging and membership layer. The included DRBD version 8 supports active-active replication.
  • SUSE Linux Enterprise Mono Extension - the product provides an open-source cross-platform .NET framework.
What other benefits does SLES 11 bring? As you can see above, it is more modular: some features were bundled into separate products. More highlights follow:
  • it is based on GNU/Linux kernel 2.6.27
  • in addition to AppArmor, it is SELinux ready
  • it provides OFED 1.4 (more about it here)
  • package management is based on fast update stack ZYpp
  • SLES 11 is greener - it supports tickless idle, which can leave the CPU in a power-saving state longer, and it provides more granular power profiles
  • it supports swapping over NFS for diskless clients
  • it supports partitioning multiprocessor machine by CPUset System
  • virtualization layer is based on Xen 3.3
  • it is optimised for hypervisors VMware ESX, MS Hyper-V and Xen
  • default filesystem is EXT3
  • it supports kexec, kdump or SystemTap
  • it contains many other enhancements of asynchronous I/O, MPIO, NFS or iSCSI
The official product documentation isn't available yet. The release notes are here.

Thursday, March 12, 2009

Running Linux kexec

The generic form of kexec command looks like
kexec -l kernel_image --initrd=kernel_initrd --append=command_line_options
The command has many other options available, but the presented ones are the most important. To start a kernel reset, run
kexec -e
How does it work? The Linux kernel is placed in memory at a defined address offset; on the x86 architecture, it begins at 0x100000. Kexec is capable of calling and running another kernel in the context of the current one. It copies the new kernel somewhere into memory, moves it into kernel dynamic memory, and finally copies it to the final destination (the offset) and runs it - the kernel is exchanged and the reset is performed. An example of how to reset a running SLES 10.x kernel follows
kversion=`uname -r`
kexec -l /boot/vmlinuz-$kversion --initrd=/boot/initrd-$kversion --append="`cat /proc/cmdline`"
kexec -e
The example for RHEL 5.x is slightly different (kversion is set as above):
kexec -l /boot/vmlinuz-$kversion --initrd=/boot/initrd-${kversion}.img --append="`cat /proc/cmdline`"

Does it have any drawbacks? As I said, there may be some buggy devices which won't work after a kernel reset. Typically, there are troubles with VGAs and their video memory initialization, which results in a garbled console after the reset. The recommendation is to use normal video mode for the console. You can change it by setting the vga parameter to zero and passing it as a kernel option (e.g. SLES 10 uses a video framebuffer by default):
vga=0
Also, earlier versions of kexec had stability issues on platforms other than x86. Today, kexec is supported on x86, x86_64, ppc64, and ia64.

Tuesday, March 10, 2009

Fast linux reboot with kexec

Kexec is a GNU/Linux kernel feature which allows kernel reboots to be performed faster. The time savings of up to a few minutes come from not performing BIOS procedures and hardware reinitialization (each piece of hardware, like SCSI/FC HBAs, may have its own BIOS and POST, which takes some time to finish). Just as we have cold and warm resets, we can now say we have a kernel reset.

The GNU/Linux boot process consists of several stages. The hardware stage, firmware stage, and bootloader stage are kernel independent and run in a defined order. The hardware stage performs basic tasks such as device initialization and testing. The firmware stage, known on PCs as the BIOS, is in charge of hardware detection. The bootloader can be split into two parts: the first-level bootloader, like the master boot record on PCs, calls the second-level bootloader, which is able to boot the Linux kernel. The final stage is the kernel stage.

Kexec is a smart thing: it can bypass all the listed stages up to the kernel stage. That means it is able to skip everything connected with hardware and jump to the kernel stage directly. The risk is the likely unreliability of untouched devices, typically VGAs or some buggy cards.

What about the requirements to try it? The kernel has to be kexec-capable, and you have to have the kexec tools installed. That is not a problem in today's Linux distributions: both RHEL 5.x and SLES 10.x contain a kexec-tools package which you have to install, and their production kernels are capable of doing kernel resets. On SLES 10, you can check the running kernel configuration for the CONFIG_KEXEC variable.
zgrep CONFIG_KEXEC /proc/config.gz
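RHEL kernels don't provide /proc/config.gz, so there you can check the installed config file under /boot instead. A small sketch (the canned fallback file only exists so the snippet can run anywhere, even where no /boot/config-* is present):

```shell
# Check for kexec support in the running kernel's config. RHEL ships the
# config as /boot/config-<version>; fall back to a canned sample file so
# this sketch also runs where no such file exists.
cfg="/boot/config-$(uname -r)"
if [ ! -r "$cfg" ]; then
    cfg=/tmp/config.sample
    printf 'CONFIG_KEXEC=y\n' > "$cfg"
fi
grep CONFIG_KEXEC "$cfg"
```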


Kexec is controlled with command line program kexec. The command takes defined values for kernel to be booted, its initrd and kernel parameters and starts the kernel reset.

Friday, March 6, 2009

VMware ESX 4.0 aka vSphere 4.0 platform

The next major release of the VMware ESX platform is being prepared. The platform, newly called vSphere 4.0, is going to be based on six cornerstones which provide:
  1. vCompute - virtualization layer, hypervisor, live migration
  2. vStorage - storage management, replication
  3. vNetwork - network management, distributed switch, Cisco Nexus switch
  4. Availability - clustering, data protection
  5. Security - VMsafe APIs, vShield Zones
  6. Scalability - dynamic resource management, distributed power management
Furthermore, the new platform will support virtual machines with 8 virtual CPUs and 256 GB of virtual memory.

The second most important part of a virtual environment is centralized management. Today we know it as VMware VirtualCenter Server; in the future, it should be called vCenter Suite. The good news is that it will be available for Linux servers as well, so no more Windows licenses are required.

Wednesday, March 4, 2009

VCB, vcbMounter, vcbRestore ... updated

I have added another article dedicated to VMware VCB and backups over Samba or Windows shares. Here is updated list of them:
  1. VM identification - how to identify a virtual machine you intend to back up? The command vcbvmname is the answer.
  2. VM full backup - how to perform a full backup of the chosen virtual machine? The vcbmounter command can do it.
  3. VM full backup data access - how to retrieve data from the virtual machine's full backup? It is possible to mount the backup image with the mountvm command.
  4. VM file level backup - the vcbmounter command is able to perform file-level backup as well.
  5. VM backup over NFS - this article describes a simple scenario of virtual machine backup over NFS protocol.
  6. VM backup restore - it is important to know the process of restoring a virtual machine from the backup. You can use vcbrestore.
  7. VM backup with Samba or Windows share - the other approach how to perform backups of virtual machines is to use Samba or Windows shares instead of NFS server.

Friday, February 27, 2009

VCB basic usage - VM full backup with Samba

In the previous article about VMware VCB, I wrote about full backups to NFS shares. For completeness, I decided to write another one dedicated to backups to Samba or Windows shares.

The idea of backup is the same. Let's have a Samba server available at IP address 192.168.1.1. The exported directory for backups is backup-smb and the user which has write access to this share is backup.

Before we can continue, we need to allow smbclient to access the Samba server. You can do this from the VI client or directly from the ESX service console via the esxcfg-firewall command. First, let's check whether smbclient is allowed:
esxcfg-firewall -q smbClient
By default, the output of the command should be:
Service smbClient is blocked.
To reconfigure ESX firewall to allow smbclient access use the next command:
esxcfg-firewall -e smbClient
Now, you should be able to browse the server (the command asks for the user's password first):
smbclient -L 192.168.1.1 -U backup
The example command output follows (Samba server on SLES10):
Domain=[NAS] OS=[Unix] Server=[Samba 3.0.28-0.2-1625-SUSE-CODE10]

        Sharename       Type      Comment
        ---------       ----      -------
        profiles        Disk      Network Profiles Service
        backup-smb      Disk
        IPC$            IPC       IPC Service (Samba 3.0.28-0.2-1625-SUSE-CODE10)
Domain=[NAS] OS=[Unix] Server=[Samba 3.0.28-0.2-1625-SUSE-CODE10]
Now, we are ready to create a simple backup script:
#!/bin/sh

BACKUP_SERVER="192.168.1.1"
BACKUP_USER="backup"
BACKUP_PASS="backup"
SMB_SHARE="backup-smb"
MOUNT_DIR="/backup"

[ -d $MOUNT_DIR ] || mkdir -p "$MOUNT_DIR" || exit 1

VM_BACKUP="`vcbVmName -s any: | grep name: | cut -d':' -f2`"

if [ ! -z "$VM_BACKUP" ]; then
    smbmount //${BACKUP_SERVER}/$SMB_SHARE $MOUNT_DIR \
        -o username=${BACKUP_USER},password=$BACKUP_PASS || exit 1

    for VM in $VM_BACKUP; do
        vcbMounter -a name:$VM -r $MOUNT_DIR/$VM
    done

    umount $MOUNT_DIR
fi

exit 0
It is simple, isn't it? The code is almost the same as for backups over NFS. We added variables defining our Samba user and his password, and the mount command was exchanged for smbmount, the Samba CLI client. If you insist on using the mount command, replace the line mounting the backup-smb share with:
mount -t smbfs //${BACKUP_SERVER}/$SMB_SHARE $MOUNT_DIR \
-o username=${BACKUP_USER},password=$BACKUP_PASS || exit 1
That's all. In such simple backup scenarios, I prefer NFS because it is simple to set up and provides higher throughput than the SMB protocol. On the other hand, SMB provides a basic authentication mechanism (if you don't disable it).

Wednesday, February 18, 2009

VMware vCenter Converter 4.0 was released

The previous version of Converter stayed at 3.0.3 for a long time. The new standalone version is very similar to the one included in Virtual Infrastructure 3.5 (VI 3.5).

Previously, there were two editions available - Starter and Enterprise - where the second one is part of VI 3.5. Here are the additional features of the Enterprise edition compared to Starter:
  • it supports multiple migration jobs
  • it supports cold migration
  • it is part of VI3.5 only (particularly VirtualCenter server)
What does the latest revision bring us? It is free of charge, it has a larger set of supported operating systems as sources, and it allows you to select the target virtual disks. Newly, it can migrate sources with Red Hat, SUSE, or Ubuntu Linux. Furthermore, it is able to power off the source after the migration finishes. A more comprehensive comparison of version 4.0 and the version included in VI 3.5 is presented by this picture.

Tuesday, February 3, 2009

Licensing open source

I considered writing this article for a while because it doesn't fit any type of article I have published before, and it isn't my primary business to discuss various open source licenses here. The thing is, it is useful to understand their role, but it is often quite difficult to work out what they actually want to say. Sometimes I have a feeling you need a law degree to understand them.

You know the obvious questions like "why does it have to be GPLed?", "why is this license not compatible with that one?" or "why can't it be part of the Linux kernel?". You know that an open source license ensures the availability of source code which you can modify and redistribute. The true pitfalls begin to appear when you would like to integrate two products available under two different licenses. To make things clearer, I borrowed these two comprehensive schemes from chandanlog at Sun blogs. The first one presents the general attitude of open source licenses and a classical EULA to source code. The second one explains the differences between open source licenses; they are quite minor but may have unforeseen consequences.
Let's try to apply the licensing rules to the problem of releasing the ZFS filesystem with the Linux kernel. What's the problem? First, Sun owns some patent rights which prohibit such an action. Second, as the Linux kernel is GPLed, anything included has to be GPLed as well. ZFS is covered by the CDDL license, which has to be preserved. Herein, I think, lies the main reason for the incompatibility. But if I consider that there are other binary-only modules, like video drivers from ATI or NVIDIA, which are linked with the kernel via some sort of GPLed open source wrapper, why can't we do the same with ZFS?! The question is whether it would be legal.

The two practical schemes helped me understand the topic more deeply. The example with ZFS made the situation look complicated, and I needed to find something showing me that it is not. I hope you will find these graphical explanations as useful as I did. And check out chandanlog, who created them!

Wednesday, January 28, 2009

SLES 10 update - Service Pack 2

Our final step is to move the system from SP1 to SP2. I should mention that from now on you have the choice of using the zypper tool; the command syntax and parameters are almost the same. Nevertheless, I'm going to continue with rug.

The same holds for SLES 10 SP2: it is a separate product with a separate update source tree. Let's begin.
  1. Again, subscribe to SP2 update and install source and to available catalogs
    rug sa ftp://suse.mydom.com/update/SLES10-SP2-Updates/sles-10-i586 --type yum update-sp2
    rug sa ftp://suse.mydom.com/update/SLES10-SP2-Online/sles-10-i586 --type yum install-sp2
    rug sl

    # | Status | Type | Name | URI
    --+--------+------+------------+------------------------------------------------
    1 | Active | ZYPP | online | ftp://suse.mydom.com/install/i386/sles10
    2 | Active | YUM | update | ftp://suse.mydom.com/update/SLES10-Updates...
    3 | Active | YUM | update-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Updates...
    4 | Active | YUM | online-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Online...
    5 | Active | YUM | update-sp2 | ftp://suse.mydom.com/update/SLES10-SP2-Updates...
    6 | Active | YUM | online-sp2 | ftp://suse.mydom.com/update/SLES10-SP2-Online...


    rug sub online-sp2
    rug sub update-sp2
    rug ca

    Sub'd? | Name | Service
    -------+------------+-----------
    Yes | online | online
    Yes | update | update
    Yes | update-sp1 | update-sp1
    Yes | online-sp1 | online-sp1
    Yes | update-sp2 | update-sp2
    Yes | online-sp2 | online-sp2
  2. Perform update
    rug up -y

    Resolving Dependencies...

    The following packages will be installed:
    aaa_base 10-12.47 (ftp://suse.mydom.com/update/SLES10-SP2-Updates/sles-10-i586)
    ...
    ...
    Downloading Packages...
    ...
    Transaction...
    ...
    Finishing...
    Transaction Finished
  3. Move to SP2 product
    rug in -y -t patch product-sles10-sp2

  4. Verify the new version
    SPident

    CONCLUSION: System is up-to-date!
    found SLE-10-i386-SP2 + "online updates"
If you are not using a local update server but the official ones like nu.novell.com, you can still follow the same steps. It is actually simpler, because you don't have to add the new update and install sources by hand; just use the switch-update-server and move-to-sles10-sp1 or move-to-sles10-sp2 patches, which prepare the current system for the transition from GA to SP1 and from SP1 to SP2.
  1. Before you start update install switch-update-server patch and prepare the system
    rug in -y -t patch switch-update-server
    /usr/bin/switch-update-server

    rug sub SLES10-Updates
    rug in -y -t patch move-to-sles10-sp1
  2. Perform the similar steps for SP2
  3. Continue with update the same way as shown in the article
Perhaps it would now be interesting to compare the whole process with updating another enterprise distribution like RHEL: how difficult it is, and so on.

In the end, I would like to mention the main sources of information this article is based on. The official documentation for updating from SLES 10 GA to SP1 and from SP1 to SP2 is published at www.novell.com:
  1. How to update to SLES/SLED 10 SP1
  2. How to update to SLES/SLED 10 SP2