
Thursday, June 24, 2010

SLES 11 SP1 released

I have decided to write a brief summary of the new features and enhancements that arrive with the first service pack of Novell's SLES 11. I need to know the major differences between the GA and SP1 releases in my everyday work, and perhaps it will help you in the same way it helps me. The original post about SLES 11 GA is here. So, what's new?
  • it is based on GNU/Linux kernel 2.6.32
  • it provides a web-based YaST for remote management, called WebYaST
  • UEFI booting (useful with disks larger than 2 TB) is now supported on AMD64/Intel64
  • it includes many driver updates (e.g., QLogic/Emulex HBAs, Broadcom NICs)
  • it includes the new XEN 4.0
  • the KVM hypervisor is now fully supported - good news
  • it fully supports the latest enterprise Intel Xeon 5600 and 5700 processors, which can greatly improve XEN hypervisor performance by decreasing VM latency
  • finally, it contains all the security and bug fixes released since GA
It seems there weren't that many changes included in SP1, but I think XEN 4.0 or the KVM support alone is reason enough to move our GA installations to SP1. With XEN 4.0, we are able to benefit from these nifty features:
  • fault tolerance
  • memory overcommitment
  • USB paravirtualization and VGA pass-through
  • live snapshots and clones
  • 64 vCPUs per VM
  • 1TB RAM per XEN host
Now, it depends on how fast and how successfully Novell integrates these features into its management tools like YaST.

For more comprehensive coverage of SLES 11 SP1, check the official release notes at www.novell.com.

Tuesday, September 29, 2009

VMware Server 1.0.x library dependency problem

At the beginning of the year, I wrote this article about problems between the older VMware Server 1.0.x and newer Linux distributions. The problem is related to the VMware kernel modules, whose source code is not compatible with newer Linux kernels.

One thing surprised me, though. When I upgraded VMware Server from version 1.0.8 to 1.0.9, the VMware Server console stopped working. The new version was installed on the same system (openSUSE 11.1) as the old one, so I don't understand why. The important thing is that I found a solution. The new version produced these error messages when I tried to run the vmware command:
/usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1: version `GCC_4.2.0' not found (required by /usr/lib/libstdc++.so.6)
/usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1: version `GCC_4.2.0' not found (required by /usr/lib/libstdc++.so.6)
/usr/lib/vmware/bin/vmware: symbol lookup error: /usr/lib/libgio-2.0.so.0: undefined symbol: g_thread_gettime
I tried unsetting the environment variable that influences the behaviour of GTK2 applications:
unset GTK2_RC_FILES
When set, the variable points to the gtkrc files that define the user's GTK2 environment. Try it and I hope it will help.
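If you prefer not to touch your shell profile at all, you can strip the variable just for the VMware process. A minimal sketch, assuming the launcher lives at /usr/bin/vmware as in a default install:
# run the VMware Server console without GTK2_RC_FILES in its environment
env -u GTK2_RC_FILES /usr/bin/vmware
The -u option of env removes the variable only from the environment of this single command, so the rest of your GTK2 session keeps its theme settings.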

Wednesday, August 19, 2009

Linux rc.local script

Sometimes you need to run a few commands during your Linux server's startup and you don't want to waste time preparing a full init script. Common tasks are loading a kernel module, changing the speed of a network interface and so on.

Red Hat distributions provide the rc.local script for this task. You can find it in the /etc/rc.d directory. The script is executed after all the other init scripts, which is ensured by the S99local start links pointing to /etc/rc.d/rc.local:

/etc/rc.d/rc2.d/S99local
/etc/rc.d/rc3.d/S99local
/etc/rc.d/rc4.d/S99local
/etc/rc.d/rc5.d/S99local

SUSE distros like SLES or openSUSE provide a similar mechanism with two scripts. The before.local script should contain everything you want to run before a runlevel is entered. The after.local script works like Red Hat's rc.local: it contains the commands that should be executed after the runlevel is reached. The scripts don't exist by default, so you need to create them first in the /etc/init.d directory. They don't have to be marked executable.

One more difference: Red Hat's rc.local is executed only in runlevels 2, 3, 4 and 5 and is ignored in single-user mode, while SUSE's after.local and before.local are interpreted in all runlevels, including runlevel 1.
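As an illustration, here is a minimal /etc/init.d/after.local covering the two tasks mentioned above. The module name and the interface are only placeholders for your own setup, and the same commands would work in Red Hat's /etc/rc.d/rc.local:
#!/bin/sh
# /etc/init.d/after.local - executed after the target runlevel is reached (SUSE)
# load an example kernel module (replace with the module you really need)
modprobe bonding
# force an example NIC to 100 Mbit/s full duplex (adjust interface and values)
ethtool -s eth0 speed 100 duplex full autoneg off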

Tuesday, March 24, 2009

SLES 11 released

Good news for SLES fans: the next major release of the product came out today. Together with SLES 11, the enterprise-ready desktop SLED 11 was released. Two other new products were announced as well:
  • SUSE Linux Enterprise High Availability Extension - the product integrates the OCFS2 cluster filesystem, the cluster-aware volume manager cLVM2, the distributed replicated block device DRBD and the Pacemaker cluster stack with the OpenAIS messaging and membership layer. The included DRBD version 8 supports active-active replication.
  • SUSE Linux Enterprise Mono Extension - the product provides an open-source, cross-platform .NET framework.
What other benefits does SLES 11 bring? As you can see above, it is more modular - some features were split off into separate products. The rest follows:
  • it is based on GNU/Linux kernel 2.6.27
  • in addition to AppArmor, it is SELinux ready
  • it provides OFED 1.4 (more about it here)
  • package management is based on the fast update stack ZYpp
  • SLES 11 is greener - it supports tickless idle, which keeps the CPU in a power-saving state longer, and it provides more granular power profiles
  • it supports swapping over NFS for diskless clients
  • it supports partitioning a multiprocessor machine with CPUsets
  • the virtualization layer is based on Xen 3.3
  • it is optimised for the VMware ESX, MS Hyper-V and Xen hypervisors
  • the default filesystem is ext3
  • it supports kexec, kdump and SystemTap
  • it contains many other enhancements in asynchronous I/O, MPIO, NFS and iSCSI
The official product documentation isn't available yet. The release notes are here.

Wednesday, January 28, 2009

SLES 10 update - Service Pack 2

Our final step is to move the system from SP1 to SP2. I should mention that from this point on you could also use the zypper tool; its command syntax and parameters are almost the same. Nevertheless, I'm going to continue with rug.

The same holds for SLES 10 SP2: it is a separate product with a separate update source tree. Let's begin.
  1. Again, subscribe to the SP2 update and install sources and to the available catalogs
    rug sa ftp://suse.mydom.com/update/SLES10-SP2-Updates/sles-10-i586 --type yum update-sp2
    rug sa ftp://suse.mydom.com/update/SLES10-SP2-Online/sles-10-i586 --type yum install-sp2
    rug sl

    # | Status | Type | Name | URI
    --+--------+------+------------+------------------------------------------------
    1 | Active | ZYPP | online | ftp://suse.mydom.com/install/i386/sles10
    2 | Active | YUM | update | ftp://suse.mydom.com/update/SLES10-Updates...
    3 | Active | YUM | update-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Updates...
    4 | Active | YUM | online-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Online...
    5 | Active | YUM | update-sp2 | ftp://suse.mydom.com/update/SLES10-SP2-Updates...
    6 | Active | YUM | online-sp2 | ftp://suse.mydom.com/update/SLES10-SP2-Online...


    rug sub online-sp2
    rug sub update-sp2
    rug ca

    Sub'd? | Name | Service
    -------+------------+-----------
    Yes | online | online
    Yes | update | update
    Yes | update-sp1 | update-sp1
    Yes | online-sp1 | online-sp1
    Yes | update-sp2 | update-sp2
    Yes | online-sp2 | online-sp2
  2. Perform update
    rug up -y

    Resolving Dependencies...

    The following packages will be installed:
    aaa_base 10-12.47 (ftp://suse.mydom.com/update/SLES10-SP2-Updates/sles-10-i586)
    ...
    ...
    Downloading Packages...
    ...
    Transaction...
    ...
    Finishing...
    Transaction Finished
  3. Move to SP2 product
    rug in -y -t patch product-sles10-sp2

  4. Verify the new version
    SPident

    CONCLUSION: System is up-to-date!
    found SLE-10-i386-SP2 + "online updates"
If you are not using a local update server but the official ones like nu.novell.com, you can still follow the same steps. It is actually simpler because you don't have to add the new update and install sources by hand - you just use the switch-update-server patch and the move-to-sles10-sp1 or move-to-sles10-sp2 patches, which prepare the current system for the transition from GA to SP1 and from SP1 to SP2.
  1. Before you start the update, install the switch-update-server patch and prepare the system
    rug in -y -t patch switch-update-server
    /usr/bin/switch-update-server

    rug sub SLES10-Updates
    rug in -y -t patch move-to-sles10-sp1
  2. Perform similar steps for SP2
  3. Continue with the update the same way as shown in the article
Perhaps it would now be interesting to compare the whole process with updating another enterprise distribution like RHEL - how difficult it is and so on.

Finally, I would like to mention the main sources of information this article is based on. The official documentation for updating SLES 10 GA to SP1 and SP1 to SP2 is published at www.novell.com:
  1. How to update to SLES/SLED 10 SP1
  2. How to update to SLES/SLED 10 SP2

Tuesday, January 20, 2009

SLES 10 update - Service Pack 1

Keep in mind that SLES 10 GA and SLES 10 SP1 are treated as separate products. We need to subscribe to the new installation and update sources and repeat the previous steps with a few small additions. My SLES 10 SP1 installation source is part of the update server, which is synchronized from the official Novell update server with the YUP proxy.
  1. Subscribe to the SLES 10 SP1 installation and update sources, then subscribe to the catalogs
    rug sa ftp://suse.mydom.com/update/SLES10-SP1-Updates/sles-10-i586 --type yum update-sp1
    rug sa ftp://suse.mydom.com/update/SLES10-SP1-Online/sles-10-i586 --type yum install-sp1
    rug sl

    # | Status | Type | Name | URI
    --+--------+------+------------+------------------------------------------------
    1 | Active | ZYPP | online | ftp://suse.mydom.com/install/i386/sles10
    2 | Active | YUM | update | ftp://suse.mydom.com/update/SLES10-Updates...
    3 | Active | YUM | update-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Updates...
    4 | Active | YUM | online-sp1 | ftp://suse.mydom.com/update/SLES10-SP1-Online...

    rug sub online-sp1
    rug sub update-sp1
    rug ca

    Sub'd? | Name | Service
    -------+------------+-----------
    Yes | online | online
    Yes | update | update
    Yes | update-sp1 | update-sp1
    Yes | online-sp1 | online-sp1


  2. First, install the required ZENworks Management Agent patch, otherwise there is a risk that rug won't work properly
    rug in -y -t patch slesp1o-liby2util-devel
  3. Restart the zmd service and perform the update
    rczmd restart
    rug up -y

    Resolving Dependencies...

    The following packages will be installed:
    aaa_base 10-12.33.3 (ftp://suse.mydom.com/update/SLES10-SP1-Online/sles-10-i586)
    ...
    Downloading Packages...
    ...
    Transaction...
    ...
    Finishing...
    Transaction Finished
  4. Finally, move the system to SP1 version
    rug in -y -t patch product-sles10-sp1
  5. Check the system version
    SPident

    CONCLUSION: System is up-to-date!
    found SLE-10-i386-SP1 + "online updates"
The result is a system with SLES 10 SP1 and all required updates applied. It is recommended to reboot such a system to apply all the included changes (especially the new kernel).
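A quick way to confirm that the reboot is really needed is to compare the running kernel with the newest installed one before restarting; the package query is kept generic on purpose because the kernel flavour (default, smp, xen, ...) depends on your installation:
# running kernel version
uname -r
# kernel packages installed after the update
rpm -qa 'kernel-*'
# reboot into the new kernel
shutdown -r now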

Wednesday, January 14, 2009

SLES 10 update - GA update

Regular updates belong among the basic tasks of Linux system administration. Each distribution has its own way of performing them. Updating SLES 10 is not as straightforward as many of us expect, so I decided to write a summary of the procedure.

I'll be doing it with the rug command, not via graphical YaST. From SLES 10 SP1 on, you can use the zypper command, which is much faster than rug and fully independent of the Novell ZENworks Linux Management Agent. If you don't use Novell ZENworks for managing your Linux systems, you can afford to disable the zmd service and use the zypper tool only. To make the update faster I'll be using a local update server at the URL ftp://suse.mydom.com (you can deploy your own with YUP - Yum Update Proxy).
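If you do decide to rely on zypper only, disabling the agent is a two-liner; rczmd is the usual SUSE shortcut to the zmd init script and chkconfig removes the service from the boot runlevels:
# stop the ZENworks management daemon and keep it from starting at boot
rczmd stop
chkconfig zmd off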

Let's begin with a fresh installation of SLES 10 GA (the i386 platform in particular, but it's the same for the others).
  1. First, identify the current system
    SPident

    CONCLUSION: System is up-to-date!
    found SLES-10-i386-current
  2. Subscribe to SLES 10 GA installation source (it may be required for dependencies during update)
    rug sa ftp://suse.mydom.com/install/i386/sles10 --type zypp online
  3. Subscribe to SLES 10 update source
    rug sa ftp://suse.mydom.com/update/SLES10-Updates/sles-10-i586 --type yum update
  4. Check subscriptions
    rug sl

    # | Status | Type | Name | URI
    --+--------+------+--------+----------------------------------------------------
    1 | Active | ZYPP | online | ftp://suse.mydom.com/install/i386/sles10
    2 | Active | YUM | update | ftp://suse.mydom.com/update/SLES10-Updates...
  5. Check available catalogs
    rug ca

    Sub'd? | Name | Service
    -------+--------+--------
    | update | update
    | online | online
  6. Subscribe to the catalogs and check them
    rug sub update
    rug sub online
    rug ca

    Sub'd? | Name | Service
    -------+------------+-----------
    Yes | update | update
    Yes | online | online
  7. Update SLES 10 GA system
    rug up -y

    Resolving Dependencies...

    The following packages will be installed:
    apache2 2.2.3-16.2 (ftp://suse.mydom.com/update/SLES10-Updates/sles-10-i586)
    ...
    ...
    Downloading Packages...
    ...
    Transaction...
    ...
    Finishing...
    Transaction Finished
  8. Check the version after the update has finished successfully
    SPident

    CONCLUSION: System is up-to-date!
    found SLES-10-i386-current + "online updates"
Now our SLES 10 system is ready for the transition to Service Pack 1. Reboot the system before we proceed.

Thursday, January 8, 2009

VMware Server 1.0.8 on openSUSE 11.1

I decided to upgrade my laptop from the almost "prehistoric" openSUSE 10.1 to the newest version 11.1. It was quite successful, but I had to resolve an issue with VMware Server 1.0.8, which I use a lot in my work.

The whole configuration process crashed during the compilation of the VMware kernel modules. The kernel version in the new openSUSE is 2.6.27.7, and since version 1.0.8 ships no precompiled modules for it, they need to be recompiled first. Don't forget to install the kernel-source, make, gcc and patch packages. Then configure the installed kernel sources with make cloneconfig so they correspond to the running kernel and platform. Finally, run the VMware Server configuration. Everything follows here:
zypper in -y kernel-source make gcc patch
cd /usr/src/linux
make mrproper; make cloneconfig
vmware-config.pl
But the last command produces these errors:
Building the vmmon module.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config2/vmmon-only'
make -C /lib/modules/2.6.27.7-9-pae/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-2.6.27.7-9-obj/i386/pae'
make -C ../../../linux-2.6.27.7-9 O=/usr/src/linux-2.6.27.7-9-obj/i386/pae/. modules
CC [M] /tmp/vmware-config2/vmmon-only/linux/driver.o
In file included from /tmp/vmware-config2/vmmon-only/./include/x86.h:20,
from /tmp/vmware-config2/vmmon-only/./include/machine.h:24,
from /tmp/vmware-config2/vmmon-only/linux/driver.h:15,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:49:
/tmp/vmware-config2/vmmon-only/./include/x86apic.h:79:1: warning: "APIC_BASE_MSR" redefined
In file included from include2/asm/fixmap_32.h:29,
from include2/asm/fixmap.h:5,
from include2/asm/apic.h:9,
from include2/asm/smp.h:13,
from /usr/src/linux-2.6.27.7-9/include/linux/smp.h:28,
from /usr/src/linux-2.6.27.7-9/include/linux/topology.h:33,
from /usr/src/linux-2.6.27.7-9/include/linux/mmzone.h:687,
from /usr/src/linux-2.6.27.7-9/include/linux/gfp.h:4,
from /usr/src/linux-2.6.27.7-9/include/linux/kmod.h:22,
from /usr/src/linux-2.6.27.7-9/include/linux/module.h:13,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:12:
include2/asm/apicdef.h:134:1: warning: this is the location of the previous definition
In file included from /tmp/vmware-config2/vmmon-only/./include/machine.h:24,
from /tmp/vmware-config2/vmmon-only/linux/driver.h:15,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:49:
/tmp/vmware-config2/vmmon-only/./include/x86.h:830:1: warning: "PTE_PFN_MASK" redefined
In file included from include2/asm/paravirt.h:7,
from include2/asm/irqflags.h:55,
from /usr/src/linux-2.6.27.7-9/include/linux/irqflags.h:57,
from include2/asm/system.h:11,
from include2/asm/processor.h:17,
from /usr/src/linux-2.6.27.7-9/include/linux/prefetch.h:14,
from /usr/src/linux-2.6.27.7-9/include/linux/list.h:6,
from /usr/src/linux-2.6.27.7-9/include/linux/module.h:9,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:12:
include2/asm/page.h:22:1: warning: this is the location of the previous definition
In file included from /tmp/vmware-config2/vmmon-only/linux/vmhost.h:13,
from /tmp/vmware-config2/vmmon-only/linux/driver.c:71:
/tmp/vmware-config2/vmmon-only/./include/compat_semaphore.h:5:27: error: asm/semaphore.h: No such file or directory
/tmp/vmware-config2/vmmon-only/linux/driver.c:146: error: unknown field 'nopage' specified in initializer
/tmp/vmware-config2/vmmon-only/linux/driver.c:147: warning: initialization from incompatible pointer type
/tmp/vmware-config2/vmmon-only/linux/driver.c:150: error: unknown field 'nopage' specified in initializer
/tmp/vmware-config2/vmmon-only/linux/driver.c:151: warning: initialization from incompatible pointer type
/tmp/vmware-config2/vmmon-only/linux/driver.c: In function 'LinuxDriver_Ioctl':
/tmp/vmware-config2/vmmon-only/linux/driver.c:1670: error: too many arguments to function 'smp_call_function'
make[4]: *** [/tmp/vmware-config2/vmmon-only/linux/driver.o] Error 1
make[3]: *** [_module_/tmp/vmware-config2/vmmon-only] Error 2
make[2]: *** [sub-make] Error 2
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/linux-2.6.27.7-9-obj/i386/pae'
make: *** [vmmon.ko] Error 2
make: Leaving directory `/tmp/vmware-config2/vmmon-only'
Unable to build the vmmon module.
The compilation of the vmmon module crashed because of an incompatibility between the kernel version and the shipped vmmon sources. The solution is to download the updated module package vmware-update-2.6.27-5.5.7-2 and apply it:
wget http://www.insecure.ws/warehouse/vmware-update-2.6.27-5.5.7-2.tar.gz
tar zxfv vmware-update-2.6.27-5.5.7-2.tar.gz
cd vmware-update-2.6.27-5.5.7-2
./runme.pl
The update patches all the required modules and the vmware-config.pl configuration script. After that, the compilation of the vmmon module succeeds and you can finish the configuration. I hope it will help you.

Tuesday, October 7, 2008

SLES10 update and SSL certificate problem

Have you ever needed to update a remote SLES10 system from your local update server (e.g. a YUP server)? There may be many reasons for such a situation. For example, the remote system may have unstable Internet connectivity to the Novell servers, or no direct connectivity at all and only be able to reach your local update server over a VPN. You can imagine other situations, of course.

Let's suppose our update server is reachable from the remote site over HTTPS at the URL https://update.domain.tld/path/. The update source is of YUM type and we want to update the system with the zypper command. First, we need to add the update server as a source. If the update server's SSL certificate is signed by a well-known certification authority, then you don't have to worry. You can use the following command to add the update server to the update sources:
zypper subscribe https://update.domain.tld/path/update update
But if you generated your own certification authority or a self-signed server certificate, then you may see these errors:
Curl error for 'https://update.domain.tld/path/repodata/repomd.xml':
Error code:
Error message: SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
The message is comprehensible: it says that the server certificate is untrusted and can't be verified against the known CA certificates. Simply put, the server certificate is signed by your own, untrusted CA or it is self-signed. The message only warns you that there may be an attempt at a man-in-the-middle attack.
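Before trusting anything, it is worth looking at the certificate the server actually presents. A quick check with openssl, using the example hostname from above:
# show subject, issuer and validity of the certificate served on port 443
echo | openssl s_client -connect update.domain.tld:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -dates
If the issuer really is your own CA, you can safely make curl trust that CA as described below.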

The curl application uses a CA bundle to verify server certificates. The bundle is typically stored in the /usr/share/curl/curl-ca-bundle.crt file. If you want to make your own CA certificate trusted, then concatenate its PEM content to the end of the file like this:
cat ca.crt >> /usr/share/curl/curl-ca-bundle.crt
After this command everything begins to work and the update server URL can be added to the update sources. Then the update may start:
zypper update
I should also mention that you will have a similar problem if you use the rug command. Even after applying the previous steps, rug still produces an error about SSL certificate verification failure; I suspect rug doesn't use curl to access the update server. So, does anybody know how to resolve it when using rug?

Wednesday, September 3, 2008

VMware server 1.x and GNOME library issue

If you install VMware Server 1.x on your Linux workstation, you may encounter a dependency issue between the bundled VMware libraries and the available system libraries, like this (lines are wrapped):
(vmware:30311): libgnomevfs-WARNING **:
Cannot load module `/opt/gnome/lib/gnome-vfs-2.0/modules/libfile.so'
(/usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1:
version `GCC_4.2.0' not found (required by /usr/lib/libstdc++.so.6))

The above message can be triggered by adding a new virtual disk to a virtual machine or by assigning an ISO image to its virtual CD-ROM. Such operations end with the error displayed in the parent console. The reason this happens is that the bundled libraries are compiled with an older GCC than the system libraries. The above error was produced on SLES 10 SP1, which includes GCC 4.1.2; the installed VMware Server was version 1.0.6 and, in my opinion, it was compiled with GCC 3.x.

The resolution is to set the environment variable VMWARE_USE_SHIPPED_GTK to "yes", export it and then run the vmware command:
  1. VMWARE_USE_SHIPPED_GTK=yes
  2. export VMWARE_USE_SHIPPED_GTK
  3. vmware &
I recommend placing the variable in your startup script, e.g. in your ~/.profile or ~/.bash_profile.
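A minimal sketch of both options - exporting the variable only for a single run, or persisting it in the profile (the profile path is just the usual default):
# one-shot: set the variable only for this vmware invocation
VMWARE_USE_SHIPPED_GTK=yes vmware &
# persistent: export it from the login profile
echo 'export VMWARE_USE_SHIPPED_GTK=yes' >> ~/.profile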

Thursday, May 22, 2008

New RHEL 5.2, new SLES 10 sp2

Wow, Red Hat released the second update of RHEL 5 today. Read the release notes and the Red Hat NEWS. The most important enhancements are related to virtualization and InfiniBand technology; RHEL 5.2 now contains OFED version 1.3. More about it in my previous article.

Further, Novell released the second service pack for SLES 10, announced in the Novell NEWS. And what's new here? The second service pack delivers XEN version 3.2, an updated Heartbeat 2 and the OCFS2 filesystem. More about the enhancements is here. Official release notes are available as well.

So, let's go download and test the products.

Tuesday, May 20, 2008

SLES life cycle explanation

SUSE Linux Enterprise Server, or SLES for short, is a Linux distribution widely deployed in enterprise environments. The first version, SLES 8, was released in April 2001. Its successor, SLES 9, came three years later in 2004, and the latest available revision, SLES 10, was released in the middle of 2006. SLES 11 is planned for the current year, 2008.

Let's suppose we have been using SLES from its starting point. We began by installing SLES 8 on our servers, then upgraded some of them to SLES 9, and finally preinstalled a few new servers with SLES 10. So we began in 2001 and ended in 2008! Now, have you ever thought about your servers' lifetime? Or have you ever realised that your older installations will have to be migrated to the most recent SLES release because of maintenance and support cutoffs?

The product support life cycle was defined more precisely in 2005, after Novell's acquisition of SUSE. Perfect! With its help, we can figure out how long our installations will be supported. I find it interesting because Novell split the life cycle into three stages:
  1. general support (gs) - bug fixes, security patches and product enhancements are available, service packs may be released and a customer may request installation or configuration support. It is provided for at least five years.
  2. extended support (es) - only the most critical bugs are fixed, and only if the fix is considered appropriate and strategic; security patches are always available. It is provided for at least two years after general support ends.
  3. self-support (ss) - resources like the knowledge base and discussion forums are available during all stages until the product's end, for a minimum of 10 years.
We shouldn't forget that self-support is free of charge, while general and extended support are typically available for a fee. All in all, we have a defined minimum time for which Novell will take care of the product. If you want, you can find more accurate periods here. More about software maintenance is written here and here.

Now, let's try to apply the life cycle policies to the SLES releases and compare the results with official data from Novell:
  1. SLES 8 - released in 2001, gs ends in 2006, es in 2008 and ss in 2011
    official dates: gs ends in 2007, es is not offered and ss in 2012
  2. SLES 9 - released in 2004, gs ends in 2009, es in 2011 and ss in 2014
    official dates: gs ends in 2009, es in 2011 and ss in 2014
  3. SLES 10 - released in 2006, gs ends in 2011, es in 2013 and ss in 2016
    official dates: gs ends in 2011, es in 2013 and ss in 2016
The only difference is for SLES 8, and it is caused by the acquisition in 2003 and the redefinition of the support policies. I hope the article will help you with proper life-cycle planning of your SLES-based servers. It is difficult to decide when to move to a newer release, but it is really important to do it. Otherwise you risk your servers staying unsupported, and the chance of them being abused will grow. And that's not the goal.

Wednesday, April 23, 2008

MySQL and PostgreSQL server upgrade pitfalls - I

We are hosting a few applications on our web hosting server that depend on PostgreSQL and MySQL databases. Unfortunately, circumstances forced us to upgrade the server from a SLES9 installation to the newer SLES10.

Let's compare SLES10 and its predecessor with respect to the databases they contain. SLES10 ships MySQL server 5.0.x while SLES9 is based on version 4.0.x; by the way, MySQL 5.0.x was one of the requirements that led us to the upgrade decision. Further, SLES10 ships PostgreSQL server 8.1.x while SLES9 contains version 7.4.x.

So I needed to move the data and related metadata from the old installation to the new one. Of course, I decided to test the scenario with data from a backup. What problems did I have to solve? First, the schema of the administration database mysql, which stores user privileges and so on, changed. Version 5 added some new fields, e.g. the user table gained fields like Show_view_priv, which aren't available in previous versions. So a simple backup command like the following one couldn't work:

mysqldump --all-databases | gzip > /backup/mysql.gz

The reason the dump wasn't usable is that it uses incomplete INSERT statements like this:

INSERT INTO user VALUES ('localhost', 'test', 'test', ...);

Such a statement tries to insert 31 values, but the new schema of the user table has 37 columns! So the statement fails with the error:

#1136 - Column count doesn't match value count at row 1

To bypass this behaviour I had to fix the backup command to use complete INSERT statements:

mysqldump -c --all-databases | gzip > /backup/mysql.gz


After the change (only the -c parameter was added), the MySQL dump contains complete INSERT statements:

INSERT INTO user (Host, User, ...) VALUES ('localhost', 'test', ...);

Such a statement is accepted by MySQL 5.0.x as well because it explicitly defines which fields to insert. The remaining fields stay empty or are assigned their default values.
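For completeness, this is roughly how I restore such a dump on the new server. The mysql_upgrade step is my own addition here - it is available from MySQL 5.0.19 on and checks and repairs the tables, including the mysql privilege tables, after the import:
# import the complete-INSERT dump on the SLES10 box
gunzip -c /backup/mysql.gz | mysql -u root -p
# check and repair all tables after the import (use mysql_fix_privilege_tables on older 5.0 builds)
mysql_upgrade -u root -p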

I think it's quite useful to keep possible backward incompatibilities between versions in mind and to check whether the related tools, e.g. for database backups, provide a way to deal with them.

I had to solve a pretty similar thing with PostgreSQL, but it was worse because the problem was in the user data and the syntax of the corresponding COPY/INSERT statements. I will write about it next time.

Tuesday, February 12, 2008

SLES10 SP1 and pam session group errors

I was configuring a new backup server for our customer and wanted to integrate it into the running LDAP infrastructure, so I intended to configure it as an LDAP client and join it to the customer's LDAP server.

The backup server is based on SLES10 SP1, and for such basic configuration tasks it is equipped with the YaST configurator. I can only recommend it if you don't want to waste time on simple things! Configuring a server as an LDAP client is really straightforward: you don't need to edit any config files like /etc/ldap.conf, /etc/nsswitch.conf or the PAM config files, nor remember exactly what to write where. Just fill in the proper options like the LDAP server address, the LDAP base DN, whether to use SSL/TLS and the LDAP protocol version, and confirm it. The screenshot illustrates these options.


But sometimes trouble happens. After finishing the above process (remotely, of course), everything seemed to work as I expected. I was able to see the LDAP users, which was the main goal. Perfect!

The first problem I noticed was that I was no longer able to connect to the server remotely via ssh. Debugging the connection didn't help me. Why?!? Good question!

I had to inspect the server locally, and I found the following errors in /var/log/messages:
  • sshd[8351]: Accepted publickey for root from A.B.C.D port 59203 ssh2
  • sshd[8353]: pam_warn(sshd:session): function=[pam_sm_open_session] service=[sshd] terminal=[/dev/pts/0] user=[xxx] ruser=[] rhost=[yyy]
  • error: PAM: pam_open_session(): Cannot make/remove an entry for the specified session
The errors are related to the sshd service and were the result of the unsuccessful connection. Another malfunctioning service was the cron daemon, and its errors were identical.

It was clear that something was wrong with the PAM configuration of the services. In SLES10 and other distros, PAM modules handle the authentication, account and session processing of most services. This behaviour of the sshd daemon is controlled by one option in the /etc/ssh/sshd_config file:
  • UsePAM yes
So I decided to try turning it off to make sure I was going in the right direction. The sshd service started working again, but I still wasn't sure what was wrong. And what could I do with the cron service to bypass the PAM modules?

I realized that the only thing I had changed recently was the LDAP client configuration. I tried to bring the system back to its previous state without it, but that didn't help. That means that when I configured the LDAP client with YaST, some operation wasn't successful. Unfortunately, as I mentioned, the LDAP client configuration is straightforward and only a few config files need to change. Of course, you need to have the required packages with binaries and libraries installed.

I took a look at the configuration files and they seemed to be fine. Only the /etc/pam.d/common-session file didn't contain any lines. This file is shared by the other PAM config files and is included from them. So, how to check its contents? Remember to use the rpm command in such situations. To check the integrity of the pam package I ran:
  • rpm -V pam
It showed me that the file had been changed:
  • S.5....T c /etc/pam.d/common-session
The config file was different from the original one. The difference was these two missing lines:
  • session required pam_limits.so
  • session required pam_unix2.so
Finally, I replaced the modified file with the one from the installation source, turned PAM support back on and checked the services. They started working again.
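One way to pull the pristine file out of the installation source is rpm2cpio; the media path and package file name below are only examples, so adjust them to the pam package on your own media:
cd /tmp
# extract just the one config file from the pam package into /tmp/etc/pam.d/
rpm2cpio /media/SLES10SP1/suse/i586/pam-*.i586.rpm | cpio -idmv './etc/pam.d/common-session'
cp ./etc/pam.d/common-session /etc/pam.d/common-session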

What's the lesson? Don't forget to use strong tools like rpm, and remember that simple things can go wrong too.

Tuesday, December 18, 2007

How to manage services on RHEL/SLES identically?

If you are lazy enough, just use the service and chkconfig commands. Each distribution has its own mechanism for managing services and their init scripts from the command line and the GUI. The service command is common to both of them and provides a way to pass the right arguments to the proper init script and run an action on the related service, e.g. check its status, restart it and so on. If you want to configure the runlevels a service should run in, use the chkconfig command. The command shares a few arguments between our two platforms.

On SLES you can use a symbolic link with the "rc" prefix pointing to the corresponding init script. From the GUI you can use the YaST configuration tool and do the work with the mouse, or you can run an init script directly. For configuring runlevels you can choose the insserv command. Short examples will explain the usage of these commands on the cron (crond on RHEL) service:
  • to check the status of cron service run
    1. rccron status
    2. /etc/init.d/cron status
    3. service cron status
  • to start the cron service (the same holds for stop or restart action) run
    1. /etc/init.d/cron start
    2. rccron start
    3. service cron start
  • to enable the cron service in runlevels defined in the header of its init script run
    1. insserv cron
    2. chkconfig --add cron
  • to enable the cron service in runlevels 2, 3 and 5 run
    1. chkconfig --level 235 cron on
RHEL provides the ntsysv text-based configurator. Its functionality is the same as that of the chkconfig command, but more comfortable. From the GUI, use the system-config-services command. Now, the same examples follow:
  • to check the status of crond service
    1. /etc/init.d/crond status
    2. service crond status
  • to start the crond service (the same holds for stop or restart action)
    1. /etc/init.d/crond start
    2. service crond start
  • to enable the crond service in runlevels defined in the header of its init script run
    1. ntsysv
    2. chkconfig --add crond
  • to enable the crond service in runlevels 2, 3 and 5 run
    1. chkconfig --level 235 crond on
I must point out that GUI usage was omitted from the examples for an obvious reason - it's straightforward. It's interesting to compare both systems and find out that their services are manageable via common commands. I'm sure everybody knows how chkconfig works, but did you know about the service command? It was hidden from me until now. I will use it.
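Since both distributions ship service and chkconfig, the common part can even be wrapped in a tiny helper. A minimal sketch - the script name and the runlevel choice are mine, not something either vendor provides:
#!/bin/sh
# restart-and-enable.sh SERVICE - restart a service and enable it in runlevels 2, 3 and 5
# works identically on RHEL and SLES because both provide service and chkconfig
SERVICE="$1"
service "$SERVICE" restart
chkconfig --level 235 "$SERVICE" on
chkconfig --list "$SERVICE"
Call it as ./restart-and-enable.sh cron on SLES or ./restart-and-enable.sh crond on RHEL.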