
Tuesday, May 17, 2011

CentOS 6?

I'm a big fan of the CentOS project. I use it in production and recommend it to others as an enterprise-ready Linux distro. I have to admit I was quite disappointed by the behaviour of the project developers, who weren't able to tell the community why the upcoming releases were, and still are, so overdue. I was used to downloading CentOS images one or two months after the corresponding RHEL release was announced. The situation changed with RHEL 5.6, which has been available since January 2011, while the corresponding CentOS release didn't appear until April 2011. It took about three months to release it instead of the usual one or two. By the way, the main new features in RHEL 5.6 are:
  • full support for the EXT4 filesystem (included in previous releases as a technical preview)
  • new version 9.7 of the BIND nameserver, supporting NSEC3 resource records in DNSSEC and new cryptographic algorithms in DNSSEC and TSIG
  • new version 5.3 of the PHP language
  • the SSSD daemon, centralizing identity management and authentication
More details on RHEL 5.6 are officially available here.

A similar, or perhaps worse, situation surrounded the release date of CentOS 6. As you know, RHEL 6 has been available since November 2010. I considered CentOS 6 almost dead after reading about transitions to Scientific Linux, or about purchasing support from Red Hat and migrating CentOS installations to RHEL. But according to this schedule, the people around CentOS seem to be working hard again, and CentOS 6 should be available at the end of May. I hope the project will continue, as I don't know of a better alternative to RHEL (a RHEL clone) than CentOS. The question is how the whole, in my opinion unnecessary, situation will influence the project's reputation.

Tuesday, May 3, 2011

Quickly - persistent modules loading on RHEL

The kernel modules required to boot the system are part of an initial ramdisk which is automatically loaded into memory by the boot loader. The ramdisk contains enough modules to mount the root filesystem and to initialize essential devices like the keyboard, the console or various expansion cards. The boot process then continues by running the init process.

During the next phase, the other modules referenced by the operating system are loaded automatically. The modules are called by their aliases, which are specified in the /etc/modprobe.conf configuration file. A typical alias is e.g. eth0 for a network interface card or usb-controller for a USB controller.

If we need to load a specific module during the system boot and there isn't a way to reference it, we have a few choices:
  • Place the particular modprobe command in the /etc/rc.d/rc.local script, which is called at the end of the whole boot process. But this phase may be too late.
  • Or better, place the command in the /etc/rc.modules file, which is read and executed by the /etc/rc.d/rc.sysinit initialization script during the system initialization phase. This loads the modules as soon as possible.
The /etc/rc.modules file does not exist by default, so first create it and make it executable. I think the first method is commonly used by many of us, but the second one is, in my opinion, more systematic.
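The second approach can be sketched as follows; the loaded module (ip_conntrack) is only an illustrative example:

```shell
# Create /etc/rc.modules, which rc.sysinit executes early during boot,
# and make it executable; the module name below is just an example.
cat <<'EOF' > /etc/rc.modules
#!/bin/sh
modprobe ip_conntrack
EOF
chmod +x /etc/rc.modules
```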

Tuesday, January 18, 2011

YUM download only mode

How many times have I been in a situation where I needed to update a server running RHEL, but I wasn't on site and had no reliable way to reboot the server after installing a new kernel or glibc package? Yes, I have a test environment and I test the updates in it, but many installations are too critical to just run yum update -y and then shutdown -r now. On top of that, there are the well-known Murphy's laws, which are able to cause more damage than we can imagine.

Instead of remotely figuring out why the server is suddenly unresponsive, I try to prepare an offline update archive in advance (assuming there isn't an update server available, but that is another situation) and then apply it during a site visit.

As I'm talking about RHEL, I'm using YUM, the Yellowdog Updater Modified. On a RHEL 5.x system, this tool is able to download updates locally without installing them. It only requires installing a download plugin, which is part of the yum-downloadonly package. Install it with

yum install yum-downloadonly

The next lines contain common commands that I use for downloading updates:

yum install PACKAGE_NAME -y --downloadonly
yum update -y --downloadonly

If we have a RHEL 4.x server, this package isn't available and we need to install another package called yum-utils, which contains a similar tool, yumdownloader.

yum install yum-utils -y

Here is how to use the tool:

yumdownloader PACKAGE_NAME

If we want to download all the available updates with yumdownloader, we need to get a list of all packages with yum check-update and then pass it to yumdownloader. You can do it from the shell with sed, cut, awk or whatever you prefer:

for PKG in `yum -q check-update | cut -d' ' -f1`; do
    yumdownloader $PKG
done
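The extraction of the package names can also be sketched with awk; the sample check-update style output embedded below is only illustrative:

```shell
# Print the first column (the package name) of each three-field line,
# skipping blank lines; the sample string stands in for real
# `yum check-update` output.
sample='kernel.x86_64  2.6.18-194.el5  updates
glibc.i686  2.5-49  updates'
printf '%s\n' "$sample" | awk 'NF==3 {print $1}'
```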
For a more detailed description of the tools and their parameters, have a look at their man pages.

Tuesday, August 31, 2010

Red Hat Enterprise Linux 5.5 - what's new?

It's been a few months since RHEL 5.5 was released (March 2010). Despite this, I would like to point out the major changes and additions compared to the previous release, RHEL 5.4. So what's new:
  • Kickstart installation - it is possible to exclude package groups in the same way as single packages.
  • KVM guests and Cluster Suite - managing KVM-based virtual guests with Cluster Suite is supported.
  • SPICE - RHEL 5.5 includes components of the Simple Protocol for Independent Computing Environments, a competitor to VMware's PCoIP or Citrix's HDX.
  • PCI passthrough - physical PCI devices attached to virtual guests work better.
  • Huge page support - it is extended to virtual guests with libvirt.
  • Windows 7 support - new samba3x packages supporting Windows 7 are included.
For more details read the RHEL 5.5 official release notes.

Wednesday, September 2, 2009

Red Hat Enterprise Linux 5.4 released

Today, the next minor version of Red Hat's flagship Linux distribution, RHEL 5.4, was released. Here is a brief summary of new features and updates:
  • KVM hypervisor - Full support for the Kernel-based Virtual Machine is now included. XEN support is included as well, but you can't use both XEN and KVM at the same time; each hypervisor requires a different kernel. You need a 64-bit machine to run KVM. It supports RHEL 3/4/5 or Windows XP/2003/2008 as guests.
  • KVM paravirtualized drivers - They are available for Windows XP/2003/2008 in the virtio-win package.
  • FUSE support - The new version includes modules for Filesystem in Userspace (FUSE) and related utilities. Support for XFS was added as well, and it includes updates of the CIFS and EXT4 filesystems.
  • Infiniband drivers - It contains some portions of the upcoming Open Fabrics Enterprise Distribution (OFED) 1.4.1.
The new RHEL release contains many other updates and enhancements not mentioned here. For more details, read the RHEL 5.4 official release notes.

Wednesday, August 19, 2009

Linux rc.local script

Sometimes you need to run some commands during your Linux server's startup, and you don't want to waste time preparing a proper init script. Common tasks are loading a kernel module, changing the speed of a network interface and so on.

For this task, Red Hat distributions provide the rc.local script. You can find it in the /etc/rc.d directory. The script is executed after all the other init scripts, which is ensured by the proper START symlinks pointing to /etc/rc.d/rc.local:

/etc/rc.d/rc2.d/S99local
/etc/rc.d/rc3.d/S99local
/etc/rc.d/rc4.d/S99local
/etc/rc.d/rc5.d/S99local

SUSE distros like SLES or OpenSUSE provide a similar mechanism with two scripts. The before.local script should contain everything you want to run before a runlevel is entered. The after.local script works like Red Hat's rc.local script: it contains what should be executed after the runlevel is reached. The scripts don't exist by default; you need to create them first in the /etc/init.d directory. They don't have to be made executable.

Besides this, Red Hat's rc.local script is executed only in runlevels 2, 3, 4 and 5; it is ignored in single user mode. SUSE's after.local and before.local are interpreted in all runlevels, including runlevel 1.

Tuesday, May 19, 2009

RHEL 4.8 released

Yesterday, the next minor version of Red Hat Enterprise Linux 4 was released. The new version 4.8 contains the following updates and enhancements:
  • optimized drivers for RHEL 4 guests running on the KVM hypervisor
  • SAMBA update for better interoperability with Windows world
  • new kernel tunables for better performance
For details, there are official release notes published at redhat.com.

Thursday, April 16, 2009

Linux kernel crash dumps with kdump

Kdump is the official GNU/Linux kernel crash dumping mechanism and is part of the vanilla kernel. Before it, there were projects like LKCD for performing such tasks, but they weren't part of the mainline kernel, so you needed to patch the kernel or rely on a Linux distribution to include them. In the case of LKCD, it was difficult to configure, especially choosing which device to use for dumping.

The first mention of kexec (read what it is useful for and how to use it) in the GNU/Linux kernel was in the changelog of version 2.6.7. The kexec tool is a prerequisite for the kdump mechanism. Kdump was first mentioned in the changelog of version 2.6.13.

How does it work? When the kernel crashes, a new, so-called capture kernel is booted via the kexec tool. The memory of the crashed kernel is left intact, and the capture kernel is able to capture it. In detail, the first kernel needs to reserve some memory for the capture kernel, which the capture kernel uses for booting. The consequence is that the total system memory is lowered by the reserved memory size.

When the capture kernel is booted, the old memory is captured from the following virtual /proc files:
  • /proc/vmcore - memory content in ELF format
  • /proc/oldmem - really raw memory image!
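The reservation and the capture step can be sketched as follows; the crashkernel size and the dump path are illustrative assumptions:

```shell
# On the production kernel: reserve memory for the capture kernel by
# appending crashkernel= to the kernel line in grub.conf, e.g.:
#   kernel /vmlinuz-2.6.18-... ro root=/dev/sda1 crashkernel=128M@16M
#
# In the capture kernel, after a crash, save the old kernel's memory:
mkdir -p /var/crash
cp /proc/vmcore /var/crash/vmcore   # ELF-format image of the crashed kernel
```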

Next, we will check how to initialize kdump mechanism, how to configure it and how to invoke it for testing purposes.

Thursday, March 12, 2009

Running Linux kexec

The generic form of kexec command looks like
kexec -l kernel_image --initrd=kernel_initrd --append=command_line_options
The command has many other options, but the ones presented are the most important. To start the kernel reset, run
kexec -e
How does it work? The Linux kernel is placed in memory at a defined address offset; on the x86 architecture it begins at 0x100000. Kexec is capable of loading and running another kernel in the context of the current kernel. It copies the new kernel into kernel dynamic memory and finally moves it to the final destination - the offset - and runs it: the kernel is exchanged and the reset is performed. An example of how to reset a running SLES 10.x kernel follows
kversion=`uname -r`
kexec -l /boot/vmlinuz-$kversion --initrd=/boot/initrd-$kversion --append="`cat /proc/cmdline`"
kexec -e
The example for RHEL 5.x is slightly different (with kversion set as above):
kexec -l /boot/vmlinuz-$kversion --initrd=/boot/initrd-${kversion}.img --append="`cat /proc/cmdline`"

Does it have any drawbacks? As I said, there may be some buggy devices which won't work after a kernel reset. Typically there are troubles with VGA adapters and their video memory initialization, which results in a garbled console after the reset. The recommendation is to use a normal video mode for the console. You can change it with the vga parameter set to zero and passed as a kernel option (e.g. SLES 10 uses a video framebuffer by default):
vga=0
Also, earlier versions of kexec had stability issues on platforms other than x86. Today, kexec is supported on x86, x86_64, ppc64 and ia64.

Tuesday, March 10, 2009

Fast linux reboot with kexec

Kexec is a GNU/Linux kernel feature which allows kernel reboots to be performed faster. Time savings of up to a few minutes result from not performing BIOS procedures and hardware reinitialization (each hardware part - like SCSI/FC HBAs - may have its own BIOS and POST, which take some time to finish). As we have cold and warm resets, we can now say we have a kernel reset.

The GNU/Linux boot process consists of several stages. The hardware stage, firmware stage and bootloader stage are kernel independent and run in a defined order. The hardware stage performs basic tasks such as device initialization and testing. The firmware stage, known on PCs as the BIOS, is in charge of hardware detection. The bootloader can be split into two parts: the first-level bootloader, like the master boot record on PCs, calls the second-level bootloader, which is able to boot the Linux kernel. The final stage is the kernel stage.

Kexec is a smart thing: it can bypass all the listed stages up to the kernel stage. That means it is able to skip everything connected with hardware and jump to the kernel stage directly. The risk is the possible unreliability of untouched devices, typically VGAs or some buggy cards.

What about the requirements to try it? The kernel has to be kexec-capable, and you have to have the kexec tools installed. That is not a problem in today's Linux distributions: both RHEL 5.x and SLES 10.x contain the kexec-tools package, which you have to install, and their production kernels are capable of doing kernel resets. On SLES 10, you can check the running kernel configuration for the CONFIG_KEXEC variable:
zgrep CONFIG_KEXEC /proc/config.gz


Kexec is controlled with the command line program kexec. The command takes the kernel to be booted, its initrd and kernel parameters, and starts the kernel reset.

Thursday, January 22, 2009

New RHEL 5.3

The next minor update of Red Hat Enterprise Linux was released recently. I wrote about its predecessor, RHEL 5.2, here a few months ago.

So what news does it bring? Let's have a look at some of them:
  • it's mainly an update release - there are updated packages providing auditd, NetworkManager or sudo
  • it contains many virtualization enhancements - the number of supported physical CPUs and the maximum memory are increased, and support for new Intel x86-64 CPUs is included
  • it is the first release with the OpenJDK Java implementation!
  • it contains an enhanced SystemTap (aka dtrace for Linux)
For more details, there are official release notes and article from Red Hat NEWS.

Thursday, November 13, 2008

Red Hat prefers KVM to XEN! No doubt!

It's unbelievable but it's true! Red Hat, in cooperation with AMD, performed a virtual machine live migration between different platforms - from an Intel CPU to an AMD CPU. You know, there are many difficulties in achieving this - various extensions, instructions and so on.

So far, it was possible to migrate only between processors from different families of a single vendor. Now Red Hat can do it with RHEL and KVM, which means Red Hat has definitely confirmed the replacement of XEN with KVM. I wrote about it a few months ago here. The whole video story is published on YouTube.

Monday, September 1, 2008

How to resize ext3 filesystem on RHEL 5.x

I had no luck when I was looking for the ext2online utility to resize an ext3 filesystem online on RHEL 5.x (it is available on RHEL 4.x). Online means resizing it without having to unmount the filesystem. I went through the release notes but didn't find any mention of it. Perhaps I didn't read them carefully.

The ext2online tool can be used to resize an ext2 filesystem, but that filesystem has to be unmounted. The tool is able to resize an ext3 filesystem online, provided the kernel supports online resizing. More particularly, only online enlarging is possible.

Alongside it, there exists another tool - resize2fs - which is capable of resizing ext2/ext3 filesystems, but on RHEL 4.x the filesystem has to be unmounted first. If you try to resize a mounted ext3 filesystem on RHEL 4.x with this tool, it will end with the error "can't resize a mounted filesystem!".

So, how do you resize an ext3 filesystem on the newer RHEL 5.x? Both tools belong to the e2fsprogs package, which contains a set of tools for creating, checking, modifying and correcting ext2/ext3 filesystems. On RHEL 4.x, the package contains both tools - ext2online in version 1.1.8 and resize2fs 1.35. On RHEL 5.x, it contains resize2fs only, in version 1.39. The newer version supports online resizing, provided the kernel supports it. Here is a summary of how to resize ext2/ext3 online on the RHEL platform:
  1. RHEL 4.x - use ext2online tool (e2fsprogs package)
  2. RHEL 5.x - use resize2fs tool (e2fsprogs package)
I don't consider it necessary to write about the usage of these tools; it's simple and the tools have man pages.
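As a minimal sketch of the RHEL 5.x case - the logical volume name and the added size are only illustrative assumptions:

```shell
# Grow an ext3 filesystem online on RHEL 5.x; assumes it sits on a
# hypothetical LVM volume /dev/vg0/lv_data and stays mounted throughout.
lvextend -L +10G /dev/vg0/lv_data   # enlarge the underlying volume
resize2fs /dev/vg0/lv_data          # grow the mounted fs to fill it
```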

Wednesday, August 20, 2008

Quickly - how to download a file to the ESX 3.x service console?

VMware ESX 3.x is missing the wget package, so you can't use the wget command to download anything from the Internet as you wish. Instead of wget, the service console provides the lwp-* tools, simple Perl scripts based on the LWP and URI Perl modules, which allow some basic tasks around the HTTP protocol.

The tools are part of the perl-libwww-perl package, which is installed by default. The most important tool is lwp-download, which you can use for downloading files. Let's check the steps to download something:
  1. esxcfg-firewall --allowOutgoing
    • allow outgoing connections from service console
  2. lwp-download http://dfn.dl..../apcupsd-3.14.4-1.el3.i386.rpm
    • download apcupsd package
  3. esxcfg-firewall --blockOutgoing
    • return firewall to the initial state
Besides this, the perl-libwww-perl package contains other tools like lwp-mirror, lwp-request and lwp-rget. Check their man pages for usage.

Wednesday, July 23, 2008

VMware ESXi will be free

Within a few days or weeks, VMware should release their lightweight hypervisor, VMware ESXi, for free. It is an enterprise-class hypervisor with a footprint of about 32 MB which is integrated into modern servers through e.g. solid state disks. The small footprint is achieved by dropping the so-called Console Operating System (based on RHEL 3). It includes basic functionality like vSMP and VMFS; for advanced features, you need to manage it with VMware VirtualCenter. You can download it from here.

Friday, July 18, 2008

RHEL and Infiniband - basic usage

As I wrote in the previous post, the /etc/init.d/openibd init script is in charge of starting the Infiniband (IB) network. The script parses the /etc/ofed/openibd.conf configuration file, where you can specify which ULPs should be initialized. By default, all the ULPs I mentioned last time - ipoib, srp, sdp - are enabled.

The opensm IB subnet manager is controlled with the /etc/init.d/opensmd init script, which is configurable via the /etc/ofed/opensm.conf configuration file. You can turn on debugging here, but it is not normally needed. It is more useful to enable verbose mode, which increases the log verbosity level. The default log file is /var/log/osm.log. So, if something goes wrong, enable verbose mode and check the log file.

After executing the init scripts, you should check the IB network state. The openibd script is started automatically during system startup, while opensmd has to be enabled manually (with ntsysv or chkconfig). Follow this checklist:
  1. Is Mellanox HCA recognized?
    • check the output of lsmod | grep ib_mthca
    • check the output of dmesg
  2. Are appropriate ULPs loaded?
    • check the output of lsmod | grep ib_
      • should contain ib_ipoib, ib_srp, ib_sdp
  3. Is IB network initialized and working?
    • check the output of cat /sys/class/infiniband/mthca0/ports/X/state
      • should be ACTIVE
  4. Is ib0 network interface available?
    • check the output of ifconfig -a
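The checklist can be condensed into a short script; a sketch, assuming a Mellanox HCA named mthca0 with port 1 in use:

```shell
# Quick IB sanity check; mthca0 and port 1 are assumed names
lsmod | grep -q ib_mthca && echo "HCA driver loaded"
lsmod | grep -q ib_ipoib && echo "IPoIB ULP loaded"
cat /sys/class/infiniband/mthca0/ports/1/state   # expect "4: ACTIVE"
ifconfig -a | grep -q '^ib0' && echo "ib0 interface present"
```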
If you pass all the checks, you will be able to use the IP protocol over the IB network. I assume you have at least two IB nodes in the IB network, both configured the same way and both having passed the checks (as in the first article). To configure IPoIB, follow these commands:
  1. assign an IP address to the nodes
    • run ifconfig ib0 IP_ADDR1 up at first node
    • run ifconfig ib0 IP_ADDR2 up at second node
  2. check the IPoIB functionality
    • run ping IP_ADDR2 from the first node
    • run ping IP_ADDR1 from the second node
So, wasn't it simple? If everything is working, the pings should receive replies from the other side. Now you can run any IP based application over IB - FTP, NFS and so on - and utilize its benefits like high throughput and low latency. Please leave me a comment if you are interested in the topic.

Tuesday, July 15, 2008

Quickly - RPM uninstall and scriptlet failure

Sometimes it happens that I'm not able to uninstall an RPM package because of some internal SPEC file errors related to the scriptlets. Last time it happened when I was uninstalling the HP OpenView Storage Data Protector packages from a RHEL server. By mistake, I uninstalled a package which was a dependency of another package; after that, I wasn't able to uninstall the dependent package because the dependency wasn't checked correctly. The whole uninstall procedure looked like this:
  1. rpm -e OB2-CORE-A.06.00-1
  2. rpm -e OB2-DA-A.06.00-1
And the produced error follows:
  • ERROR: Cannot find /opt/omni//bin/omnicc
  • error: %preun(OB2-DA-A.06.00-1.x86_64) scriptlet failed, exit status 3
So, is there a way to get rid of such a package? Yes, there is, and it is simple: just disable executing the scriptlets like this:
  1. rpm -e --noscripts OB2-DA-A.06.00-1
I think it is a pretty simple feature of RPM, but it is a bit difficult to remember.

Monday, July 7, 2008

RHEL and Infiniband - software intro

Let's continue with the software introduction. As I wrote, the switch is equipped with ALOM remote management. There is a universal set of commands for platform independent management, like password, poweroff, setupsc, resetsc and so on, and then a set of commands which are more specific to the platform. In the case of our IB switch, there are two such commands:
  1. setbp - for setting the so-called blueprint of the switch
  2. showbp - for showing the current blueprint
There are five predefined blueprints: 9 node, 12 node, 18 node, none and unmanaged.
The natural question is: what does the blueprint mean? According to the official documentation, it seems to be a predefined configuration of the switch. You can change it with the setbp command, which asks whether you want to run the IB management software, how many hosts will be in the subnet and what the subnet identifier is. By default, if you use switches preconfigured from the factory, two switches will have the same subnet ID. The trouble is, if you intend to configure some level of redundancy between IB switches, you will have to have them in different subnets with different subnet IDs. I find it strange that I had to disable the IB management software, because otherwise I wasn't able to see the nodes in the fabric. As we will see, the IB management software, including the IB subnet manager, doesn't seem to like the OFED included in the RHEL distro (I wrote more about RHEL and OFED here).

What about the servers? I preinstalled them with the CentOS 5.1 distribution (which is binary compatible with RHEL 5.1). The distribution contains the OFED implementation in version 1.2. The complete OFED implementation in CentOS is divided into a set of RPM packages. The platform dependent part of OFED - the kernel modules - is distributed with the kernel package. Let's make a quick summary of the basic packages:
  1. kernel - contains IB hardware, IB core and IB ULP modules
    • ULP means Upper Level Protocol
    • everything is placed in the following directories:
      • /lib/modules/`uname -r`/kernel/drivers/infiniband/hw
      • /lib/modules/`uname -r`/kernel/drivers/infiniband/core
      • /lib/modules/`uname -r`/kernel/drivers/infiniband/ulp
    • currently, only IB HCAs from Mellanox are supported
    • the supported ULPs are
      • ipoib - IP over IB driver
      • srp - IB SCSI RDMA initiator driver
      • sdp - SDP driver
  2. openib - this package contains a lot of useful documentation and, importantly, the OFED configuration file /etc/ofed/openib.conf and the init script /etc/init.d/openibd, which takes care of activating/deactivating the IB network interfaces. Simply put, it loads the IB core modules and the ULP modules specified in the config.
  3. openib-diags - this package contains diagnostic tools for IB debugging, I will introduce them later.
  4. opensm - here we have our IB subnet manager. The package provides the init script /etc/init.d/opensmd for starting it and the /etc/ofed/opensm.conf configuration file.
  5. libibverbs - this package provides a library allowing userspace programs direct hardware access.
  6. libibcommon, libibmad, libibumad, opensm-libs - and finally library dependencies for the above packages.
I need to add that the OFED packages belong to the System Environment/Libraries RPM group and, apart from openib, libibverbs and of course the kernel package, they are not installed by default. That's all for now; next time I'm going to describe how to work with it.


Tuesday, July 1, 2008

RHEL and Infiniband - hardware intro

In my two previous articles, I summarized a few facts about Infiniband support in RHEL distros and the included protocols - you can go through them via the following links: RHEL and Infiniband support and Infiniband, RDP, SDP.... Let's be more specific now.

My scenario was based on two Sun Fire X4200 M2 servers and one Infiniband (IB) switch, a Sun IB Switch 9P. The servers had Infiniband host channel adapters (HCA) installed - Sun Dual Port 4x IB HCA - to be able to communicate over the IB fabric. The switch provides nine IB compliant ports at dual speeds of 4X/12X, which means each port is able to deliver 10/30 Gbit of raw bandwidth. What surprised me was that the switch management is like that of the Sun SPARC midrange servers. Yes, it is ALOM, and it is perfect because you can use the same interface and similar commands you are used to. By the way, the switch chassis looks like a regular Sun server.

The switch is equipped with an IB subnet manager (SM), which is required to initialize the IB hardware and to allow communication over the IB fabric. Each IB subnet has to have at least one SM, and each subnet has an unambiguous identifier (ID) within the fabric. To be complete: the fabric comprises the defined subnets. In my opinion, the IB SM works somewhat like an ARP cache and DHCP server in LANs. Each HCA in a fabric is globally identified by a so-called node GUID, which is like a WWN in FC or a MAC in a LAN. The switch has its own GUID as well, and the ports of an HCA have so-called port GUIDs. Now, when one HCA or one of its ports wants to communicate with another one in the subnet, it needs an assigned network address. This address is called a LID, or local identifier, and the IB SM is in charge of assigning it to the members of the subnet. The conclusion is that LIDs are valid inside the subnet only, while GUIDs are routable across the subnets of the fabric.

But one thing confused me a bit: when you configure the switch, you need to remember to set its blueprint, otherwise you are asking for trouble. I'm going to write about it in the next part.

Friday, June 20, 2008

Red Hat prefers KVM to XEN?

Wow, the situation around Red Hat's attitude to the virtualization maze seems clearer now. I thought that Red Hat was going to support solutions based on the XEN hypervisor. In 2007, they released RHEL 5.0, their first distro with the XEN hypervisor integrated, and I was looking forward to it.

But Red Hat considered XEN immature as well. According to the article published at www.virtualization.info, the main reasons for the decision were the acquisition of XenSource by Citrix and the collaboration between Microsoft, Novell and other vendors interested in XEN.

A few days ago, Red Hat unveiled their new virtualization strategy, based on embedding the KVM hypervisor into their RHEL distro. The official announcement is published here and summarizes some of its advantages.

I think it is interesting news, but with many unanswered yet important questions. Will they support both hypervisors? Or are they going to support only KVM from now on? What about their customers who have already adopted XEN in their environments? In my opinion, it will be quite difficult to make it mainstream. Let's wait and see...