Setup Xen 3.4.1 Dom0 on top of Ubuntu 9.04 Server via Marc-A. Dahlhaus's UDEV patch

June 25, 2009

Per Marc-A. Dahlhaus :-
Udev removed the udevinfo symlink in versions higher than 123, and Xen's build system could not detect whether udev is in place with the required version. In particular, Ubuntu 9.04 Server ships udev version 141 and is affected by this issue: a straightforward Xen 3.4.1 build brings up a Xen host whose hotplug scripts refuse to work.
The recent patch suggested by Marc resolves this problem. It doesn't happen on F11 due to :-

[root@ServerXen341 /]# ls -l /usr/bin/udevinfo
lrwxrwxrwx. 1 root root 18 2009-06-12 14:13 /usr/bin/udevinfo -> ../../sbin/udevadm
[root@ServerXen341 /]# /usr/bin/udevinfo -V
the program '/bin/bash' called '/usr/bin/udevinfo', it should use 'udevadm info ', this will stop working in a future release
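The threshold involved here can be sketched as a tiny helper (the function name and layout are mine, not Xen's; it only encodes the fact that udev releases above 123 dropped udevinfo, so the version must be read via udevadm):

```shell
# Hypothetical sketch of the check Xen's build system needed: udev releases
# above 123 dropped the udevinfo binary, so version queries must go through
# udevadm instead. The function only encodes that threshold.
udev_query_tool() {
    ver="$1"
    if [ "$ver" -gt 123 ]; then
        echo "udevadm"      # e.g. udev 141 on Ubuntu 9.04 Server
    else
        echo "udevinfo"
    fi
}

udev_query_tool 141
udev_query_tool 117
```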

A brief description of the Xen 3.4.1 build follows below. First, install on Ubuntu 9.04 Server all packages required for the Xen build :-

apt-get install libcurl4-openssl-dev \
xserver-xorg-dev \
python2.6-dev \
mercurial gitk \
build-essential \
libncurses5-dev \
uuid-dev gawk \
gettext texinfo bcc

Second step :-

# cd /usr/src
# hg clone
# cd xen-3.4-testing.hg
Set in the build configuration file:
PYTHON = python

This tuning results in the Xen packages being placed into /usr/local/lib/python2.6/dist-packages, due to changeset 19594 in xen-3.4-testing.hg. Otherwise the Xen packages would go to /usr/lib/python2.6/site-packages, which is not the default location for python 2.6 on Ubuntu 9.04 (vs F11), and you would not be able to start xend in Dom0. The same thing happens when building Xen Unstable on Ubuntu 9.04 Server.
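A quick way to confirm which of the two candidate directories actually received the Xen python modules (the helper name is hypothetical; the two paths are the ones discussed above):

```shell
# Hypothetical helper: print the first candidate directory that contains the
# installed 'xen' python package; complain if none of them does.
find_xen_pkgs() {
    for d in "$@"; do
        if [ -d "$d/xen" ]; then
            echo "$d"
            return 0
        fi
    done
    echo "xen python modules not found" >&2
    return 1
}

# Example (paths from the discussion above); '|| true' because neither may
# exist on the machine where this sketch is run.
find_xen_pkgs /usr/local/lib/python2.6/dist-packages /usr/lib/python2.6/site-packages || true
```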

Update on 07/01/09: view changeset 19668,
“Fix buildsystem to detect udev > version 124”.
Patching is no longer needed.
Now build Xen 3.4.1 from source :-

# make install-xen
# make install-tools
# make install-stubdom
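The three install steps can be scripted so the sequence stops at the first failure (the wrapper function is mine; the target names are Xen's own):

```shell
# Hypothetical wrapper: run make targets in order, stopping at the first
# failure so a broken install-xen doesn't get hidden by later steps.
run_targets() {
    for t in "$@"; do
        make "$t" || { echo "failed: make $t" >&2; return 1; }
    done
}

# In the xen-3.4-testing.hg tree:
# run_targets install-xen install-tools install-stubdom
```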

Updated on 10/05/09 due to changes in JF's Git repo.
Install a pvops-enabled kernel from Jeremy Fitzhardinge's git repository. Check out the most recent branch:-

# git clone git:// linux-2.6-xen
# cd linux-2.6-xen

Make sure your current branch is xen/master

# git branch

Setup Xen Dom0 Support

# make menuconfig

Now build the kernel :-

# make -j(number_of_cores)
# make modules_install install
# mkinitramfs -o /boot/initrd-
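Rather than hard-coding the -j level, it can be derived from the number of online CPUs (a small sketch; getconf is assumed to be available, with a fallback to 1):

```shell
# Sketch: derive the 'make -j' parallelism from the number of online CPUs
# instead of hard-coding it; falls back to 1 if getconf is unavailable.
jobs=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
echo "building with: make -j${jobs}"
```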

Add entry to /boot/grub/menu.lst:-

title Xen 3.4 / Ubuntu 9.04 kernel
uuid 34d2c0bd-fe30-47e0-990e-4921caf1e845
kernel /boot/xen-3.4.gz
module /boot/vmlinuz- root=/dev/sdb2 ro console=tty0
module /boot/initrd-
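Before rebooting into the new entry it is worth confirming that every file the stanza names actually exists; this hypothetical helper takes the paths as arguments (the commented example uses placeholder version suffixes, since the exact kernel version depends on your build):

```shell
# Hypothetical sanity check: report any file named in the grub stanza that
# does not exist, before rebooting into it.
check_boot_files() {
    rc=0
    for f in "$@"; do
        [ -f "$f" ] || { echo "missing: $f"; rc=1; }
    done
    return $rc
}

# check_boot_files /boot/xen-3.4.gz /boot/vmlinuz-<version> /boot/initrd-<version>.img
```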

The final step is to set up VNC in Dom0, to be able to manage the Xen Dom0 with the pvops kernel remotely via vinagre or vncviewer.

A different option is to install a xenified kernel by downloading the source and applying Andrew Lyon's rebased patch set :-

# wget

followed by building the xenified kernel :-

# make menuconfig
# make -j(number_of_cores)
# make modules_install install
# mkinitramfs -o /boot/initrd-

Remote vinagre console

The most recent screenshots: Xen 3.4.1-rc10 Dom0 with the 2.6.31-rc4 pvops kernel on top of Ubuntu 9.04 Server

Setup Fedora 11 PV DomU at Xen 3.4.1 Dom0 (kernel 2.6.31-rc3) on top of Fedora 11

June 10, 2009

The most impressive F11 Xen-related features seem to be the clean Xen 3.4.1 build with python 2.6 (the default on F11) and the graphical installer behavior during the pygrub-based PV DomU installation phase. F11 should be installed without libvirt to avoid conflicts during the Xen 3.4.1 port to the Fedora 11 instance. A Xen 3.3.1 hypervisor working with libvirt may be installed on F11 via xen-3.3.1-11.fc11.src.rpm. Notice that the mentioned version of Xen 3.3.1 is already patched to work with pvops kernels and may also be patched for pygrub ZFS support; view [1] for details. It appears that setting the default path for python packages to /usr/lib/python2.6/site-packages resolves the Xen build issues that come up on Ubuntu 9.04 Server (/usr/local/lib/python2.6/dist-packages).
Update on 08/24/2009: view the most recent post :-
Fedora 11 as the best target for Xen 3.4.1 & Libvirt 0.7.0-6 deployment
I have to note that Libvirt 0.7.0-6 (in other words, virt-install and virt-manager) being able to work with the Xen 3.4.1 hypervisor is an obvious advantage of F11 over the Ubuntu Karmic, Jaunty, Intrepid, and Hardy Servers. Virt-install was broken in Hardy and afterwards was not supposed to work with Xen at all.
Dependencies unacceptable for Xen 3.4.1

yum install python-virtinst
. . . . . .
Dependencies Resolved
Package Arch Version Repository Size
python-virtinst noarch 0.400.3-8.fc11 fedora 401 k
Installing for dependencies:
iscsi-initiator-utils x86_64 fedora 750 k
libvirt x86_64 0.6.2-11.fc11 updates 1.8 M
libvirt-python x86_64 0.6.2-11.fc11 updates 116 k
qemu-img x86_64 2:0.10.4-4.fc11 updates 100 k
-> xen-libs x86_64 3.3.1-11.fc11 fedora 176 k
Transaction Summary
Install 6 Package(s)
Update 0 Package(s)
Remove 0 Package(s)
Total download size: 3.4 M

Proceed with building Xen 3.4.1 Dom0 on top of F11.

# yum install gitk dev86 vnc-server bridge-utils
# cd /usr/src
# hg clone
# cd xen-3.4-testing.hg
# make xen
# make install-xen
# make tools
# make install-tools

Building the pvops-enabled kernel.

1. To checkout the master branch:-

# git clone git:// linux-2.6-xen
# cd linux-2.6-xen
# git checkout origin/xen-tip/master -b xen-tip/master

2. To checkout the most recent branch:-

# git clone git:// linux-2.6-xen
# cd linux-2.6-xen
# git checkout origin/rebase/master -b rebase/master

To set up Xen Dom0 support, activate the following options for the pvops kernel :-

1. Processor Type and features -> Paravirtualized guest support -> Enable Xen Privileged Domain Support <*>

2. Device Drivers -> Block Devices ->
Xen Virtual Block Device Support <*>

3. Device Drivers -> [*] Backend driver support
<*>Block-device backend driver
<*> Xen backend network device
<*> Xen filesystem
[*] Create compatibility mount point /proc/xen
[*] Create xen entries under /sys/hypervisor
[*] userspace grant access device driver
[*] Staging drivers --->
[*] X86 Platform Specific Device Drivers --->
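As a rough guide, the menu choices above correspond to .config symbols along these lines (the symbol names are assumptions on my part; exact names vary between pvops trees, so verify against your tree's Kconfig):

```
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_XENFS=y
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_GNTDEV=y
```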

# make menuconfig

# make
# make modules_install install

Install xen-ified kernel :-

# wget
# tar -zxvf linux-2.6.29-xen-r4-aka-suse-xenified-2.6.29-62.1.tar.gz
# cd linux-2.6.29-xen-r4-aka-suse-xenified-2.6.29-62.1
# make O=~user1/build menuconfig
# make O=~user1/build
# make O=~user1/build modules_install install

Tuning xen-ified kernel :-

Subarchitecture Type (Enable Xen compatible kernel)
( ) PC-compatible
(X) Enable Xen compatible kernel
( ) Support for ScaleMP vSMP
Device Drivers --->
XEN --->
[*] Privileged Guest (domain 0)
<*>Backend driver support
<*>Block-device backend driver
<*>Block-device tap backend driver
<*> Network-device backend driver

Add to /etc/fstab :-

none /proc/xen xenfs defaults 0 0

Create a grub entry:-

title Xen 3.4 / Fedora kernel 2.6.30-rc6-tip
kernel /boot/xen-3.4.gz
module /boot/vmlinuz-2.6.30-rc6-tip root=/dev/mapper/vg_fedora11-LogVol00 ro console=tty0
module /boot/initrd-2.6.30-rc6-tip.img

Set initdefault to 3 in /etc/inittab and reboot into the Xen environment,
with /etc/rc.local set to start:-

export HOME=/root
vncserver :1 -geometry 1280x1024 -depth 16
/etc/init.d/xend start
/etc/init.d/xendomains start

It would be better to set up xend and xendomains to run as services :-

# chkconfig xend on
# chkconfig xendomains on

View also :-
Remote Login with GDM and VNC on Fedora 11, regarding the standard setup of a resumable VNC session. It seemed too unstable to me: several actions requiring root authorization caused the VNC session to be interrupted.
The file /etc/gdm/custom.conf didn't contain a [daemon] section, and there were no instructions regarding RemoteGreeter settings.

Connect to Xen Host remotely via vncviewer

Bring up a local Apache server to create an HTTP installation source.

# chkconfig httpd on
# service httpd start
# mount -o loop f11.iso /var/www/html/f11
# wget
# wget

Create installation profile:-

disk = ['phy:/dev/sdc7,xvda,w' ]
vif = [ 'bridge=eth0' ]
vfb = [ 'type=vnc,vncunused=1']
kernel = "/home/boris/fedora/vmlinuz"
ramdisk = "/home/boris/fedora/initrd.img"
on_reboot = 'restart'
on_crash = 'restart'

# xm create f11.install
# vncviewer localhost:0
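Typos in the profile only surface at xm create time; a small pre-flight check (the helper name and key list are my assumptions) can confirm that the keys you expect are actually defined:

```shell
# Hypothetical pre-flight check: verify that the xm profile defines each of
# the expected keys before handing it to 'xm create'.
check_profile() {
    file="$1"; shift
    for key in "$@"; do
        grep -q "^${key}[[:space:]]*=" "$file" || { echo "missing key: $key"; return 1; }
    done
    echo "profile ok"
}

# check_profile f11.install disk vif vfb kernel ramdisk
```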

This time the graphical installer comes up with no issues (vs F10). When prompted by the installer for the installation source, choose URL and submit http://IP-Dom0/f11

Load DomU via profile:-

disk = ['phy:/dev/sdc7,xvda,w' ]
vif = [ 'bridge=eth0' ]
vfb = [ 'type=vnc,vncunused=1']
bootloader = "/usr/bin/pygrub"
on_reboot = 'restart'
on_crash = 'restart'

# xm create f11.pyrun
# vncviewer localhost:0

OpenSolaris 2009.06 PV DomU at the same Xen Host:-

1. Backport ZFS support to Xen 3.3.1 F10 Dom0 (kernel 2.6.30-rc3-tip)

Backport ZFS support for pygrub to Xen 3.3.1, provided via

June 7, 2009

Gitco provides xen-3.3.1-0.src.rpm for free download, which gives an immediate option to backport pygrub ZFS support from the Xen 3.4-testing mercurial tree. The raw content of changesets 19322 and 19323 is placed into patch files under /usr/src/redhat/SOURCES.
The file /usr/src/redhat/SPECS/xen-3.3.1.spec is updated to process the two additional patches added to SOURCES. The last step is to run rpmbuild to create RPMS with the patches, to be installed instead of the original ones. Details follow below :-

# yum -y install transfig texi2html tetex-latex gtk2-devel libaio-devel gnutls-devel
# yum update ecryptfs-utils
[root@ServerXen ~]# wget
Connecting to||:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11381385 (11M) [application/x-redhat-package-manager]
Saving to: `xen-3.3.1-0.src.rpm'
100%[========================================================>] 11,381,385 82.0K/s in 97s
16:04:39 (115 KB/s) - `xen-3.3.1-0.src.rpm' saved [11381385/11381385]

[root@ServerXen ~]# rpm -iv xen-3.3.1-0.src.rpm

Edit accordingly :-

# vi /etc/yum.conf
# vi /etc/yum.repos.d/XEN.repo
name=CentOS-$releasever - XEN

Change directory to /usr/src/redhat and add the required changesets to SOURCES as patches :-

[root@ServerXen redhat]# ls -l
total 40
drwxr-xr-x 3 root root 4096 Jun 7 13:07 BUILD
drwxr-xr-x 4 root root 4096 Jan 28 15:54 RPMS
drwxr-xr-x 2 root root 4096 Jun 7 13:01 SOURCES
drwxr-xr-x 2 root root 4096 Jun 7 13:06 SPECS
drwxr-xr-x 2 root root 4096 Jun 7 13:19 SRPMS
[root@ServerXen redhat]# cd SOURCES
[root@ServerXen SOURCES]# ls -l
total 11116
-rw-r--r-- 1 root root 1296 Jan 21 00:47 xen-3.3.1-config.patch
-rw-r--r-- 1 root root 1779 Jan 21 00:47 xen-3.3.1-dumpdir.patch
-rw-r--r-- 1 root root 1335 Jun 7 13:01 xen-3.3.1-hg19322.patch
-rw-r--r-- 1 root root 1392 Jun 7 13:01 xen-3.3.1-hg19323.patch
-rw-r--r-- 1 root root 2229 Jan 21 00:47 xen-3.3.1-hotplug-locking-rhel.patch
-rw-r--r-- 1 root root 7063 Jan 21 00:47 xen-3.3.1-initscripts.patch
-rw-r--r-- 1 root root 11329774 Jan 5 15:28 xen-3.3.1.tar.gz
-rwx------ 1 root root 325 Aug 26 2008 xen.sysconfig
[root@ServerXen SOURCES]# cd ../SPECS
[root@ServerXen SPECS]# ls -l
total 36
-rw-r--r-- 1 root root 35130 Jun 7 13:06 xen-3.3.1.spec

Modify the spec file accordingly :-

[root@ServerXen SPECS]# vi xen-3.3.1.spec
. . . . .
Patch1: %{name}-%{version}-initscripts.patch
Patch2: %{name}-%{version}-hotplug-locking-rhel.patch
Patch3: %{name}-%{version}-dumpdir.patch
Patch4: %{name}-%{version}-config.patch
Patch5: %{name}-%{version}-hg19322.patch
Patch6: %{name}-%{version}-hg19323.patch
. . . . . . . . . . .
%patch1 -p1 -b .init
%patch2 -p1
%patch3 -p1
%patch4 -p1
%patch5 -p1
%patch6 -p1
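The same spec edits can be scripted; this is a sketch assuming GNU sed (the helper name is mine), appending the two new Patch declarations after Patch4 and the matching %patch apply lines after %patch4:

```shell
# Sketch (GNU sed): append the two new Patch declarations after Patch4 and
# the matching %patch apply lines after %patch4 in the given spec file,
# mirroring the manual edit shown above.
add_zfs_patches() {
    sed -i \
      -e '/^Patch4:/a Patch5: %{name}-%{version}-hg19322.patch' \
      -e '/^Patch4:/a Patch6: %{name}-%{version}-hg19323.patch' \
      -e '/^%patch4 /a %patch5 -p1' \
      -e '/^%patch4 /a %patch6 -p1' \
      "$1"
}

# add_zfs_patches /usr/src/redhat/SPECS/xen-3.3.1.spec
```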

Now build

[root@ServerXen SOURCES]# rpmbuild -ba ./xen-3.3.1.spec

When done, install (or reinstall) the patched RPMS :-

[root@ServerXen SPECS]# cd ../RPMS/x86_64
[root@ServerXen x86_64]# ls -l
total 9724
-rwxr-xr-x 1 root root 131 Jun 7 13:38
-rw-r--r-- 1 root root 9268378 Jun 7 13:19 xen-3.3.1-0.x86_64.rpm
-rw-r--r-- 1 root root 228948 Jun 7 13:19 xen-debuginfo-3.3.1-0.x86_64.rpm
-rw-r--r-- 1 root root 260567 Jun 7 13:19 xen-devel-3.3.1-0.x86_64.rpm
-rw-r--r-- 1 root root 161287 Jun 7 13:19 xen-libs-3.3.1-0.x86_64.rpm
[root@ServerXen x86_64]# cat
yum install xen-3.3.1-0.x86_64.rpm \
xen-debuginfo-3.3.1-0.x86_64.rpm \
xen-devel-3.3.1-0.x86_64.rpm \
[root@ServerXen x86_64]# ./

OpenSolaris 2009.06 PV DomU running at Xen 3.3.1 Dom0 on CentOS 5.2 :-

OpenSolaris 2009.06 PV DomU running at Xen 3.3.1 Dom0 on CentOS 5.3,
with the system completely reinstalled using Gitco's patched rpms:-

SSH connection to Xen 3.3.1 Dom0:-

Setup OpenSolaris 2009.06 PV DomU at Xen 3.5-unstable Dom0 (kernel 2.6.30-rc6-tip)

June 3, 2009

Pygrub ZFS support for the most recent Sun Solaris Nevada and OpenSolaris images was introduced into Xen 3.4
via changesets 19323 and 19322, which makes the OpenSolaris 2009.06 PV DomU install pretty straightforward compared with how it went under the Xen 3.3.1 hypervisor. Backporting these changesets to Xen 3.3.1 would be very helpful, in my opinion.

Copy ramdisk and kernel to Dom0

[root@ServerXen isos]# cat
mount -o loop,ro osol-0906-x86.iso /mnt
cp /mnt/boot/amd64/x86.microroot /home/boris/solaris
cp /mnt/platform/i86xpv/kernel/amd64/unix /home/boris/solaris

Installation profile:-

[root@ServerXen solaris]# cat osol200906.install
name = "sol0906"
vcpus = 1
memory = "1024"
kernel = "/home/boris/solaris/unix"
ramdisk = "/home/boris/solaris/x86.microroot"
extra = "/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom"
disk = ['phy:/dev/loop0,6:cdrom,r','phy:/dev/sdb5,0,w']
vif = ['bridge=eth1']
on_shutdown = "destroy"
on_reboot = "destroy"
on_crash = "destroy"

When logged in as jack/jack:

$ mkdir .vnc
$ cp .Xclients .vnc/xstartup
$ vncserver
$ pfexec ifconfig -a

Connect via vncviewer from Dom0 to IP-DomU:1 and proceed
with install:-

Runtime profile for Xen 3.4 (3.5-unstable) Dom0

[root@ServerXen solaris]# cat os0906.pyrun
name = 'OS0L906'
memory = 2048
vcpus = 2
bootloader = '/usr/bin/pygrub'
disk = ['phy:/dev/sdb5,0,w']
vif = [ 'bridge=eth1' ]

VNC Setup

cat /etc/X11/gdm/custom.conf
# GDM Custom Configuration file.
# overrides: /usr/share/gdm/defaults.conf
# AllowRoot=true
# AllowRemoteRoot=true

Services restart:-

svcadm disable xvnc-inetd gdm
svcadm enable xvnc-inetd gdm

Failure to obtain an IP via DHCP at boot causes service error messages to go to the console and the login prompt to be lost.
It seems the old “checksum offloading failure” bug is still affecting
OSOL 2009.06. At your earliest convenience, add to /etc/system

set xnf:xnf_cksum_offload = 0

and reboot .
It happened to me on the box with Marvell Yukon PCI-E Gigabit
Ethernet 88E8056 in Dom0.

