Set up Ubuntu Lucid Server PV DomU at Xen 4.0 (kernel-xen-) on top of openSUSE 11.3

Install “kotd” (kernel of the day) via kernel-xen- (supporting udev 157). The next step is to upgrade Xen 4.0 to support Lucid’s Grub2 syntax. This actually requires just one backport, CS 21188 from xen-4.0-testing.hg, which is not in the official 4.0 tarball used for 11.3. The procedure below is pretty much standard and is described in detail mainly to show how much more flexible patching the Xen hypervisor is on Fedora: there, all properly versioned rpms built from the src.rpm get installed, upgrading the old ones right away, with no questions asked and no additional zypper (yum) install run.
One more patch has been tested with SUSE’s 11.3 version of the Xen hypervisor: ZFS 24 support, per Mark Johnson ([1]).

# wget
# rpm -iv xen-4.0.0_21091_05-6.6.src.rpm
# cd /usr/src/packages/SOURCES

Create 21188-grub2-fix.patch with the raw content of CS 21188:
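The changeset can be exported from a local clone of xen-4.0-testing.hg, e.g. `hg export -r 21188 > 21188-grub2-fix.patch` (the repository URL and revision addressing are assumptions). Whatever the source, the file must be a unified diff whose paths survive the spec’s `%patch44 -p1`; a self-contained sketch of that format (paths and contents fabricated for the demo):

```shell
# Illustrative only: the shape of a unified diff that "%patch44 -p1" consumes.
mkdir -p /tmp/patchdemo/a/tools /tmp/patchdemo/b/tools
printf 'menu.lst parsing only\n'        > /tmp/patchdemo/a/tools/pygrub.py
printf 'menu.lst and grub.cfg parsing\n' > /tmp/patchdemo/b/tools/pygrub.py
cd /tmp/patchdemo
# diff exits 1 when the files differ, hence "|| true"
diff -u a/tools/pygrub.py b/tools/pygrub.py > 21188-grub2-fix.patch || true
# -p1 strips the leading "a/" or "b/" path component when applying:
cd a && patch -p1 --dry-run < ../21188-grub2-fix.patch
```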

# cd ../SPECS

Update xen.spec (the Patch44: line goes with the other PatchNN declarations; the %patch44 line goes in the %prep section):
Version: 4.0.0_21091_05
# Old one 6.6
Release: 6.7
Patch44: 21188-grub2-fix.patch
%patch44 -p1
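These edits can also be scripted; a minimal sketch against a tiny stand-in spec (the mini spec itself is fabricated for the demo, the values mirror the ones above):

```shell
# Demo spec fragment standing in for the real xen.spec.
cat > /tmp/xen.spec <<'EOF'
Version:        4.0.0_21091_05
Release:        6.6
Patch43:        some-earlier.patch
%prep
%patch43 -p1
EOF
# Bump Release, register the new patch, and apply it in %prep.
sed -i -e 's/^Release:.*/Release:        6.7/' \
       -e '/^Patch43:/a Patch44:        21188-grub2-fix.patch' \
       -e '/^%patch43 -p1/a %patch44 -p1' /tmp/xen.spec
grep -E '^(Release|Patch44)|%patch44' /tmp/xen.spec
```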


# zypper install LibVNCServer-devel SDL-devel acpica automake bin86 curl-devel dev86 \
graphviz latex2html libjpeg-devel libxml2-devel ncurses-devel openssl openssl-devel \
pciutils-devel python-devel texinfo transfig \
texlive texlive-latex \
glibc-32bit glibc-devel-32bit

# rpmbuild -bb ./xen.spec
# cd ../RPMS/x*
# zypper install xen-4.0.0_21091_05-6.7.x86_64.rpm \
xen-devel-4.0.0_21091_05-6.7.x86_64.rpm \
xen-doc-html-4.0.0_21091_05-6.7.x86_64.rpm \
xen-doc-pdf-4.0.0_21091_05-6.7.x86_64.rpm \
xen-kmp-default-4.0.0_21091_05_k2.6.34.0_12-6.7.x86_64.rpm \
xen-kmp-desktop-4.0.0_21091_05_k2.6.34.0_12-6.7.x86_64.rpm \
xen-libs-4.0.0_21091_05-6.7.x86_64.rpm \
xen-tools-4.0.0_21091_05-6.7.x86_64.rpm \
xen-tools-domU-4.0.0_21091_05-6.7.x86_64.rpm

During the first run xen-tools-4.0.0_21091_05-6.6.x86_64.rpm was removed
and xen-tools-domU-4.0.0_21091_05-6.7.x86_64.rpm was installed instead.

The second step was

# zypper install xen-tools-4.0.0_21091_05-6.7.x86_64.rpm
which caused removal of xen-tools-domU-4.0.0_21091_05-6.7.x86_64.rpm

Finally, on the working system:

linux-y4jf:/usr/src/packages/RPMS/x86_64 # rpm -qa|grep xen|grep -v kernel

Hence, the last line (xen-tools-domU) should be removed from the install list for a smooth hypervisor upgrade.
Activated xend, xendomains and libvirtd via YaST, and rebooted the Xen host. xm info on the upgraded host reports:

host : linux-y4jf
release :
version : #1 SMP 2010-07-30 10:41:56 +0200
machine : x86_64
nr_cpus : 4
nr_nodes : 1
cores_per_socket : 4
threads_per_core : 1
cpu_mhz : 2833
hw_caps : bfebfbff:20100800:00000000:00000940:0408e3fd:00000000:00000001:00000000
virt_caps : hvm
total_memory : 8150
free_memory : 26
free_cpus : 0
max_free_memory : 5533
max_para_memory : 5529
max_hvm_memory : 5508
node_to_cpu : node0:0-3
node_to_memory : node0:26
node_to_dma32_mem : node0:26
max_node_id : 0
xen_major : 4
xen_minor : 0
xen_extra : .0_21091_05-6.7
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : 21091
xen_commandline : vgamode=0x31a vgamode=0x31a
cc_compiler : gcc version 4.5.0 20100604 [gcc-4_5-branch revision 160292] (SU
cc_compile_by : root
cc_compile_domain : site
cc_compile_date : Sat Jul 31 16:07:27 MSD 2010
xend_config_format : 4

Now /usr/bin/pygrub supports the Grub2 syntax of Ubuntu 10.04.
Create an Ubuntu 10.04 Server HVM DomU:

virt-install -n LucidHVM -r 2048 --hvm --vnc -f /dev/sdb5 -c /home/user1/lucidSRV.iso --debug

and a LucidPVG.xml file to define the Lucid Server PV DomU:

<domain type='xen'>
  <clock offset='utc'/>
  <devices>
    <disk type='block' device='disk'>
      <driver name='phy'/>
      <source dev='/dev/sdb5'/>
      <target dev='xvda' bus='xen'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:16:3e:77:0b:94'/>
      <source bridge='br0'/>
      <script path='/etc/xen/scripts/vif-bridge'/>
      <target dev='vif3.0'/>
    </interface>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target port='0'/>
    </console>
    <input type='mouse' bus='xen'/>
    <graphics type='vnc' port='5900' autoport='yes'/>
  </devices>
</domain>
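A complete libvirt definition also needs top-level <name>, <memory> and <os> (or <bootloader>) elements, which are omitted above. Before defining the guest it is worth checking that the XML at least parses; a minimal sketch (validated here against a stand-in file — point it at LucidPVG.xml on the real host, where the commented virsh calls then register and boot the DomU; the domain name is an assumption):

```shell
# Stand-in domain XML for the parse check.
cat > /tmp/domain-check.xml <<'EOF'
<domain type='xen'>
  <clock offset='utc'/>
</domain>
EOF
# Validate well-formedness with Python's stdlib parser (xmllint works too).
python3 -c 'import xml.dom.minidom; xml.dom.minidom.parse("/tmp/domain-check.xml")' \
  && echo "well-formed"
# On the Xen host:
#   virsh define LucidPVG.xml
#   virsh start LucidPVG
```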

Runtime snapshots of an OSOL 134 PV DomU started via virt-manager (ZFS 24 support).

Backport ZFS 24 support to SUSE’s 11.3 Xen hypervisor
1. Add fsimage-zfs-24.patch ([1]) to /usr/src/packages/SOURCES
2. Change xen.spec as follows:

Version: 4.0.0_21091_05
# Old one 6.7
Release: 6.9
Patch44: 21188-grub2-fix.patch
Patch704: fsimage-zfs-24.patch
%patch44 -p1
%patch704 -p1


# rpmbuild -bb ./xen.spec

This time the hypervisor upgrade was run as follows:

linux-y4jf:/usr/src/packages/RPMS/x86_64 # cat
zypper install xen-4.0.0_21091_05-6.9.x86_64.rpm \
xen-devel-4.0.0_21091_05-6.9.x86_64.rpm \
xen-doc-html-4.0.0_21091_05-6.9.x86_64.rpm \
xen-doc-pdf-4.0.0_21091_05-6.9.x86_64.rpm \
xen-kmp-default-4.0.0_21091_05_k2.6.34.0_12-6.9.x86_64.rpm \
xen-kmp-desktop-4.0.0_21091_05_k2.6.34.0_12-6.9.x86_64.rpm \
xen-libs-4.0.0_21091_05-6.9.x86_64.rpm \
xen-tools-4.0.0_21091_05-6.9.x86_64.rpm

linux-y4jf:/usr/src/packages/RPMS/x86_64 # ./
Loading repository data…
Reading installed packages…
Resolving package dependencies…

The following packages are going to be upgraded:
xen xen-devel xen-doc-html xen-doc-pdf xen-kmp-default xen-kmp-desktop xen-libs xen-tools

8 packages to upgrade.
Overall download size: 14.3 MiB. After the operation, additional 10.0 KiB will be used.
Continue? [y/n/?] (y): y
Retrieving package xen-libs-4.0.0_21091_05-6.9.x86_64 (1/8), 694.0 KiB (2.8 MiB unpacked)
Installing: xen-libs-4.0.0_21091_05-6.9 [done]
Retrieving package xen-kmp-desktop-4.0.0_21091_05_k2.6.34.0_12-6.9.x86_64 (2/8), 738.0 KiB (4.2 MiB unpacked)
Installing: xen-kmp-desktop-4.0.0_21091_05_k2.6.34.0_12-6.9 [done]
Retrieving package xen-kmp-default-4.0.0_21091_05_k2.6.34.0_12-6.9.x86_64 (3/8), 719.0 KiB (4.0 MiB unpacked)
Installing: xen-kmp-default-4.0.0_21091_05_k2.6.34.0_12-6.9 [done]

Retrieving package xen-doc-pdf-4.0.0_21091_05-6.9.x86_64 (4/8), 1.3 MiB (1.5 MiB unpacked)
Installing: xen-doc-pdf-4.0.0_21091_05-6.9 [done]
Retrieving package xen-doc-html-4.0.0_21091_05-6.9.x86_64 (5/8), 190.0 KiB (422.0 KiB unpacked)
Installing: xen-doc-html-4.0.0_21091_05-6.9 [done]
Retrieving package xen-4.0.0_21091_05-6.9.x86_64 (6/8), 6.3 MiB (25.0 MiB unpacked)
Installing: xen-4.0.0_21091_05-6.9 [done]
Retrieving package xen-tools-4.0.0_21091_05-6.9.x86_64 (7/8), 3.6 MiB (16.5 MiB unpacked)
Installing: xen-tools-4.0.0_21091_05-6.9 [done]
Additional rpm output:
Updating etc/sysconfig/xend…
Updating etc/sysconfig/xendomains…

Retrieving package xen-devel-4.0.0_21091_05-6.9.x86_64 (8/8), 867.0 KiB (5.0 MiB unpacked)
Installing: xen-devel-4.0.0_21091_05-6.9 [done]



6 Responses to Set up Ubuntu Lucid Server PV DomU at Xen 4.0 (kernel-xen-) on top of openSUSE 11.3

  1. dude says:

    KVM is much faster to deploy.

  2. dbaxps says:

    Are you happy with KVM performance ?

    • dude says:

      I have to say yes. I have recently tried the new RHEV-M/H platform using KVM and the SPICE protocol. Wow, I was and still am impressed. The nice thing about KVM is that running different OSes is easy. I can run OpenBSD, Win 7, and Ubuntu (all 64-bit) with no modification on my Fedora PC.

      I still use Xen and was a big fan but I have gradually moved towards KVM and use it a lot now. For virtualizing Linux on Linux Xen/KVM performance is about the same but Windows is definitely quicker on KVM.

  3. ka_ says:

    I for one am very happy with KVM’s performance!
    That said, KVM is mostly for desktop virtualization, whereas Xen is more for server virtualization. As a desktop user I would never attempt Xen myself, but would in most cases use KVM, for Servers on the other hand, I would be more inclined to use Xen.

  4. Andrew Weber says:

    Great job with this. KVM best for desktop virt is all I’ve used

  5. Awesome little blog you got going on! 🙂
