[Openstack] openstack + libvirt + xl for xen
Xing Lin
linxingnku at gmail.com
Sat Nov 15 04:18:07 UTC 2014
Hi,
I got some help from the xen-devel mailing list. The kernel crash is
caused by qemu, and a patch to qemu is needed to fix it. Currently, the
patch is not included in any of the qemu stable releases (including
2.0.2, 2.1.2, and 2.2.0-rc1). Once I got this working, I was able to
launch a Xen guest domain from Horizon, with Xen + libvirt. I have
written a wiki page describing the steps I took to set up a Xen +
libvirt compute node; it is available at the following link. I hope it
helps others as well.
http://wiki.xenproject.org/wiki/Xen_via_libvirt_for_OpenStack_juno
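For anyone who just wants the gist: the key compute-node change is
pointing nova's libvirt driver at Xen instead of KVM. A minimal sketch
of the relevant nova.conf lines (the wiki page above has the complete
steps):

[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = xen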
Thanks for all the help I have gotten from this community!
-Xing
On Mon, Nov 10, 2014 at 3:16 PM, Xing Lin <linxingnku at gmail.com> wrote:
> Hi,
>
> After compiling and installing xen-4.4 and rebooting the machine, I now
> see the qemu process running in dom0. Unfortunately, the kernel in dom0
> crashes this time when I try to launch an instance from Horizon. I am
> still able to create a Xen guest from the command line with
> virt-install (I need to specify --disk driver_name=qemu).
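>
> For reference, the working command has roughly this shape (the guest
> name and disk path here are placeholders; driver_name=qemu is the part
> that matters):
>
> $ virt-install --connect=xen:/// --name testvm --ram 1024 \
>     --disk path=testvm.img,size=4,driver_name=qemu \
>     --location http://ftp.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/ \
>     --network bridge=virbr0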
>
> dmesg:
>
> [ 9443.130600] blkfront: xvda: flush diskcache: enabled; persistent
> grants: enabled; indirect descriptors: disabled;
> [ 9443.132818] xvda: xvda1
> [ 9444.604489] xen:grant_table: WARNING: g.e. 0x30 still in use!
> [ 9444.604496] deferring g.e. 0x30 (pfn 0xffffffffffffffff)
> [ 9444.604499] xen:grant_table: WARNING: g.e. 0x31 still in use!
> [ 9444.604502] deferring g.e. 0x31 (pfn 0xffffffffffffffff)
> [ 9444.604505] xen:grant_table: WARNING: g.e. 0x32 still in use!
> [ 9444.604508] deferring g.e. 0x32 (pfn 0xffffffffffffffff)
> ==== lots of them ====
> [ 9444.604719] xen:grant_table: WARNING: g.e. 0xe still in use!
> [ 9444.604721] deferring g.e. 0xe (pfn 0xffffffffffffffff)
> [ 9444.604723] xen:grant_table: WARNING: g.e. 0xd still in use!
> [ 9444.604725] deferring g.e. 0xd (pfn 0xffffffffffffffff)
> [ 9448.325408] ------------[ cut here ]------------
> [ 9448.325421] WARNING: CPU: 5 PID: 19902 at /build/buildd/linux-3.13.0/arch/x86/xen/multicalls.c:129 xen_mc_flush+0x1a9/0x1b0()
> [ 9448.325492] CPU: 5 PID: 19902 Comm: sudo Tainted: GF O
> 3.13.0-33-generic #58-Ubuntu
> [ 9448.325494] Hardware name: Dell Inc. PowerEdge R710/00W9X3, BIOS 2.1.15
> 09/02/2010
> [ 9448.325497] 0000000000000009 ffff8802d13d9aa8 ffffffff8171bd04
> 0000000000000000
> [ 9448.325501] ffff8802d13d9ae0 ffffffff810676cd 0000000000000000
> 0000000000000001
> [ 9448.325505] ffff88030418ffe0 ffff88031d6ab180 0000000000000010
> ffff8802d13d9af0
> [ 9448.325509] Call Trace:
> [ 9448.325518] [<ffffffff8171bd04>] dump_stack+0x45/0x56
> [ 9448.325523] [<ffffffff810676cd>] warn_slowpath_common+0x7d/0xa0
> [ 9448.325526] [<ffffffff810677aa>] warn_slowpath_null+0x1a/0x20
> [ 9448.325530] [<ffffffff81004e99>] xen_mc_flush+0x1a9/0x1b0
> [ 9448.325534] [<ffffffff81006b99>] xen_set_pud_hyper+0x109/0x110
> [ 9448.325538] [<ffffffff81006c3b>] xen_set_pud+0x9b/0xb0
> [ 9448.325543] [<ffffffff81177f06>] __pmd_alloc+0xd6/0x110
> [ 9448.325548] [<ffffffff81182218>] move_page_tables+0x678/0x720
> [ 9448.325552] [<ffffffff8117ddb7>] ? vma_adjust+0x337/0x7d0
> [ 9448.325557] [<ffffffff811c23fd>] shift_arg_pages+0xbd/0x1b0
> [ 9448.325560] [<ffffffff81181671>] ? mprotect_fixup+0x151/0x290
> [ 9448.325564] [<ffffffff811c26cb>] setup_arg_pages+0x1db/0x200
> [ 9448.325570] [<ffffffff81213edc>] load_elf_binary+0x41c/0xd80
> [ 9448.325576] [<ffffffff8131d503>] ? ima_get_action+0x23/0x30
> [ 9448.325579] [<ffffffff8131c7e2>] ? process_measurement+0x82/0x2c0
> [ 9448.325584] [<ffffffff811c2f7f>] search_binary_handler+0x8f/0x1b0
> [ 9448.325587] [<ffffffff811c44f7>] do_execve_common.isra.22+0x5a7/0x7e0
> [ 9448.325591] [<ffffffff811c49c6>] SyS_execve+0x36/0x50
> [ 9448.325596] [<ffffffff8172cc99>] stub_execve+0x69/0xa0
> [ 9448.325599] ---[ end trace 53ac16782e751c0a ]---
> [ 9448.347994] ------------[ cut here ]------------
> [ 9448.348004] WARNING: CPU: 1 PID: 19902 at /build/buildd/linux-3.13.0/mm/mmap.c:2736 exit_mmap+0x16b/0x170()
> ==== more messages omitted ====
>
> -Xing
>
>
>
> On Thu, Nov 6, 2014 at 10:10 AM, Xing Lin <linxingnku at gmail.com> wrote:
>
>> Hi Bob,
>>
>> Thanks for your suggestions. My host machine (dom0) is running Ubuntu
>> 14.04, and I installed the Xen hypervisor by running either "apt-get
>> install xen-hypervisor-4.4-amd64" or "apt-get install
>> nova-compute-xen". I checked and did not find xencommons in
>> /etc/init.d; only xen and xendomains exist in that directory.
>> "apt-file search xencommons" does not return any package, and no qemu
>> process is running in dom0.
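>>
>> For anyone reproducing this, the checks were nothing exotic (shown
>> here as a sketch):
>>
>> $ ls /etc/init.d | grep -i xen   # only xen and xendomains show up
>> $ apt-file search xencommons     # returns nothing
>> $ ps aux | grep [q]emu           # no dom0 qemu helper process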
>>
>> I am not sure how to move forward. I will try to compile and install
>> xen from source (roughly the sequence sketched below) and see whether
>> I can get xencommons installed. If you have any suggestions, please
>> let me know. Thanks,
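>>
>> The build I plan to try is roughly the standard sequence for the Xen
>> 4.4 tree (a sketch, not verified yet; dependency packages omitted):
>>
>> $ ./configure
>> $ make dist
>> $ sudo make install
>> $ sudo update-rc.d xencommons defaults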
>>
>>
>> I will try to follow the two links below; in their installations, they
>> seem to have xencommons installed.
>> http://wiki.sebeka.k12.mn.us/virt:xen_4.4_14.04
>> http://pravinchavan.wordpress.com/2013/08/14/xen-hypervisor-setup/
>>
>> -Xing
>>
>>
>>
>>
>> On Thu, Nov 6, 2014 at 6:09 AM, Bob Ball <bob.ball at citrix.com> wrote:
>>
>>> Hi Xing,
>>>
>>>
>>>
>>> It may be that the dom0 qemu helper process is not running. A very
>>> similar issue was tracked down to this cause in the thread at
>>> http://lists.xen.org/archives/html/xen-users/2014-05/msg00134.html
>>>
>>>
>>>
>>> Ian C pointed out that this helper process is normally started by the
>>> xencommons script.
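>>>
>>> On systems where the script is present, starting it by hand is a
>>> quick way to confirm the theory (a sketch; assumes the standard sysv
>>> init script is installed):
>>>
>>> # /etc/init.d/xencommons start
>>> # ps aux | grep [q]emu    # the dom0 qemu helper should now appear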
>>>
>>>
>>>
>>> I hope this solves the issue.
>>>
>>>
>>>
>>> Thanks,
>>>
>>>
>>>
>>> Bob
>>>
>>>
>>>
>>> From: Xing Lin [mailto:linxingnku at gmail.com]
>>> Sent: 05 November 2014 21:38
>>> To: openstack at lists.openstack.org
>>> Subject: [Openstack] openstack + libvirt + xl for xen
>>>
>>>
>>>
>>> Hi,
>>>
>>>
>>>
>>> I am aware that Xen via libvirt has Group C support, but since I am
>>> not able to install the XenServer ISO on the compute machines I have,
>>> I have to consider using Xen with libvirt. I have three nodes, each
>>> running Ubuntu 14.04. I followed the instructions to install Juno on
>>> Ubuntu 14.04, and it works (I can create instances from Horizon) when
>>> I use KVM as the hypervisor on the compute node. However, if I switch
>>> to Xen as the hypervisor, it fails to create instances. Note that I am
>>> able to create a Xen guest with virt-install from the command line,
>>> using the following command. So, I believe I am quite close to getting
>>> libvirt + xl to work for OpenStack. Any help will be highly
>>> appreciated! Thanks,
>>>
>>>
>>>
>>> "$ virt-install --connect=xen:/// --name u14.04.3 --ram 1024 --disk
>>> u14.04.3.img,size=4 --location
>>> http://ftp.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/
>>> --network bridge=virbr0"
>>>
>>>
>>>
>>> libxl log when creating an instance from Horizon (failed):
>>>
>>> ------------------
>>>
>>> libxl: debug: libxl_create.c:1342:do_domain_create: ao 0x7f4e5c001890:
>>> create: how=(nil) callback=(nil) poller=0x7f4e5c000c50
>>>
>>> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
>>> vdev=sda spec.backend=qdisk
>>>
>>> libxl: debug: libxl_create.c:797:initiate_domain_create: running
>>> bootloader
>>>
>>> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
>>> vdev=(null) spec.backend=qdisk
>>>
>>> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
>>> vdev=xvda spec.backend=qdisk
>>>
>>> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch
>>> w=0x7f4e5c001e20: deregister unregistered
>>>
>>> libxl: debug: libxl.c:2712:local_device_attach_cb: locally attaching
>>> qdisk /dev/xvda
>>>
>>> libxl: error: libxl_device.c:1224:libxl__wait_for_backend: Backend
>>> /local/domain/0/backend/qdisk/0/51712 not ready
>>>
>>> libxl: error: libxl_bootloader.c:405:bootloader_disk_attached_cb: failed
>>> to attach local disk for bootloader execution
>>>
>>> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch
>>> w=0x7f4e5c001f48: deregister unregistered
>>>
>>> libxl: error: libxl_bootloader.c:276:bootloader_local_detached_cb:
>>> unable to detach locally attached disk
>>>
>>> libxl: error: libxl_create.c:1022:domcreate_rebuild_done: cannot
>>> (re-)build domain: -3
>>>
>>> libxl: debug: libxl_event.c:1591:libxl__ao_complete: ao 0x7f4e5c001890:
>>> complete, rc=-3
>>>
>>> libxl: debug: libxl_create.c:1356:do_domain_create: ao 0x7f4e5c001890:
>>> inprogress: poller=0x7f4e5c000c50, flags=ic
>>>
>>> libxl: debug: libxl_event.c:1563:libxl__ao__destroy: ao 0x7f4e5c001890:
>>> destroy
>>>
>>> xc: debug: hypercall buffer: total allocations:24 total releases:24
>>>
>>> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
>>>
>>> xc: debug: hypercall buffer: cache current size:2
>>>
>>> xc: debug: hypercall buffer: cache hits:20 misses:2 toobig:2
>>>
>>> libxl: debug: libxl_create.c:1342:do_domain_create: ao 0x7f4e540083e0:
>>> create: how=(nil) callback=(nil) poller=0x7f4e54007fb0
>>>
>>> ....
>>>
>>> --------------------
>>>
>>>
>>>
>>>
>>>
>>> libxl log when creating an instance with virt-install from the
>>> command line (succeeded):
>>>
>>> --------------------
>>>
>>> libxl: debug: libxl_create.c:1342:do_domain_create: ao 0x7f4e641b3ba0:
>>> create: how=(nil) callback=(nil) poller=0x7f4e640a94d0
>>>
>>> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
>>> vdev=xvda spec.backend=unknown
>>>
>>> libxl: debug: libxl_device.c:197:disk_try_backend: Disk vdev=xvda,
>>> backend phy unsuitable as phys path not a block device
>>>
>>> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk
>>> vdev=xvda, using backend qdisk
>>>
>>> libxl: debug: libxl_create.c:797:initiate_domain_create: running
>>> bootloader
>>>
>>> libxl: debug: libxl_bootloader.c:327:libxl__bootloader_run: no
>>> bootloader configured, using user supplied kernel
>>>
>>> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch
>>> w=0x7f4e641cd2f8: deregister unregistered
>>>
>>> libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
>>> placement candidate found: nr_nodes=1, nr_cpus=8, nr_vcpus=9,
>>> free_memkb=1293
>>>
>>> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
>>> candidate with 1 nodes, 8 cpus and 1293 KB free selected
>>>
>>> domainbuilder: detail: xc_dom_allocate: cmdline="method=
>>> http://ftp.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/",
>>> features="(null)"
>>>
>>> libxl: debug: libxl_dom.c:357:libxl__build_pv: pv kernel mapped 0 path
>>> /var/lib/xen/virtinst-vmlinuz.mBNvPH
>>>
>>> domainbuilder: detail: xc_dom_kernel_file:
>>> filename="/var/lib/xen/virtinst-vmlinuz.mBNvPH"
>>>
>>> domainbuilder: detail: xc_dom_malloc_filemap : 5643 kB
>>>
>>> domainbuilder: detail: xc_dom_ramdisk_file:
>>> filename="/var/lib/xen/virtinst-initrd.gz.Br5nZP"
>>>
>>> domainbuilder: detail: xc_dom_malloc_filemap : 20758 kB
>>>
>>> domainbuilder: detail: xc_dom_boot_xen_init: ver 4.4, caps
>>> xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>>>
>>> domainbuilder: detail: xc_dom_parse_image: called
>>>
>>> domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary
>>> loader ...
>>>
>>> domainbuilder: detail: loader probe failed
>>>
>>> domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader
>>> ...
>>>
>>> domainbuilder: detail: xc_dom_malloc : 18898 kB
>>>
>>> domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x5761bf -> 0x12749b8
>>>
>>> domainbuilder: detail: loader probe OK
>>>
>>> ...
>>>
>>> ------------------
>>>
>>>
>>>
>>> -Xing
>>>
>>>
>>>
>>
>>
>