[Openstack-operators] Xen With Openstack
Roberto Dalas Z. Benavides
betodalas at gmail.com
Thu Oct 27 08:22:17 UTC 2011
Thanks Giuseppe. I will try it.
2011/10/26 Giuseppe Civitella <giuseppe.civitella at gmail.com>
> I'm currently using Diablo
> (2011.3-nova-milestone-tarball:tarmac-20110922115702-k9nkvxqzhj130av2)
> and it works with XenServer 5.6 (it should work with XCP 1.1 too).
> I did not try Essex yet, sorry.
>
>
>
> 2011/10/26 Roberto Dalas Z. Benavides <betodalas at gmail.com>:
> > My OpenStack version is
> >
> > 2012.1-dev (2012.1-LOCALBRANCH:LOCALREVISION)
> >
> > Is that correct?
> >
> > Which version is more stable?
> >
> > 2011/10/26 Giuseppe Civitella <giuseppe.civitella at gmail.com>
> >>
> >> If imagem.vhd is a gzipped tar archive containing a file called
> >> image.vhd, your command should work this way:
> >> glance add name=lucid_ovf disk_format=vhd container_format=ovf
> >> is_public=True < imagem.vhd
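> >>
> >> A quick way to confirm the upload afterwards, assuming the same 2011-era
> >> glance CLI, is to list the registered images and check their disk and
> >> container formats:
> >>
> >> glance index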
> >>
> >>
> >>
> >> 2011/10/26 Roberto Dalas Z. Benavides <betodalas at gmail.com>:
> >> > Can I use this command?
> >> >
> >> > glance add name=lucid_ovf disk_format=vhd container_format=ovf
> >> > is_public=True < imagem.vhd
> >> >
> >> > 2011/10/26 Giuseppe Civitella <giuseppe.civitella at gmail.com>
> >> >>
> >> >> It has to be a vhd image.
> >> >> You can try XenConverter to get a vhd from a vmdk.
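> >> >>
> >> >> If you prefer doing the conversion from the command line, a minimal
> >> >> sketch assuming qemu-img is installed (qemu-img calls the vhd format
> >> >> "vpc"):
> >> >>
> >> >> qemu-img convert -f vmdk -O vpc imagem.vmdk image.vhd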
> >> >>
> >> >> Cheers,
> >> >> Giuseppe
> >> >>
> >> >> 2011/10/26 Roberto Dalas Z. Benavides <betodalas at gmail.com>:
> >> >> > I have a vmdk image and am running the following:
> >> >> > glance add name=lucid_ovf disk_format=vhd container_format=ovf
> >> >> > is_public=True < imagem.vmdk
> >> >> > Is this right, or does it need to be a vhd image?
> >> >> >
> >> >> > Thanks
> >> >> >
> >> >> > 2011/10/26 Giuseppe Civitella <giuseppe.civitella at gmail.com>
> >> >> >>
> >> >> >> Yes, the nova-compute service has to run on a domU.
> >> >> >> You need to install XenServer's plugins on dom0 (have a look here:
> >> >> >> http://wiki.openstack.org/XenServerDevelopment).
> >> >> >> The domU will tell the dom0 to deploy images via xenapi.
> >> >> >> You need to extract your vhd image, rename it image.vhd and then gzip it.
> >> >> >> The Glance plugin on XenServer expects vhd images to be gzipped, so if you
> >> >> >> don't compress them the deploy process will fail.
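> >> >> >>
> >> >> >> A minimal sketch of that packaging step, assuming your converted image
> >> >> >> is called imagem.vhd and standard GNU tar/gzip are available:
> >> >> >>
> >> >> >> mv imagem.vhd image.vhd
> >> >> >> tar -czf image.tgz image.vhd
> >> >> >> glance add name=lucid_ovf disk_format=vhd container_format=ovf is_public=True < image.tgz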
> >> >> >>
> >> >> >> Cheers,
> >> >> >> Giuseppe
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides <betodalas at gmail.com>:
> >> >> >> > One doubt: does the new compute server have to run inside a XenServer
> >> >> >> > virtual machine?
> >> >> >> > And does the image actually have to be gzipped, or can Glance take it
> >> >> >> > as a plain vhd?
> >> >> >> >
> >> >> >> > 2011/10/26 Giuseppe Civitella <giuseppe.civitella at gmail.com>
> >> >> >> >>
> >> >> >> >> Hi,
> >> >> >> >>
> >> >> >> >> did you check what happens on XenServer's dom0?
> >> >> >> >> Are there some pending gzip processes?
> >> >> >> >> Deployment of vhd images can fail if they are not properly created.
> >> >> >> >> You can find the right procedure here:
> >> >> >> >> https://answers.launchpad.net/nova/+question/161683
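> >> >> >> >>
> >> >> >> >> On dom0 a quick way to look for stuck transfers, assuming a standard
> >> >> >> >> XenServer shell, is simply:
> >> >> >> >>
> >> >> >> >> ps aux | grep gzip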
> >> >> >> >>
> >> >> >> >> Hope it helps
> >> >> >> >> Giuseppe
> >> >> >> >>
> >> >> >> >>
> >> >> >> >>
> >> >> >> >> 2011/10/26 Roberto Dalas Z. Benavides <betodalas at gmail.com>:
> >> >> >> >> > Hello, I installed Nova Compute and Glance on a separate server. I'm
> >> >> >> >> > trying to create a VM on Xen through the Dashboard. The panel stays in
> >> >> >> >> > pending status, and the logs show that XenServer is picking up the
> >> >> >> >> > image from Glance, but the machine is not created. The log follows:
> >> >> >> >> >
> >> >> >> >> > [20111024T14:18:04.052Z|debug|xenserver-opstack|746566|Async.host.call_plugin R:cdbc860b307a|audit] Host.call_plugin host = '9b3736e1-18ef-4147-8564-a9c64ed34f1b (xenserver-opstack)'; plugin = 'glance'; fn = 'download_vhd'; args = [
> >> >> >> >> > params: (dp0
> >> >> >> >> > S'auth_token'
> >> >> >> >> > p1
> >> >> >> >> > NsS'glance_port'
> >> >> >> >> > p2
> >> >> >> >> > I9292
> >> >> >> >> > sS'uuid_stack'
> >> >> >> >> > p3
> >> >> >> >> > (lp4
> >> >> >> >> > S'a343ec2f-ad1c-4632-b7d9-1add8051c241'
> >> >> >> >> > p5
> >> >> >> >> > aS'4b27c364-6626-4541-896a-65fb0d0b01d3'
> >> >> >> >> > p6
> >> >> >> >> > asS'image_id'
> >> >> >> >> > p7
> >> >> >> >> > S'4'
> >> >> >> >> > p8
> >> >> >> >> > sS'glance_host'
> >> >> >> >> > p9
> >> >> >> >> > S'10.168.1.30'
> >> >> >> >> > p10
> >> >> >> >> > sS'sr_path'
> >> >> >> >> > p11
> >> >> >> >> > S'/var/run/sr-mount/536897fe-f37b-c2b0-6eb1-4763bb3bd667'
> >> >> >> >> > p12
> >> >> >> >> > s. ]
> >> >> >> >> > [20111024T14:18:24.251Z|info|xenserver-opstack|746637|Async.host.call_plugin R:223f6eebc13d|dispatcher] spawning a new thread to handle the current task (trackid=a043138728544674d13b8d4a8ff673f7)
> >> >> >> >> > [20111024T14:18:24.251Z|debug|xenserver-opstack|746637|Async.host.call_plugin R:223f6eebc13d|audit] Host.call_plugin host = '9b3736e1-18ef-4147-8564-a9c64ed34f1b (xenserver-opstack)'; plugin = 'xenhost'; fn = 'host_data'; args = [ ]
> >> >> >> >> > [20111024T14:18:24.402Z|debug|xenserver-opstack|746640 unix-RPC||cli] xe host-list username=root password=null
> >> >> >> >> >
> >> >> >> >> > Here is the nova.conf:
> >> >> >> >> >
> >> >> >> >> > --dhcpbridge_flagfile=/etc/nova/nova.conf
> >> >> >> >> > --dhcpbridge=/usr/bin/nova-dhcpbridge
> >> >> >> >> > --logdir=/var/log/nova
> >> >> >> >> > --state_path=/var/lib/nova
> >> >> >> >> > --lock_path=/var/lock/nova
> >> >> >> >> > --verbose
> >> >> >> >> >
> >> >> >> >> > #--libvirt_type=xen
> >> >> >> >> > --s3_host=10.168.1.32
> >> >> >> >> > --rabbit_host=10.168.1.32
> >> >> >> >> > --cc_host=10.168.1.32
> >> >> >> >> > --ec2_url=http://10.168.1.32:8773/services/Cloud
> >> >> >> >> > --fixed_range=192.168.1.0/24
> >> >> >> >> > --network_size=250
> >> >> >> >> > --ec2_api=10.168.1.32
> >> >> >> >> > --routing_source_ip=10.168.1.32
> >> >> >> >> > --verbose
> >> >> >> >> > --sql_connection=mysql://root:status64@10.168.1.32/nova
> >> >> >> >> > --network_manager=nova.network.manager.FlatManager
> >> >> >> >> > --glance_api_servers=10.168.1.32:9292
> >> >> >> >> > --image_service=nova.image.glance.GlanceImageService
> >> >> >> >> > --flat_network_bridge=xenbr0
> >> >> >> >> > --connection_type=xenapi
> >> >> >> >> > --xenapi_connection_url=https://10.168.1.31
> >> >> >> >> > --xenapi_connection_username=root
> >> >> >> >> > --xenapi_connection_password=status64
> >> >> >> >> > --reboot_timeout=600
> >> >> >> >> > --rescue_timeout=86400
> >> >> >> >> > --resize_confirm_window=86400
> >> >> >> >> > --allow_resize_to_same_host
> >> >> >> >> >
> >> >> >> >> > Nova logs information in compute.log showing cpu and memory details
> >> >> >> >> > about the XenServer, but it does not create any machines.
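> >> >> >> >> >
> >> >> >> >> > One thing worth checking from the nova-compute domU is that the
> >> >> >> >> > xenapi_connection_* flags above really reach dom0; a minimal check,
> >> >> >> >> > assuming the xe client is installed in the domU:
> >> >> >> >> >
> >> >> >> >> > xe -s 10.168.1.31 -u root -pw status64 host-list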
> >> >> >> >> >
> >> >> >> >> > Thanks
> >> >> >> >> > _______________________________________________
> >> >> >> >> > Openstack-operators mailing list
> >> >> >> >> > Openstack-operators at lists.openstack.org
> >> >> >> >> >
> >> >> >> >> >
> >> >> >> >> >
> >> >> >> >> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >> >> >> >> >
> >> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >
> >> >> >
> >> >
> >> >
> >
> >
>