[Openstack] Re: LXC on Grizzly
hzguanqiang at corp.netease.com
Mon Aug 12 02:36:31 UTC 2013
Hi Yong,
It seems you didn't mount all of the 'cpuacct', 'devices', and 'memory' cgroup controllers. You can check whether one of them is missing by looking in the cgroup directory "/sys/fs/cgroup", or by running the "mount" command to see which cgroups are currently mounted.
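To make the check concrete, here is a minimal sketch (my own illustration, not nova or libvirt code) that parses the text of /proc/mounts and reports which of the controllers libvirt's LXC driver complains about are not mounted:

```python
# Minimal sketch (not nova/libvirt code): given the contents of
# /proc/mounts, report which required cgroup v1 controllers are missing.
REQUIRED = {"cpuacct", "devices", "memory"}

def missing_controllers(mounts_text):
    """Return the subset of REQUIRED controllers not mounted anywhere."""
    mounted = set()
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts format: device mountpoint fstype options dump pass
        if len(fields) >= 4 and fields[2] == "cgroup":
            mounted.update(fields[3].split(","))
    return REQUIRED - mounted

if __name__ == "__main__":
    with open("/proc/mounts") as f:
        missing = missing_controllers(f.read())
    print("missing controllers:", sorted(missing) or "none")
```

If this prints any missing controllers, libvirt will refuse to start LXC domains with exactly the error shown below.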
Thanks
On 2013-08-11 15:31, 刁民 wrote:
Hello,
I have a problem getting LXC running on CentOS 6.4 with OpenStack Grizzly.
I did the following:
1. Changed nova.conf to set "libvirt_type=lxc".
2. Compiled nbd and qemu-nbd.
3. Created an image and updated it with "--property hypervisor_type=lxc".
4. Ran "mount none -t cgroup -o cpuacct,memory,devices,cpu,freezer,blkio /cgroup", based on https://wiki.openstack.org/wiki/LXC.
When I launch an instance, something still goes wrong. I got the following error message on the nova-compute node (I am not quite sure whether there are other error messages or not):
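For completeness, step 4 as copy-pasteable commands (assuming a cgroup v1 kernel and /cgroup as the mount point; the /etc/fstab entry is an optional suggestion of mine for persisting the mount across reboots, not something from the wiki page):

```shell
# Step 4: mount the cgroup v1 controllers in one hierarchy at /cgroup
# (assumed mount point; create it first if it does not exist)
mkdir -p /cgroup
mount none -t cgroup -o cpuacct,memory,devices,cpu,freezer,blkio /cgroup

# Optional: persist across reboots with an /etc/fstab entry (assumed form):
# none /cgroup cgroup cpuacct,memory,devices,cpu,freezer,blkio 0 0
```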
2013-08-11 14:51:52.556 ERROR nova.compute.manager [req-d04d50c4-e77f-4b84-af1d-912a3b3e1f00 70fdefb4da394e789641bb96e40cb649 20220930f9e74cfab0185eb8c7623fb7] [instance: c0211c19-3385-4fac-928a-5b821746cd82] Instance failed to spawn
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82] Traceback (most recent call last):
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1103, in _spawn
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]     block_device_info)
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1528, in spawn
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]     block_device_info)
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2444, in _create_domain_and_network
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]     domain = self._create_domain(xml, instance=instance)
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2405, in _create_domain
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]     domain.createWithFlags(launch_flags)
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]     rv = execute(f,*args,**kwargs)
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]     rv = meth(*args,**kwargs)
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in createWithFlags
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2013-08-11 14:51:52.556 4640 TRACE nova.compute.manager [instance: c0211c19-3385-4fac-928a-5b821746cd82] libvirtError: internal error The 'cpuacct', 'devices' & 'memory' cgroups controllers must be mounted
2013-08-11 14:52:40.804 ERROR nova.compute.manager [req-d04d50c4-e77f-4b84-af1d-912a3b3e1f00 70fdefb4da394e789641bb96e40cb649 20220930f9e74cfab0185eb8c7623fb7] [instance: c0211c19-3385-4fac-928a-5b821746cd82] Error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 848, in _run_instance\n set_access_ip=set_access_ip)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1107, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1103, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1528, in spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2444, in _create_domain_and_network\n domain = self._create_domain(xml, instance=instance)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2405, in _create_domain\n domain.createWithFlags(launch_flags)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit\n result = proxy_call(self._autowrap, f, *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call\n rv = execute(f,*args,**kwargs)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker\n rv = meth(*args,**kwargs)\n', ' File "/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in createWithFlags\n if ret == -1: raise libvirtError (\'virDomainCreateWithFlags() failed\', dom=self)\n', "libvirtError: internal error The 'cpuacct', 'devices' & 'memory' cgroups controllers must be mounted\n"]
But I did mount the subsystems on /cgroup. Has anybody seen the same problem, and is there any solution?
Thanks,
Yoon
------------------
Best regards!
GuanQiang