Issue with launching instance with OVS-DPDK
Sean Mooney
smooney at redhat.com
Tue Jan 29 21:46:15 UTC 2019
On Tue, 2019-01-29 at 18:05 +0000, David Lake wrote:
> Answers <DL> in-line </DL>
>
> Thanks
>
> David
>
> -----Original Message-----
> From: Sean Mooney <smooney at redhat.com>
> Sent: 29 January 2019 14:55
> To: Lake, David (PG/R - Elec Electronic Eng) <d.lake at surrey.ac.uk>; openstack-dev at lists.openstack.org
> Cc: Ge, Chang Dr (Elec Electronic Eng) <c.ge at surrey.ac.uk>
> Subject: Re: Issue with launching instance with OVS-DPDK
>
> On Mon, 2019-01-28 at 13:17 +0000, David Lake wrote:
> > Hello
> >
> > I’ve built an OpenStack all-in-one using OVS-DPDK via DevStack.
> >
> > I can launch instances which use the “m1.small” flavour (which I have
> > modified to include hw:mem_page_size=large as per the DPDK instructions), but as soon as I try to launch anything
> > larger than m1.small, I get this error:
> >
> > Jan 28 12:56:52 localhost nova-conductor: ERROR nova.scheduler.utils
> > [None req-917cd3b9-8ce6-41af-8d44-045002512c91 admin admin]
> > [instance: 25cfee28-08e9-419c-afdb-4d0fe515fb2a]
> > Error from last host: localhost (node localhost): [u'Traceback (most recent call last):\n', u' File
> > "/opt/stack/nova/nova/compute/manager.py", line 1935, in _do_build_and_run_instance\n filter_properties,
> > request_spec)\n', u' File "/opt/stack/nova/nova/compute/manager.py", line 2215, in _build_and_run_instance\n
> > instance_uuid=instance.uuid, reason=six.text_type(e))\n',
> > u'RescheduledException: Build of instance 25cfee28-08e9-419c-afdb-4d0fe515fb2a
> > was re-scheduled: internal error: qemu unexpectedly closed the monitor:
> > 2019-01-28T12:56:48.127594Z qemu-kvm: -chardev
> > socket,id=charnet0,path=/var/run/openvswitch/vhu46b3c508-f8,server:
> > info: QEMU waiting for connection on:
> > disconnected:unix:/var/run/openvswitch/vhu46b3c508-f8,server\n2019-01-28T12:56:49.251071Z
> > qemu-kvm: -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/4-instance-00000005,share=yes,size=4294967296,host-nodes=0,policy=bind:
> > os_mem_prealloc: Insufficient free host memory pages available to
> > allocate guest RAM\n']
> >
> >
> > My Hypervisor is reporting 510.7GB of RAM and 61 vCPUs.
>
> How much of that RAM did you allocate as hugepages?
>
> <DL> OVS_NUM_HUGEPAGES=3072 </DL>
OK, so you used networking-ovs-dpdk's ability to automatically
allocate 2MB hugepages at runtime; this should have allocated 6GB
of hugepages per NUMA node.
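For reference, a minimal local.conf sketch of the relevant settings
(OVS_NUM_HUGEPAGES and OVS_ALLOCATE_HUGEPAGES come from the
networking-ovs-dpdk devstack plugin; treat the values shown here as
assumptions, as defaults can differ between releases):

    [[local|localrc]]
    # 2MB hugepages to allocate per NUMA node when stacking
    OVS_NUM_HUGEPAGES=3072
    # leave True to let the plugin allocate pages at runtime;
    # set False if you allocate them on the kernel command line instead
    OVS_ALLOCATE_HUGEPAGES=True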
>
> Can you provide the output of cat /proc/meminfo?
>
> <DL>
>
> MemTotal: 526779552 kB
> MemFree: 466555316 kB
> MemAvailable: 487218548 kB
> Buffers: 2308 kB
> Cached: 22962972 kB
> SwapCached: 0 kB
> Active: 29493384 kB
> Inactive: 13344640 kB
> Active(anon): 20826364 kB
> Inactive(anon): 522012 kB
> Active(file): 8667020 kB
> Inactive(file): 12822628 kB
> Unevictable: 43636 kB
> Mlocked: 47732 kB
> SwapTotal: 4194300 kB
> SwapFree: 4194300 kB
> Dirty: 20 kB
> Writeback: 0 kB
> AnonPages: 19933028 kB
> Mapped: 171680 kB
> Shmem: 1450564 kB
> Slab: 1224444 kB
> SReclaimable: 827696 kB
> SUnreclaim: 396748 kB
> KernelStack: 69392 kB
> PageTables: 181020 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 261292620 kB
> Committed_AS: 84420252 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed: 1352128 kB
> VmallocChunk: 34154915836 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 5365760 kB
> CmaTotal: 0 kB
> CmaFree: 0 kB
> HugePages_Total: 6144
Since we have 6144 pages in total and OVS_NUM_HUGEPAGES was set to 3072,
this indicates the host has 2 NUMA nodes.
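(If numactl is installed you can confirm the node count directly:

    numactl --hardware | grep available
    # prints something like: available: 2 nodes (0-1)
)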
> HugePages_Free: 2048
You currently have 4G of 2MB hugepages free, but these free pages
are also split across the NUMA nodes. The qemu command line you
provided, which I have copied below, is trying to allocate 4G of
hugepage memory from a single host NUMA node:
qemu-kvm: -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/4-instance-00000005,share=yes,size=4294967296,host-nodes=0,policy=bind:
os_mem_prealloc: Insufficient free host memory pages available to
allocate guest RAM
As a result the VM is failing to boot, because nova cannot create it
within a single NUMA node. You can verify the per-node availability
as shown below.
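A quick check of the per-node 2MB hugepage pools (standard sysfs
layout assumed):

    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages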
If you set hw:numa_nodes=2 this VM would likely boot (see the sketch
below), but since you have a 512G host you should be able to increase
OVS_NUM_HUGEPAGES to something like OVS_NUM_HUGEPAGES=14336, which
will allocate 56G of 2MB hugepages in total (28G per NUMA node).
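A minimal sketch of the flavor changes (hw:mem_page_size and
hw:numa_nodes are the standard nova extra specs; substitute your own
flavor name for m1.small):

    openstack flavor set m1.small --property hw:mem_page_size=large
    openstack flavor set m1.small --property hw:numa_nodes=2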
If you want to allocate more than about 96G of hugepages, you should
set OVS_ALLOCATE_HUGEPAGES=False and instead allocate the hugepages
on the kernel command line using 1G hugepages,
e.g. default_hugepagesz=1G hugepagesz=1G hugepages=480.
This is because it takes ovs-dpdk a long time to scan all the
hugepages on start-up.
Setting default_hugepagesz=1G hugepagesz=1G hugepages=480 will leave 32G of RAM for the host.
If it is a compute node and not a controller you can safely reduce
the free host RAM to 16G,
e.g. default_hugepagesz=1G hugepagesz=1G hugepages=496.
I would not advise allocating much more than 496G of hugepages,
as the qemu emulator overhead can easily reach tens of gigabytes
if you have 50+ VMs running. See the grub example below for applying
these arguments.
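As a sketch, on a RHEL/CentOS-style host the kernel arguments can be
applied with grubby (an assumption about your distro; others use
/etc/default/grub plus grub2-mkconfig instead):

    # append the hugepage arguments to every installed kernel, then reboot
    sudo grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=480"
    sudo reboot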
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> DirectMap4k: 746304 kB
> DirectMap2M: 34580480 kB
> DirectMap1G: 502267904 kB
> [stack@localhost devstack]$
>
> </DL>
>
> >
> > Build is the latest git clone of DevStack.
> >
> > Thanks
> >
> > David
>
>