[openstack-dev] [infra][nova] Running NFV tests in CI

Sean Mooney work at seanmooney.info
Thu Jul 26 11:22:45 UTC 2018


On 24 July 2018 at 19:47, Clark Boylan <cboylan at sapwetik.org> wrote:
>
> On Tue, Jul 24, 2018, at 10:21 AM, Artom Lifshitz wrote:
> > On Tue, Jul 24, 2018 at 12:30 PM, Clark Boylan <cboylan at sapwetik.org> wrote:
> > > On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote:
> > >> Hey all,
> > >>
> > >> tl;dr Humbly requesting a handful of nodes to run NFV tests in CI
> > >>
> > >> Intel has their NFV tests tempest plugin [1] and manages a third party
> > >> CI for Nova. Two of the cores on that project (Stephen Finucane and
> > >> Sean Mooney) have now moved to Red Hat, but the point still stands
> > >> that there's a need and a use case for testing things like NUMA
> > >> topologies, CPU pinning and hugepages.
> > >>
> > >> At Red Hat, we also have a similar tempest plugin project [2] that we
> > >> use for downstream whitebox testing. The scope is a bit bigger than
> > >> just NFV, but the main use case is still testing NFV code in an
> > >> automated way.
> > >>
> > >> Given that there's a clear need for this sort of whitebox testing, I
> > >> would like to humbly request a handful of nodes (in the 3 to 5 range)
> > >> from infra to run an "official" Nova NFV CI. The code doing the
> > >> testing would initially be the current Intel plugin, but we could have
> > >> a separate discussion about keeping "Intel" in the name or forking
> > >> and/or renaming it to something more vendor-neutral.
> > >
> > > The way you request nodes from Infra is through your Zuul configuration. Add jobs to a project to run tests on the node labels that you want.
> >
> > Aha, thanks, I'll look into that. I was coming from a place of
> > complete ignorance about infra.
> > >
> > > I'm guessing this process doesn't work for NFV tests because you have specific hardware requirements that are not met by our current VM resources?
> > > If that is the case it would probably be best to start by documenting what is required and where the existing VM resources fall
> > > short.
> >
> > Well, it should be possible to do most of what we'd like with nested
> > virt and virtual NUMA topologies, though things like hugepages will
> > need host configuration, specifically the kernel boot command [1]. Is
> > that possible with the nodes we have?
>
> https://docs.openstack.org/infra/manual/testing.html attempts to give you an idea for what is currently available via the test environments.
>
>
> Nested virt has historically been painful because not all clouds support it and those that do did not do so in a reliable way (VMs and possibly hypervisors would crash). This has gotten better recently as nested virt is something more people have an interest in getting working but it is still hit and miss particularly as you use newer kernels in guests. I think if we can continue to work together with our clouds (thank you limestone, OVH, and vexxhost!) we may be able to work out nested virt that is redundant across multiple clouds. We will likely need individuals willing to keep caring for that though and debug problems when the next release of your favorite distro shows up. Can you get by with qemu or is nested virt required?

For what it's worth, the Intel NFV CI has always run with nested virt,
from when we first set it up on Ubuntu 12.04, through the time we ran it
on Fedora 20 and Fedora 21, and it continues to use nested virt on
Ubuntu 16.04.
We have never had any issues with nested virt, but the key to using it
correctly is that you should always set the nova cpu mode to
host-passthrough when you use nested virt.
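
That is just the standard libvirt options in nova.conf on the host
cloud's compute nodes; a minimal sketch (the option names are the current
nova ones, the values are what we used):

    [libvirt]
    virt_type = kvm
    # expose the host CPU unmodified to the host VMs so that nested
    # KVM inside them behaves the same as it would on bare metal
    cpu_mode = host-passthrough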

Because of how we currently do CPU pinning, hugepages and NUMA
affinity in nova today, this testing has a hard requirement on
running KVM in devstack, which means we have a hard requirement for
nested virt.
There are ways around that, but the nova core team has previously
expressed the view that adding the code changes required to allow the
use of QEMU is not warranted for CI, since we would then not be testing
the normal config:
these features are normally only used when performance matters,
which means you will be using KVM, not QEMU.
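
Concretely, the devstack running inside the host VM has to be able to do
the equivalent of the following (LIBVIRT_TYPE is the standard devstack
setting; it only works if the host VM itself exposes VMX/SVM, i.e. nested
virt is available):

    # local.conf of the devstack inside the host VM
    [[local|localrc]]
    LIBVIRT_TYPE=kvm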

I have tried to test ovs-dpdk in the upstream CI on three occasions in
the past (this being the most recent:
https://review.openstack.org/#/c/433491/), but without nested virt that
didn't get very far.

>
> As for hugepages, I've done a quick survey of cpuinfo across our clouds and all seem to have pse available but not all have pdpe1gb available. Are you using 1GB hugepages? Keep in mind that the test VMs only have 8GB of memory total. As for booting with special kernel parameters you can have your job make those modifications to the test environment then reboot the test environment within the job. There is some Zuul specific housekeeping that needs to be done post reboot, we can figure that out if we decide to go down this route. Would your setup work with 2M hugepages?

The host VM does not need to be backed by hugepages at all; it does
need to have the CPU flags set that allow hugepages to be allocated
within the host VM.
We can test 98% of things with 2MB hugepages, so we do not need to
reboot or allocate them via the kernel command line, as 2MB hugepages
can be allocated at runtime.
networking-ovs-dpdk does this for the NFV CI today and it works fine
for testing everything except booting VMs with 1G pages.
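
Allocating the 2MB pages at runtime inside the host VM is just a
sysctl/sysfs write, roughly like this (the page count here is
illustrative, not what the CI actually used):

    # reserve 1024 x 2MB hugepages at runtime, no reboot needed
    sudo sysctl -w vm.nr_hugepages=1024
    # mount hugetlbfs (if not already mounted) so qemu/ovs-dpdk can use it
    sudo mkdir -p /dev/hugepages
    sudo mount -t hugetlbfs hugetlbfs /dev/hugepages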

The nova code paths we want to test are identical for 2MB and 1G
hugepages, so we really don't need to boot VMs with 1G hugepages inside
the host VM.
To be able to test that we can boot VMs with the pdpe1gb CPU flag set,
we would need a host VM that has that flag, but we could make that
test conditional on the flag being present.
Again, the logic for setting an extra CPU flag is identical regardless
of the flag, so we could probably test that use case with another flag
such as PCID.
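
As a sketch of what "setting an extra CPU flag" looks like on the nova
side, the libvirt driver has a cpu_model_extra_flags option (originally
added for PCID); how freely it combines with the different cpu_mode
values depends on the nova release, but roughly:

    [libvirt]
    cpu_mode = custom
    cpu_model = IvyBridge
    # the code path is the same whatever flag is listed here,
    # so pcid can stand in for pdpe1gb in a test job
    cpu_model_extra_flags = pcid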

The host VM would, however, need to be a multi-NUMA-node VM
(specifically at least 2 NUMA nodes), and because of the KVM
requirement for our guest VMs we would also need the host VM itself to
be running on KVM.
In theory you are meant to be able to run KVM inside Xen, but that has
never worked for me, and getting nested virt to work across different
hypervisors is likely more hassle than it is worth.

For simplicity, the Intel NFV CI was configured as follows (a flavor
sketch is shown after this list):
1. physical compute nodes had either 40 cores (2 HT * 2 sockets * 10
cores, Ivy Bridge) or 72 cores (2 HT * 2 sockets * 18 cores) and either
64 or 96 GB of RAM (CPUs are cheap at Intel but RAM is expensive)
2. each host VM had 2 NUMA nodes (hw:numa_nodes=2)
3. each host VM had 2 sockets, 2 hyper-threads per core and 16-20
vCPUs (hw:cpu_sockets=2, hw:cpu_threads=2)
4. each host VM was not pinned on the host (hw:cpu_policy=shared); this
was to allow oversubscription of CPUs
5. the hosts were configured with cpu_mode=host-passthrough to allow
compiling DPDK with optimizations enabled and to prevent
   any issues with nested virt.
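
Expressed as a flavor for the host VMs, items 2-4 look roughly like the
following (the flavor name and the RAM/disk sizes are illustrative; item
5 is the nova.conf setting shown earlier, not a flavor property):

    openstack flavor create nfv-ci-host --vcpus 20 --ram 16384 --disk 80
    openstack flavor set nfv-ci-host \
        --property hw:numa_nodes=2 \
        --property hw:cpu_sockets=2 \
        --property hw:cpu_threads=2 \
        --property hw:cpu_policy=shared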


The last two iterations of the CI (the one that has been running for the
last 2 years) ran against a kolla-ansible-deployed Mitaka/Newton
OpenStack using kernel OVS on Ubuntu 16.04,
so the host cloud does not need to be particularly new to support all
the features. We learned over time that, while not strictly required,
backing the host VMs with hugepages prevents memory fragmentation under
the workload of running a CI, and since we disallowed oversubscription
of memory in the cloud it gave us a performance boost without reducing
our capacity.
As a result we started to move the host VMs to be hugepage backed, but
that was just an optimization, not a requirement.
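
Hugepage-backing the host VMs is likewise just a standard flavor
property, e.g. on the illustrative flavor above:

    openstack flavor set nfv-ci-host --property hw:mem_page_size=large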

The host VMs do not need to have 16-20 cores, by the way; that just
allowed us to run more tempest tests in parallel. We initially had
everything running in 8-core VMs, but we had to run tempest
without parallelism and had to make some tweaks to the ovs-dpdk
configuration so it used only 1 core instead of the 4 it would have
used in the host VM by default.
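
That tweak is essentially pinning the ovs-dpdk poll-mode-driver threads
down to a single core; with a userspace-datapath OVS that is just an
other_config knob (the mask value here is illustrative, and older
OVS/DPDK releases spell these options differently):

    # restrict ovs-dpdk to one PMD core (core 1 in this mask)
    sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x2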

>
> >
> > > In general though we operate on top of donated cloud resources, and if those do not work we will have to identify a source of resources that would work.
> >
> > Right, as always it comes down to resources and money. I believe
> > historically Red Hat has been opposed to running an upstream third
> > party CI (this is by no means an official Red Hat position, just
> > remembering what I think I heard), but I can always see what I can do.

The Intel NFV CI literally came about because Intel and Red Hat were
collaborating on these features and there was a
direct request from the nova core team at the Paris summit to have CI
before they could be accepted.

My team at Intel was hiring at the time and we had just ordered in
some new servers for that, so Intel volunteered to run
an NFV CI. I grabbed one of our 3-node dev clusters and gave it to
Waldek, along with the 2 nodes that ran our nightly builds, and that
became the first Intel NFV CI 4 years ago, running
out of one of our dev labs with OVH web hosting for the logs.

The last iteration of the NFV CI that I was directly responsible for
ran on 10 servers.
It tested every patch to
nova, os-vif, neutron, devstack, networking-ovs-dpdk, the
devstack-libvirt-qemu plugin repo and our collectd OpenStack plugins,
with a target of reporting back in less than 2 hours. It was also meant
to run on all changes to networking-odl and networking-ovn, but
that never got implemented before we transitioned it out of my old team.

If we wanted to cover nova, os-vif, devstack and networking-ovs-dpdk,
that could likely be handled by just 3-4 64GB, 40-core servers, though
the latency would likely be higher. Any server hardware that is Nehalem
or newer has all of the features needed to test this, so basically
any dual-socket x86 server produced since late 2008 would do.

But if we could use VMs from the cloud providers that support the main
CI, that would be an even better solution long term.
Maintaining an OpenStack cloud under CI load is non-trivial. We had
always wanted to upstream the CI when possible, but
nested virt and multi-NUMA guests were always the blockers.

regards
sean

> >
> > [1]
> > https://docs.openstack.org/nova/latest/admin/huge-pages.html#enabling-huge-pages-on-the-host
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


