[openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.
Daniel P. Berrange
berrange at redhat.com
Thu Nov 13 17:32:42 UTC 2014
On Thu, Nov 13, 2014 at 09:28:01AM -0800, Dan Smith wrote:
> > Yep, it is possible to run the tests inside VMs - the key is that when
> > you create the VMs you need to be able to give them NUMA topology. This
> > is possible if you're creating your VMs using virt-install, but not if
> > you're creating your VMs in a cloud.
> I think we should explore this a bit more. AFAIK, we can simulate a NUMA
> system with CONFIG_NUMA_EMU=y and providing numa=fake=XXX to the guest
> kernel. From a quick check with some RAX folks, we should have enough
> control to arrange this. Since we can put a custom kernel (and
> parameters) into our GRUB configuration that pygrub should honor, I
> would think we could get a fake-NUMA guest running in at least one
> public cloud. Since HP's cloud runs KVM, I would assume we have control
> over our kernel and boot there as well.
> Is there something I'm missing about why that's not doable?
That sounds like something worth exploring at least; I didn't know about
that kernel build option until now :-) It sounds like it ought to be enough
to let us test the NUMA topology handling, CPU pinning, and probably huge
pages too. The main gap I'd see is NUMA-aware PCI device assignment,
since the PCI <-> NUMA node mapping data comes from the BIOS, and it does
not look like this is fakeable as-is.
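For reference, a sketch of what the fake-NUMA setup being discussed might look
like on a guest (the node count and GRUB paths are illustrative assumptions,
not a tested recipe):

```
# Kernel must be built with NUMA emulation support:
#   CONFIG_NUMA_EMU=y
#
# Then split the guest's memory into (e.g.) 4 fake NUMA nodes by adding
# the parameter to the kernel command line in the GRUB config that
# pygrub (or the cloud's bootloader) honors:
#   linux /boot/vmlinuz ... numa=fake=4
#
# After reboot, verify the emulated topology from inside the guest:
#   numactl --hardware
#   ls /sys/devices/system/node/
```

If `numactl --hardware` reports multiple nodes, the scheduler and libvirt will
see a NUMA machine, which should be enough for the topology and pinning tests.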
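To illustrate the PCI gap mentioned above: the BIOS-provided PCI <-> NUMA node
mapping is what the kernel exposes in sysfs, and on an emulated-NUMA guest it
typically reports -1 (unknown). A minimal sketch of reading that mapping, just
to show where the data would come from (the helper name is made up for
illustration):

```python
# Sketch: read the kernel's PCI <-> NUMA node mapping from sysfs.
# On a numa=fake guest the BIOS provides no real locality data, so
# numa_node is usually -1, which is why NUMA-aware PCI device
# assignment can't be exercised this way.
import glob
import os


def pci_numa_map(sysfs_root="/sys/bus/pci/devices"):
    """Return {pci_address: numa_node} for every PCI device found."""
    mapping = {}
    for dev in glob.glob(os.path.join(sysfs_root, "*")):
        node_file = os.path.join(dev, "numa_node")
        try:
            with open(node_file) as f:
                mapping[os.path.basename(dev)] = int(f.read().strip())
        except (IOError, OSError, ValueError):
            mapping[os.path.basename(dev)] = -1  # mapping not exposed
    return mapping


if __name__ == "__main__":
    for addr, node in sorted(pci_numa_map().items()):
        print("%s -> node %d" % (addr, node))
```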
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|