We have had similar debates about the CPU host-passthrough and disk cache options in Nova, which are set in nova.conf. Our view is that the application launcher should determine these, possibly subject to an administrator override.
Options like guest agent seem reasonable to be on the image (since they need code).
Tim
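(For concreteness, the options Tim refers to live in the [libvirt] section of nova.conf, while the guest agent is switched on via a Glance image property. A minimal sketch; the values and image name are illustrative:)

    # nova.conf on the compute node
    [libvirt]
    cpu_mode = host-passthrough
    disk_cachemodes = "file=writeback"

    # Flag the image as having the QEMU guest agent installed
    glance image-update --property hw_qemu_guest_agent=yes my-guest-image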
On 4/30/15, 3:21 AM, "Blair Bethwaite" blair.bethwaite@gmail.com wrote:
Thanks Daniel,
I'll fix that; I need to try my hand at doc patches.
Whilst I've got your attention, any comment on my aside about it seeming odd, from a userland perspective, that these (and similar tunables) are controlled through flavor/image extra specs? "Does anyone else find it strange that all these options are designed to be flavor and/or image extra specs rather than providing a mechanism to set them on instance boot (e.g. hints)? I think the relationship between flavors or images and such tunables is tenuous at best. Why should I need multiple versions of what is otherwise the same image or flavor in order to ask for e.g. CPU pinning or the Qemu Guest Agent? These are per-instance tunables/variables."
Cheers,
On 30 April 2015 at 01:31, Daniel P. Berrange berrange@redhat.com wrote:
On Wed, Apr 29, 2015 at 11:12:54AM -0400, Steve Gordon wrote:
Adding Dan and Nikola, as I doubt they are on this list. Guys, this is in reference to this devref example, which looks a little off:
http://docs.openstack.org/developer/nova/devref/testing/libvirt-numa.html#testing-instance-boot-with-1-numa-cell-requested
Yes, sorry my bad. That XML example is wrong - it is the XML you'd see from a hw:numa_nodes=2 config, not hw:numa_nodes=1
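(For comparison, a hw:numa_nodes=1 guest with the same four vCPUs and 1 GiB of RAM should produce a single cell, roughly:)

    <numa>
      <cell id='0' cpus='0-3' memory='1048576'/>
    </numa>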
----- Original Message -----
From: "Blair Bethwaite" blair.bethwaite@gmail.com To: "Pradeep Kiruvale" pradeepkiruvale@gmail.com Cc: openstack-hpc@lists.openstack.org Sent: Wednesday, April 29, 2015 10:33:31 AM Subject: Re: [openstack-hpc] NUMA config
On 29 April 2015 at 18:50, Pradeep Kiruvale wrote:
I think it creates 2 virtual NUMA nodes on a single physical NUMA node. If you see below, CPUs 0-1 are in one NUMA node and 2-3 are in the other. This is what I can understand from the XML.
<numa>
  <cell id='0' cpus='0-1' memory='524288'/>
  <cell id='1' cpus='2-3' memory='524288'/>
</numa>
Yeah, but that doesn't seem to be consistent with this from the same section:
"nova flavor-key m1.numa set hw:numa_nodes=1" According to the spec
(http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/ virt-driver-numa-placement.html):
hw:numa_nodes=NN - numa of NUMA nodes to expose to the guest. Not to mention the numatune config that binds the guest nodes to
host nodes.
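(The numatune element mentioned here pins each guest cell's memory to a host cell; in the generated guest XML it looks roughly like this, with the host node numbers illustrative:)

    <numatune>
      <memnode cellid='0' mode='strict' nodeset='0'/>
      <memnode cellid='1' mode='strict' nodeset='1'/>
    </numatune>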
An aside: Does anyone else find it strange that all these options are designed to be flavor and/or image extra specs rather than providing a mechanism to set them on instance boot (e.g. hints)? I think the relationship between flavors or images and such tunables is tenuous at best. Why should I need multiple versions of what is otherwise the same image or flavor in order to ask for e.g. CPU pinning or the Qemu Guest Agent? These are per-instance tunables.
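(What is being asked for would look something like the sketch below. Note the --hint keys are hypothetical; nova boot does accept --hint for scheduler hints, but not these tunables:)

    # Hypothetical: per-instance tunables passed at boot time,
    # rather than baked into a flavor or image
    nova boot --flavor m1.large --image my-guest-image \
      --hint hw:numa_nodes=1 --hint hw_qemu_guest_agent=yes \
      my-instance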
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
--
Cheers,
~Blairo