[openstack-dev] [Nova][SR-IOV][pci-passthrough] Reporting pci devices in hypervisor-show

Daniel P. Berrange berrange at redhat.com
Mon Jul 11 09:00:38 UTC 2016


On Fri, Jul 08, 2016 at 03:45:10PM -0400, Jay Pipes wrote:
> On 07/08/2016 12:10 PM, Beliveau, Ludovic wrote:
> > I see a lot of value in having something like this for inventory
> > purposes and troubleshooting.
> > 
> > IMHO the information should be provided in two ways.
> > 
> > 1. Show PCI pool status per compute.  Currently the pools only have
> > information about how many devices are allocated in a pool ("count").
> > We should also derive from the pci_devices db table the number of PCI
> > devices that are available per pool (not just the number allocated).
> > This information could be included in hypervisor-show (or in a new REST
> > API if this is found to be too noisy).
> > 
> > 2. More detailed information about each individual PCI device (like you
> > are suggesting: parent device relationships, etc.).  This could be in a
> > separate REST API call.
> > 
> > We could even think about a third option, where we would show global
> > PCI pool information for a whole region.
> > 
> > For discussion purposes, here's what pci_stats for a compute looks like
> > today:
> > {"count": 1, "numa_node": 0, "vendor_id": "8086", "product_id": "10fb",
> > "tags": {"dev_type": "type-PF", "physical_network": "default"}},
> > "nova_object.namespace": "nova"}
> > {"count": 3, "numa_node": 0, "vendor_id": "8086", "product_id": "10ed",
> > "tags": {"dev_type": "type-VF", "physical_network": "default"}},
> > "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"}
> > 
> > Is there an intention to write a blueprint for this feature?  If there
> > is interest, I don't mind working on it.
> 
> Personally, I hate the PCI device pool code and the whole concept of storing
> this aggregate "pool" information in the database (where it can easily
> become out of sync with the underlying PCI device records).
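To make that sync point concrete: the pool rows are just aggregates of the
per-device records, so anything we might expose could equally be recomputed
on demand from pci_devices. A rough sketch of that recomputation, assuming
simplified dict-shaped device records (the field and status names here are
illustrative, not Nova's actual schema):

    from collections import defaultdict

    def summarize_pools(pci_devices):
        """Recompute per-pool counts from per-device records.

        ``pci_devices`` is assumed to be an iterable of dicts carrying
        'vendor_id', 'product_id', 'numa_node', 'dev_type',
        'physical_network' and 'status' keys -- a simplification of
        the real pci_devices rows.
        """
        pools = defaultdict(lambda: {"allocated": 0, "available": 0})
        for dev in pci_devices:
            key = (dev["vendor_id"], dev["product_id"], dev["numa_node"],
                   dev["dev_type"], dev["physical_network"])
            if dev["status"] == "allocated":
                pools[key]["allocated"] += 1
            elif dev["status"] == "available":
                pools[key]["available"] += 1
        return dict(pools)

Since the aggregate can always be derived this way, persisting it separately
is exactly what lets it drift from the device records.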

Yep, I really think we should avoid exposing this concept in our API,
at all costs. Aside from the issue you mention, there's a second
issue: our PCI device code is almost certainly going to have to be
generalized into host device code, since in order to support TPMs,
vGPUs and FPGAs we're going to need to start tracking many host
devices which are not PCI. We should bear this in mind when
considering any public API exposure of PCI devices, as we don't want
to add an API that is immediately broken by the need to add non-PCI
devices.
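
Purely as an illustration of what such a generalization might look like
(nothing here exists in Nova today, and the names are made up), a host
device record would need to treat a PCI address as just one possible kind
of address rather than the only one:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HostDevice:
        """Hypothetical generalized host device record (sketch only).

        A PCI address becomes one possible address format, so the same
        structure could also describe vGPUs, FPGAs or TPMs that are not
        addressed on the PCI bus.
        """
        dev_type: str                  # e.g. "type-PF", "type-VF", "vgpu", "fpga", "tpm"
        address: str                   # e.g. "0000:81:00.2" or an mdev UUID
        address_type: str = "pci"      # "pci", "mdev", "platform", ...
        vendor_id: Optional[str] = None
        product_id: Optional[str] = None
        numa_node: Optional[int] = None

Any public API we add now should at least leave room for that shape.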


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
