[openstack-dev] [nova] [neutron] PCI pass-through network support
Jay Pipes
jaypipes at gmail.com
Mon Dec 23 22:36:36 UTC 2013
On 12/17/2013 10:09 AM, Ian Wells wrote:
> Reiterating from the IRC meeting, largely, so apologies.
>
> Firstly, I disagree that
> https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support is an
> accurate reflection of the current state. It's a very unilateral view,
> largely because the rest of us had been focussing on the Google document
> that we've been using for weeks.
>
> Secondly, I totally disagree with this approach. This assumes that
> description of the (cloud-internal, hardware) details of each compute
> node is best done with data stored centrally and driven by an API. I
> don't agree with either of these points.
>
> Firstly, the best place to describe what's available on a compute node
> is in the configuration on the compute node. For instance, I describe
> which interfaces do what in Neutron on the compute node. This is
> because provisioning time is the moment you know how you've attached the
> node to the network, what hardware you've put in it, and what you intend
> that hardware to be used for - or, conversely, your deployment Puppet or
> Chef knows it and Razor or MAAS has enumerated it, but the activities are
> equivalent. Storing it centrally distances
> the compute node from its descriptive information for no good purpose
> that I can see and adds the complexity of having to go make remote
> requests just to start up.
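>
> (To make that concrete, a sketch of what the compute-node description
> might look like with something like the whitelist option in the current
> Nova code - the device IDs below are purely illustrative:
>
>   [DEFAULT]
>   # expose this host's SR-IOV VFs for passthrough
>   pci_passthrough_whitelist = [{"vendor_id": "8086", "product_id": "10ed"}]
>
> i.e. exactly the sort of set-at-provisioning-time, per-host data that
> Puppet or Chef would drop into place.)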
>
> Secondly, even if you did store this centrally, it's not clear to me
> that an API is very useful. As far as I can see, the need for an API is
> really the need to manage PCI device flavors. If you want that to be
> API-managed, then the rest of a (rather complex) API cascades from that
> one choice. Most of the things that API lets you change (expressions
> describing PCI devices) are the sort of thing that you set once and only
> revisit when, for instance, you start deploying new hosts in a different
> way.
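>
> (The config-driven alternative is small by comparison - roughly, an
> alias defined once in nova.conf on the controller, along the lines of
>
>   pci_alias = {"vendor_id": "8086", "product_id": "10ed", "name": "fastnic"}
>
> plus a flavor extra spec such as "pci_passthrough:alias"="fastnic:1" to
> request it. The values here are a sketch, not a recommendation.)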
>
> I'd point to the parallel in Neutron provider networks. They're config driven,
> largely on the compute hosts. Agents know what ports on their machine
> (the hardware tie) are associated with provider networks, by provider
> network name. The controller takes 'neutron net-create ...
> --provider:physical_network 'name'' and uses that to tie a virtual network to the
> provider network definition on each host. What we absolutely don't do
> is have a complex admin API that lets us say 'in host aggregate 4,
> provider network x (which I made earlier) is connected to eth6'.
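>
> (Sketch, with made-up names: each host's OVS agent config carries the
> hardware tie, e.g.
>
>   [ovs]
>   bridge_mappings = physnet-x:br-eth6
>
> and the controller side is just
>
>   neutron net-create tenant-net --provider:network_type vlan \
>       --provider:physical_network physnet-x --provider:segmentation_id 100
>
> with no admin API for the per-host wiring involved.)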
FWIW, I could not agree more. The Neutron API already suffers from
overcomplexity. There's really no need to make it even more complex than
it already is, especially for a feature that more naturally fits in
configuration data (Puppet/Chef/etc) and isn't something that you would
really ever change for a compute host once set.
Best,
-jay