[openstack-dev] [nova] [neutron] Todays' meeting log: PCI pass-through network support
yongli he
yongli.he at intel.com
Tue Dec 24 01:53:57 UTC 2013
On 2013-12-24 07:35, Ian Wells wrote:
> On autodiscovery and configuration, we agree that each compute node
> finds out what it has based on some sort of list of match expressions;
> we just disagree on where they should live.
I think what we are discussing here is group/class auto-discovery.
>
> I know we've talked APIs for setting that matching expression, but I
> would prefer that compute nodes are responsible for their own physical
> configuration - generally this seems wiser on the grounds that
> configuring new hardware correctly is a devops problem and this pushes
> the problem into the installer, clear devops territory. It also makes
> the (I think likely) assumption that the config may differ per compute
> node without having to add more complexity to the API with host
> aggregates and so on. And it means that a compute node can start
> working without consulting the central database or reporting its
> entire device list back to the central controller.
Let's wait for comments from the Nova cores on this.
>
> On PCI groups, I think it is a good idea to have them declared
> centrally (their name, not their content). Now, I would use config to
> define them and maybe an API for the tenant to list their names,
> personally; that's simpler and easier to implement and doesn't
> preclude adding an (admin) API in the future. But I don't imagine the
> list of groups will change frequently so any update API would be very
> infrequently used, and if someone really feels they want to implement
> it I'm not going to stop them.
If you set up only a name for the group, what about the current PCI
alias? We don't need to create new terminology for this, and the alias
could be used to specify groups. But we wanted to kill the alias, so it
seems our discussion has come back around to it.
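
For context, the existing alias mechanism lives in nova.conf. A rough
sketch of how it is configured today (the vendor/product IDs and the
alias name here are illustrative values, not from this thread):

```ini
# nova.conf on the compute node (illustrative values)
[DEFAULT]
# Which local PCI devices nova-compute may hand out.
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "1520"}
# A named alias that flavors can request; this is the name-only
# handle the group discussion above is comparing against.
pci_alias = {"name": "niantic", "vendor_id": "8086", "product_id": "1520"}
```

If the group really is just a declared name, it is worth asking whether
it differs from this alias in anything but scope.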
>
> On nova boot, I completely agree that we need a new argument to --nic
> to specify the PCI group of the NIC. The rest of the arguments - I'm
> wondering if we could perhaps do this in two stages:
Agreed.
> 1. Neutron will read those arguments (attachment type, additional
> stuff like port group where relevant) from the port during an attach
> and pass relevant information to the plugging driver in Nova
> 2. We add a feature to nova so that you can specify other properties
> in the --nic section line and they're passed straight to the
> port-create called from within nova.
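
To make the two-stage proposal concrete, a hypothetical invocation might
look like the following. None of these extra --nic keys exist yet; the
names pci-group and attach-type are placeholders for whatever Neutron
ends up accepting, and Nova would pass them through to port-create
unmodified:

```shell
# Hypothetical syntax under discussion -- pci-group and attach-type are
# illustrative names only; Nova forwards unknown keys to Neutron's
# port-create, and Neutron accepts or rejects them.
nova boot --flavor m1.small --image my-image \
    --nic net-id=NET_UUID,pci-group=fast-nics,attach-type=macvtap \
    my-instance
```

The appeal of this split is that Nova stays a dumb transport: a PCI-aware
Neutron plugin interprets the keys, and anything it doesn't understand is
rejected at port-create time rather than encoded in Nova.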
>
> This is not specific to passthrough at all, just a useful general
> purpose feature. However, it would simplify both the problem and
> design here, because these parameters, whatever they are, are now
> entirely the responsibility of Neutron and Nova's simply transporting
> them into it. A PCI aware Neutron will presumably understand the
> attachment type, the port group and so on, or will reject them if
> they're meaningless to it, and we've even got room for future
> expansion without changing Nova or Neutron, just the plugin. We can
> propose it now and independently, put in a patch and have it ready
> before we need it. I think anything that helps to clarify and divide
> the responsibilities of Nova and Neutron will be helpful, because then
> we don't end up with too many cross-project-interrelated patches.
>
> I'm going to ignore the allocation problem for now. If a single user
> can allocate all the NICs in the cluster to himself, we still have a
> more useful solution than the one now where he can't use them, so it's
> not the top of our list.
>
>
> Time seems to be running out for Icehouse. We need to come to
> agreement ASAP. I will be out from Wednesday until after the new year.
> I'm thinking that to move this forward after the new year, we may
> need to hold the IRC meeting on a daily basis until we
> reach agreement. Perhaps this should be one of our new year's resolutions?
>
>
> Whatever gets it done.
> --
> Ian.
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev