<div dir="ltr"><div>Irena, have a word with Bob (rkukura on IRC, East coast), he was talking about what would be needed already and should be able to help you. Conveniently he's also core. ;)<br>-- <br></div>Ian.<br></div>
<div class="gmail_extra"><br><br><div class="gmail_quote">On 12 January 2014 22:12, Irena Berezovsky <span dir="ltr"><<a href="mailto:irenab@mellanox.com" target="_blank">irenab@mellanox.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi John,<br>
Thank you for taking the initiative and summing up the work that needs to be done to provide PCI pass-through network support.<br>
The only item I think is missing is the neutron support for PCI pass-through. Currently we have the Mellanox plugin that supports PCI pass-through, assuming the Mellanox adapter card's embedded switch technology. But in order to have fully integrated PCI pass-through networking support for the use cases Robert listed in his previous mail, generic neutron PCI pass-through support is required. This can be enhanced with vendor-specific tasks that may differ (Mellanox embedded switch vs. Cisco 802.1BR), but there is still a common part: a PCI-aware mechanism driver.<br>
I have already started on the definition of this part:<br>
<a href="https://docs.google.com/document/d/1RfxfXBNB0mD_kH9SamwqPI8ZM-jg797ky_Fze7SakRc/edit#" target="_blank">https://docs.google.com/document/d/1RfxfXBNB0mD_kH9SamwqPI8ZM-jg797ky_Fze7SakRc/edit#</a><br>
I also plan to start coding soon.<br>
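To give a rough idea of the shape of the common, vendor-agnostic part, below is a minimal sketch assuming the ML2 MechanismDriver interface; all other names are placeholders, and this is not the actual plugin code.<br>
<br>
# Sketch only: a generic PCI-aware ML2 mechanism driver skeleton.<br>
# Vendor-specific drivers (Mellanox embedded switch, Cisco 802.1BR, ...)<br>
# would subclass it and override the fabric-configuration hook.<br>
from neutron.plugins.ml2 import driver_api as api<br>
<br>
class PciPassthroughMechanismDriverSketch(api.MechanismDriver):<br>
    def initialize(self):<br>
        pass  # load any PCI/SR-IOV related configuration here<br>
<br>
    def bind_port(self, context):<br>
        # Common part: decide whether this port is a PCI pass-through port<br>
        # (e.g. based on a binding profile or a nic-type attribute) and let<br>
        # the vendor-specific part program the fabric accordingly.<br>
        if self._is_pci_port(context.current):<br>
            self._configure_fabric(context)<br>
<br>
    def _is_pci_port(self, port):<br>
        # Placeholder check; the real attribute is still to be defined.<br>
        return port.get('binding:profile', {}).get('pci_passthrough', False)<br>
<br>
    def _configure_fabric(self, context):<br>
        # Vendor-specific hook (embedded switch, 802.1BR, ...).<br>
        raise NotImplementedError<br>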
<br>
Depending on how it goes, I can also take the nova parts that integrate with the neutron APIs from item 3.<br>
<br>
Regards,<br>
Irena<br>
<div class="im"><br>
-----Original Message-----<br>
From: John Garbutt [mailto:<a href="mailto:john@johngarbutt.com">john@johngarbutt.com</a>]<br>
Sent: Friday, January 10, 2014 4:34 PM<br>
To: OpenStack Development Mailing List (not for usage questions)<br>
</div><div class="im">Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support<br>
<br>
</div><div><div class="h5">Apologies for this top post, I just want to move this discussion towards action.<br>
<br>
I am traveling next week so it is unlikely that I can make the meetings. Sorry.<br>
<br>
Can we please agree on some concrete actions, and who will do the coding?<br>
This also means raising new blueprints for each item of work.<br>
I am happy to review and eventually approve those blueprints, if you email me directly.<br>
<br>
Ideas are taken from what we started to agree on, mostly written up here:<br>
<a href="https://wiki.openstack.org/wiki/Meetings/Passthrough#Definitions" target="_blank">https://wiki.openstack.org/wiki/Meetings/Passthrough#Definitions</a><br>
<br>
<br>
What doesn't need doing...<br>
====================<br>
<br>
We have the PCI whitelist and PCI alias at the moment; let's keep those names the same for now.<br>
I personally prefer PCI-flavor rather than PCI-alias, but let's discuss any rename separately.<br>
<br>
We seemed happy with the current system (roughly) around GPU passthrough:<br>
nova flavor-key <three_GPU_attached_30GB> set "pci_passthrough:alias"="large_GPU:1,small_GPU:2"<br>
nova boot --image some_image --flavor <three_GPU_attached_30GB> <some_name><br>
<br>
Again, we seemed happy with the current PCI whitelist.<br>
<br>
Sure, we could optimise the scheduling, but again, please keep that a separate discussion.<br>
Something in the scheduler needs to know how many of each PCI alias are available on each host.<br>
How that information gets there can be changed at a later date.<br>
<br>
The PCI alias is in config, but it's probably better defined using host aggregates, or some custom API.<br>
But let's leave that for now, and discuss it separately.<br>
If the need arises, we can migrate away from the config.<br>
<br>
<br>
What does need doing...<br>
==================<br>
<br>
1) API & CLI changes for "nic-type", and associated tempest tests<br>
<br>
* Add a user-visible "nic-type" so users can express one of several network types.<br>
* We need a default nic-type, for when the user doesn't specify one (might default to SRIOV in some cases)<br>
* We can easily test the case where the default is virtual and the user expresses a preference for virtual<br>
* Above is much better than not testing it at all.<br>
<br>
nova boot --flavor m1.large --image <image_id><br>
--nic net-id=<net-id-1><br>
--nic net-id=<net-id-2>,nic-type=fast<br>
--nic net-id=<net-id-3>,nic-type=fast <vm-name><br>
<br>
or<br>
<br>
neutron port-create<br>
--fixed-ip subnet_id=<subnet-id>,ip_address=192.168.57.101<br>
--nic-type=<slow | fast | foobar><br>
<net-id><br>
nova boot --flavor m1.large --image <image_id> --nic port-id=<port-id><br>
<br>
Here nic-type is just an extra bit of metadata, a string that is passed through to nova and the VIF driver.<br>
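<br>
As an illustration only (not an agreed format), the extra metadata might surface in the VIF dict that nova hands to the VIF driver roughly like the sketch below; 'nic_type' is the hypothetical new field, and the rest mirrors the existing network_info structure only loosely.<br>
<br>
# Sketch: what one VIF entry in network_info might carry once nic-type exists.<br>
example_vif = {<br>
    'id': 'port-id-1',<br>
    'network': {'id': 'net-id-2'},<br>
    'nic_type': 'fast',          # the new, user-requested nic-type<br>
    'details': {'vlan': 100},    # as today, filled in by neutron<br>
}<br>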
<br>
<br>
2) Expand PCI alias information<br>
<br>
We need extensions to PCI alias so we can group SRIOV devices better.<br>
<br>
I think we have yet to agree on a format, but I would suggest this as a starting point:<br>
<br>
{<br>
    "name": "GPU_fast",<br>
    "devices": [<br>
        {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},<br>
        {"vendor_id": "1137", "product_id": "0072", "address": "*", "attach-type": "direct"}<br>
    ],<br>
    "sriov_info": {}<br>
}<br>
<br>
{<br>
    "name": "NIC_fast",<br>
    "devices": [<br>
        {"vendor_id": "1137", "product_id": "0071", "address": "0:[1-50]:2:*", "attach-type": "macvtap"},<br>
        {"vendor_id": "1234", "product_id": "0081", "address": "*", "attach-type": "direct"}<br>
    ],<br>
    "sriov_info": {<br>
        "nic_type": "fast",<br>
        "network_ids": ["net-id-1", "net-id-2"]<br>
    }<br>
}<br>
<br>
{<br>
    "name": "NIC_slower",<br>
    "devices": [<br>
        {"vendor_id": "1137", "product_id": "0071", "address": "*", "attach-type": "direct"},<br>
        {"vendor_id": "1234", "product_id": "0081", "address": "*", "attach-type": "direct"}<br>
    ],<br>
    "sriov_info": {<br>
        "nic_type": "fast",<br>
        "network_ids": ["*"]   # "*" means it could attach to any network<br>
    }<br>
}<br>
<br>
The idea is that the VIF driver gets passed this info when network_info includes a NIC that matches.<br>
Any other details, like the VLAN id, would come from neutron and be passed to the VIF driver as normal.<br>
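<br>
Purely as a sketch of that matching step (not the actual nova code), device-to-alias matching could look roughly like this, reusing the field names from the example entries above:<br>
<br>
def device_matches_alias(device, alias):<br>
    # 'device' describes one PCI device on the host, e.g.<br>
    #   {"vendor_id": "1137", "product_id": "0071", "address": "0000:03:10.1"}<br>
    # 'alias' is one of the expanded alias entries shown above.<br>
    # Address ranges like "0:[1-50]:2:*" would need real parsing; the sketch<br>
    # only treats "*" as "match anything".<br>
    for wanted in alias["devices"]:<br>
        if (device["vendor_id"] == wanted["vendor_id"]<br>
                and device["product_id"] == wanted["product_id"]<br>
                and wanted["address"] in ("*", device["address"])):<br>
            return wanted["attach-type"]   # tells the VIF driver how to attach<br>
    return None<br>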
<br>
<br>
3) Reading "nic_type" and doing the PCI passthrough of NIC user requests<br>
<br>
Not sure we have all agreed on this yet, but basically:<br>
* network_info contains "nic-type" from neutron<br>
* need to select the correct VIF driver<br>
* need to pass matching PCI alias information to VIF driver<br>
* neutron passes other details (like the VLAN id) as before<br>
* nova gives VIF driver an API that allows it to attach PCI devices that are in the whitelist to the VM being configured<br>
* with all this, the VIF driver can do what it needs to do<br>
* let's keep it simple, and expand it as the need arises (a rough sketch of this flow follows below)<br>
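<br>
To make the flow above concrete, here is a minimal sketch; the nova-side hook name is entirely made up for illustration, so nothing here is an existing or agreed API.<br>
<br>
# Sketch only: how a VIF driver might consume the pieces listed above.<br>
# 'claim_whitelisted_pci_device' is a hypothetical nova hook, not a real API.<br>
def plug_sriov_vif(vif, instance, claim_whitelisted_pci_device):<br>
    alias = vif.get('pci_alias_info')          # matching PCI alias info from nova<br>
    vlan = vif.get('details', {}).get('vlan')  # other details, from neutron as before<br>
    # Ask nova for a free whitelisted device that fits the alias, and<br>
    # attach it to the VM being configured.<br>
    device = claim_whitelisted_pci_device(instance, alias)<br>
    configure_device(device, vlan, vif.get('nic_type'))<br>
<br>
def configure_device(device, vlan, nic_type):<br>
    pass  # placeholder: macvtap setup, embedded-switch programming, etc.<br>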
<br>
4) Make changes to VIF drivers, so the above is implemented<br>
<br>
Depends on (3)<br>
<br>
<br>
<br>
These seem like good steps to get the basics in place for PCI passthrough networking.<br>
Once it's working, we can review it and see if there are things that need to evolve further.<br>
<br>
Does that seem like a workable approach?<br>
Who is willing to implement any of (1), (2) and (3)?<br>
<br>
<br>
Cheers,<br>
John<br>
<br>
<br>
On 9 January 2014 17:47, Ian Wells <<a href="mailto:ijw.ubuntu@cack.org.uk">ijw.ubuntu@cack.org.uk</a>> wrote:<br>
> I think I'm in agreement with all of this. Nice summary, Robert.<br>
><br>
> It may not be where the work ends, but if we could get this done the<br>
> rest is just refinement.<br>
><br>
><br>
> On 9 January 2014 17:49, Robert Li (baoli) <<a href="mailto:baoli@cisco.com">baoli@cisco.com</a>> wrote:<br>
>><br>
>> Hi Folks,<br>
>><br>
>><br>
>> With John joining us on IRC, we have so far had a couple of productive<br>
>> meetings in an effort to come to consensus and move forward. Thanks,<br>
>> John, for doing that, and I appreciate everyone's effort to make it to the daily meeting.<br>
>> Let's reconvene on Monday.<br>
>><br>
>> But before that, and based on today's conversation on IRC, I'd<br>
>> like to say a few things. First of all, I think we need to<br>
>> agree on the terminology that we have been using so far. With the<br>
>> current nova PCI passthrough:<br>
>><br>
>> PCI whitelist: defines all the available PCI passthrough<br>
>> devices on a compute node. pci_passthrough_whitelist=[{<br>
>> "vendor_id":"xxxx","product_id":"xxxx"}]<br>
>> PCI Alias: criteria defined on the controller node with which<br>
>> requested PCI passthrough devices can be selected from all the PCI<br>
>> passthrough devices available in a cloud.<br>
>> Currently it has the following format:<br>
>> pci_alias={"vendor_id":"xxxx", "product_id":"xxxx", "name":"str"}<br>
>><br>
>> nova flavor extra_specs: requests for PCI passthrough devices<br>
>> can be specified with extra_specs, in a format such as:<br>
>> "pci_passthrough:alias"="name:count"<br>
>><br>
>> As you can see, currently a PCI alias has a name and is defined on<br>
>> the controller. The implication is that when matching it<br>
>> against the PCI devices, it has to match the vendor_id and product_id<br>
>> against all the available PCI devices until one is found. The name is<br>
>> only used for reference in the extra_specs. On the other hand, the<br>
>> whitelist is basically the same as the alias without a name.<br>
>><br>
>> What we have discussed so far is based on something called PCI groups<br>
>> (or PCI flavors as Yongli puts it). Without introducing other<br>
>> complexities, and with a small change to the above representation,<br>
>> we will have something<br>
>> like:<br>
>><br>
>> pci_passthrough_whitelist=[{ "vendor_id":"xxxx","product_id":"xxxx",<br>
>> "name":"str"}]<br>
>><br>
>> By doing so, we eliminate the PCI alias. And we call the "name"<br>
>> above a PCI group name. You can think of it as combining the<br>
>> definitions of the existing whitelist and PCI alias. And believe it<br>
>> or not, a PCI group is actually a PCI alias. However, with that<br>
>> change of thinking, a lot of benefits can be harvested:<br>
>><br>
>> * the implementation is significantly simplified<br>
>> * provisioning is simplified by eliminating the PCI alias<br>
>> * a compute node only needs to report stats with something like<br>
>> PCI group name:count (see the sketch after this list). A compute node<br>
>> processes all the PCI passthrough devices against the whitelist and<br>
>> assigns a PCI group based on the whitelist definition.<br>
>> * on the controller, we may only need to define the PCI<br>
>> group names. If we use a nova API to define PCI groups (which could be<br>
>> private or public, for example), one potential benefit, among other<br>
>> things (validation, etc.), is that they can be owned by the tenant that<br>
>> creates them. And thus wholesaling of PCI passthrough devices is also possible.<br>
>> * scheduler only works with PCI group names.<br>
>> * requests for PCI passthrough devices are based on PCI groups<br>
>> * deployers can provision the cloud based on the PCI groups<br>
>> * Particularly for SRIOV, deployers can design SRIOV PCI<br>
>> groups based on network connectivity.<br>
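>><br>
>> To illustrate the stats idea mentioned above, and purely as a sketch<br>
>> with made-up group names, a compute node's report might boil down to<br>
>> something like:<br>
>><br>
>> # Sketch only: per-host PCI stats, keyed by PCI group name.<br>
>> pci_group_stats = {<br>
>>     "NIC_fast": 4,    # free devices in this group on this host<br>
>>     "NIC_slow": 12,<br>
>>     "GPU_fast": 2,<br>
>> }<br>
>> # The scheduler would then only need to compare requested counts per<br>
>> # PCI group name against these numbers.<br>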
>><br>
>> Further, to support SRIOV, we are saying that PCI group names can<br>
</div></div>>> be used not only in the extra specs but also in the --nic<br>
<div class="HOEnZb"><div class="h5">>> option and the neutron commands. This allows the most flexibility<br>
>> and functionality afforded by SRIOV.<br>
>><br>
>> Further, we are saying that we can define default PCI groups based on<br>
>> the PCI device's class.<br>
>><br>
>> For vnic-type (or nic-type), we are saying that it defines the link<br>
>> characteristics of the nic that is attached to a VM: a nic that's<br>
>> connected to a virtual switch, a nic that is connected to a physical<br>
>> switch, or a nic that is connected to a physical switch, but has a<br>
>> host macvtap device in between. The actual names of the choices are<br>
>> not important here, and can be debated.<br>
>><br>
>> I'm hoping that we can go over the above on Monday. But any comments<br>
>> are welcome by email.<br>
>><br>
>> Thanks,<br>
>> Robert<br>
>><br>
>><br>
><br>
><br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div>