<div dir="ltr">On 19 December 2013 15:15, John Garbutt <span dir="ltr"><<a href="mailto:john@johngarbutt.com" target="_blank">john@johngarbutt.com</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
> > Note, I don't see the person who boots the server ever seeing the pci-flavor, only understanding the server flavor.
> > [IrenaB] I am not sure that elaborating the PCI device request into the server flavor is the right approach for the PCI passthrough network case. A vNIC is by its nature something dynamic that can be plugged or unplugged after VM boot; the server flavor is quite static.
>
> I was really just meaning that the server flavor specifies the type of NIC to attach.
>
> The existing port specs, etc., define how many NICs, and you can hot-plug as normal; the VIF plugger code is just told by the server flavor whether it can do PCI passthrough, and which devices it can pick from. The idea is that, combined with the Neutron network-id, you know what to plug.
>
> The more I talk about this approach the more I hate it :(
The thinking we had here is that Nova would provide a VIF or a physical NIC for each attachment. Precisely what goes on here is a bit up for grabs, but I would think:

- Nova specifies the type at port-update, making it obvious to Neutron whether it's getting a virtual interface or a passthrough NIC (and probably the type of that NIC, and likely also the path, so that Neutron can distinguish between NICs if it needs to know the specific attachment port).
- Neutron does its magic on the network if it has any to do, like faffing(*) with switches.
- Neutron selects the VIF/NIC plugging type that Nova should use and, in the case that the NIC is a VF and it wants to set an encap, returns that encap back to Nova.
- Nova plugs it in and sets it up (in libvirt, this is generally in the XML; XenAPI and others are up for grabs).
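To make the split of responsibilities concrete, here's a rough sketch of what that exchange could look like over Neutron's port API. This is an assumption about the eventual shape, not a settled interface: the binding:* attribute names follow the existing port-binding extension, but the endpoint, identifiers, and the keys inside the profile are invented for illustration.

    import requests

    # Hypothetical endpoint and identifiers, purely for illustration.
    NEUTRON = "http://neutron.example.invalid:9696/v2.0"
    HEADERS = {"X-Auth-Token": "TOKEN"}
    PORT_ID = "4a3b2c1d-0000-0000-0000-000000000000"

    # Step 1: Nova tells Neutron what it is attaching - the host and, for
    # passthrough, which PCI device it picked - so Neutron can do its
    # switch-side magic for that specific attachment port.
    port_update = {
        "port": {
            "binding:host_id": "compute-12",
            "binding:profile": {
                "pci_vendor_info": "8086:10ed",  # vendor:product of the VF
                "pci_slot": "0000:07:10.1",      # the path Nova chose
            },
        }
    }
    resp = requests.put("%s/ports/%s" % (NEUTRON, PORT_ID),
                        json=port_update, headers=HEADERS).json()

    # Step 2: Neutron answers with how Nova should plug the device, plus
    # any encap to apply (e.g. a VLAN tag to program onto the VF).
    vif_type = resp["port"]["binding:vif_type"]  # e.g. "hw_veb"
    encap = resp["port"].get("binding:vif_details", {}).get("vlan")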
<div class="im">
> We might also want a "nic-flavor" that tells neutron information it requires, but lets get to that later...<br>
> [IrenaB] nic flavor is definitely something that we need in order to choose if high performance (PCI pass-through) or virtio (i.e. OVS) nic will be created.<br>
<br>
</div>Well, I think its the right way go. Rather than overloading the server<br>
flavor with hints about which PCI devices you could use.<br></blockquote><div><br></div><div>The issue here is that additional attach. Since for passthrough that isn't NICs (like crypto cards) you would almost certainly specify it in the flavor, if you did the same for NICs then you would have a preallocated pool of NICs from which to draw. The flavor is also all you need to know for billing, and the flavor lets you schedule. If you have it on the list of NICs, you have to work out how many physical NICs you need before you schedule (admittedly not hard, but not in keeping) and if you then did a subsequent attach it could fail because you have no more NICs on the machine you scheduled to - and at this point you're kind of stuck.<br>
Also, with the former, if you've run out of NICs, the already-extant resize call would allow you to pick a flavor with more NICs, and the VM can then be rescheduled to wherever resources are available to fulfil the new request - see the sketch below.
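For contrast, a minimal sketch of the flavor-led flow, reusing the pci_passthrough:alias extra-spec convention Nova already has for non-NIC devices; the alias name, flavor names and credentials below are all made up.

    from novaclient import client

    # Hypothetical credentials; assumes python-novaclient is installed.
    nova = client.Client("2", "USER", "PASSWORD", "PROJECT",
                         "http://keystone.example.invalid:5000/v2.0")

    # A flavor that pre-allocates two VFs from a (made-up) "niantic" PCI
    # alias: the scheduler can only place the instance on hosts with two
    # free matching devices, and billing falls out of the flavor as usual.
    two_nic = nova.flavors.create("m1.large.2nic", ram=8192, vcpus=4, disk=80)
    two_nic.set_keys({"pci_passthrough:alias": "niantic:2"})

    # Run out of NICs? Resize to a flavor with one more; the already-extant
    # resize call re-places the VM wherever the resources exist.
    three_nic = nova.flavors.create("m1.large.3nic", ram=8192, vcpus=4, disk=80)
    three_nic.set_keys({"pci_passthrough:alias": "niantic:3"})
    nova.servers.get("SERVER-UUID").resize(three_nic)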
One question here is whether Neutron should become a provider of billed resources (specifically passthrough NICs) in the same way as Cinder is of volumes - something we'd not discussed to date; we've largely worked on the assumption that NICs are like any other passthrough resource, just one where, once it's allocated out, Neutron can work magic with it.
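If we did go that way, the obvious place for the request would be the port itself rather than the server flavor. Something like the vnic_type attribute being discussed might fit; the attribute name and values here are assumptions, not a settled API.

    import requests  # reusing the hypothetical endpoint from the earlier sketch

    NEUTRON = "http://neutron.example.invalid:9696/v2.0"
    HEADERS = {"X-Auth-Token": "TOKEN"}

    # The NIC "flavor" lives on the port, so Neutron owns - and could
    # bill - the passthrough resource.
    port_create = {
        "port": {
            "network_id": "NET-UUID",
            "binding:vnic_type": "direct",  # passthrough VF; "normal" = virtio/OVS
        }
    }
    requests.post("%s/ports" % NEUTRON, json=port_create, headers=HEADERS)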
--
Ian.