<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 29, 2017 at 2:32 AM, Dan Smith <span dir="ltr"><<a href="mailto:dms@danplanet.com" target="_blank">dms@danplanet.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
In this series of patches we generalize the PCI framework to<br>
handle mdev devices. Admittedly it is a lot of patches, but most of<br>
them are small, and the logic behind them is basically to make the<br>
framework understand two new fields, MDEV_PF and MDEV_VF.<br>
</blockquote>
<br>
That's not really "generalizing the PCI framework to handle MDEV devices" :) More like it's just changing the /pci module to understand a different device management API, but ok.<br>
</blockquote>
<br></span>
Yeah, the series is adding more fields to our PCI structure to allow for more variations in the kinds of things we lump into those tables. This is my primary complaint with this approach, and has been since the topic first came up. I really want to avoid building any more dependency on the existing pci-passthrough mechanisms and focus any new effort on using resource providers for this. The existing pci-passthrough code is almost universally hated, poorly understood and tested, and something we should not be further building upon.<span class=""><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
In this series of patches we make the libvirt driver, as usual,<br>
report resources and attach the devices returned by the PCI manager.<br>
This part can be reused for resource providers.<br>
</blockquote>
<br>
Perhaps, but the idea behind the resource providers framework is to treat devices as generic things. Placement doesn't need to know about the particular device attachment status.<br>
</blockquote>
<br></span>
I quickly went through the patches and left a few comments. The base work of pulling some of this out of libvirt is there, but it's all focused on the act of populating pci structures from the vgpu information we get from libvirt. That code could be made to instead populate a resource inventory, but that's about the most of the set that looks applicable to the placement-based approach.<span class=""><br>
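To make the inventory idea concrete: the code that today fills pci structures from the libvirt vgpu information could instead build a placement-style inventory record. A minimal sketch, assuming the driver has already counted the available vGPUs; the function name and shape here are illustrative, not Nova's actual API, though the inventory fields (total, reserved, min_unit, max_unit, step_size, allocation_ratio) are the ones the placement API expects:

```python
# Illustrative sketch only: build a placement inventory entry for the
# VGPU resource class from a count the virt driver discovered.

def vgpu_inventory(total_vgpus, reserved=0):
    """Return a placement-style inventory dict for vGPUs."""
    if total_vgpus <= 0:
        # Nothing to expose; report no inventory for this class.
        return {}
    return {
        'VGPU': {
            'total': total_vgpus,
            'reserved': reserved,
            'min_unit': 1,
            'max_unit': total_vgpus,
            'step_size': 1,
            # vGPUs are physical units and cannot be oversubscribed.
            'allocation_ratio': 1.0,
        },
    }
```

Nothing here depends on the pci-passthrough tables; the driver only has to report what it has, and placement handles the accounting.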
<br></span></blockquote><div><br></div><div>I'll review them too.</div><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
As mentioned in IRC and the previous ML discussion, my focus is on the nested resource providers work and reviews, along with the other two top-priority scheduler items (move operations and alternate hosts).<br>
<br>
I'll do my best to look at your patch series, but please note it's lower priority than a number of other items.<br>
</blockquote>
<br></span>
FWIW, I'm not really planning to spend any time reviewing it until/unless it is retooled to generate an inventory from the virt driver.<br>
<br>
With the two patches that report vgpus, and then create guests with them when asked, converted to resource providers, I think that would be enough to have basic vgpu support immediately. No DB migrations, model changes, etc. required. After that, helping to get the nested-rps and traits work landed gets us the ability to expose attributes of the different types of those vgpus and opens up a lot of possibilities. IMHO, that's the work I'm interested in reviewing.<span class=""><br></span></blockquote><div><br></div><div>That's exactly what I would like to provide for Queens, so operators would have the possibility of creating flavors that ask for vGPU resources in Queens, even if they couldn't yet ask for a specific vGPU type (or ask to be in the same NUMA cell as the CPU). The latter definitely needs nested resource providers, but the former (just having vGPU resource classes provided by the virt driver) is possible for Queens.</div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
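For context on what "flavors asking for vGPU resources" means: placement-aware flavors request resource classes via <code>resources:&lt;CLASS&gt;</code> extra specs. A hedged sketch of how such extra specs translate into a resource request; the helper below is hypothetical and only illustrates the mapping, it is not Nova's actual scheduler code:

```python
# Hypothetical helper: collect "resources:<CLASS>=<amount>" flavor
# extra specs into a dict of resource-class requests for placement.

def resources_from_extra_specs(extra_specs):
    """Map resources:<CLASS> extra specs to {class: amount}."""
    resources = {}
    for key, value in extra_specs.items():
        if key.startswith('resources:'):
            resource_class = key[len('resources:'):]
            resources[resource_class] = int(value)
    return resources

# A flavor requesting one vGPU; other extra specs are ignored here.
flavor_extra_specs = {'resources:VGPU': '1', 'hw:cpu_policy': 'dedicated'}
print(resources_from_extra_specs(flavor_extra_specs))  # {'VGPU': 1}
```

With that in place, asking for a vGPU is just one more entry in the flavor, with no pci whitelist or alias machinery involved.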
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
One thing that would be very useful, Sahid, if you could get with Eric Fried (efried) on IRC and discuss with him the "generic device management" system that was discussed at the PTG. It's likely that the /pci module is going to be overhauled in Rocky and it would be good to have the mdev device management API requirements included in that discussion.<br>
</blockquote>
<br></span>
Definitely this.<span class="HOEnZb"><font color="#888888"><br></font></span></blockquote><div><br></div><div>++</div><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="HOEnZb"><font color="#888888">
<br>
--Dan</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
____________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div></div>