<div dir="ltr">Hi Roman,<div><br></div><div>You are actually touching on something we have been working on. There is a team in OPNFV DPACC project has been working acceleration related topics, including folks from CMCC, Intel, ARM, Freescale, Huawei. We found out that in order to have acceleration working under NFV scenrios, other than Nova and Neutron's support, we also need a standalone service that manage accelerators itself.</div><div><br></div><div>That means we want to treat accelerators, and FPGA being an important part of it, as a first class resource citizen and we want to be able to do life cycle management and scheduling on acceleration resources.</div><div><br></div><div>Based upon that requirement we started a new project called Nomad [1] on Jan this year, to serve as an OpenStack service for distributed acceleration management. </div><div><br></div><div>We've just started the project, and currently discussing the first BP [2]. We have a team working on IP-SEC based accelerator mgmt, and would love to have more people to work on topics like FPGA.</div><div><br></div><div>We also have a topic on introducing Nomad accepted in Austin Summit [3].</div><div><br></div><div>You are more than welcomed to join the conversation : )</div><div><br></div><div>[1] <a href="https://wiki.openstack.org/wiki/Nomad" style="font-size:14px">https://wiki.openstack.org/wiki/Nomad</a></div><div>[2] <a href="https://review.openstack.org/#/c/284304/" style="font-size:14px">https://review.openstack.org/#/c/284304/</a></div><div>[3] <a href="https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=nomad">https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=nomad</a> </div><div><br></div><div id="item_1459912160259" class=""><div id="item-body_1459912160259" class="">
<div id="loading_1459912160259" class="" title="重新下载"></div><a id="msg-error_1459912160259" title="重新发送"></a><a id="msg-sending_1459912160259"></a></div></div><div id="item_1459912160258" class=""><div id="item-body_1459912160258" class="">
<div id="loading_1459912160258" class="" title="重新下载"></div><a id="msg-error_1459912160258" title="重新发送"></a><a id="msg-sending_1459912160258"></a></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Apr 6, 2016 at 1:34 PM, Roman Dobosz <span dir="ltr"><<a href="mailto:roman.dobosz@intel.com" target="_blank">roman.dobosz@intel.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Tue, 5 Apr 2016 13:58:44 +0100<br>
<span class="">"Daniel P. Berrange" <<a href="mailto:berrange@redhat.com">berrange@redhat.com</a>> wrote:<br>
<br>
</span><span class="">> Along similar lines we have proposals to add vGPU support to Nova,<br>
> where the vGPUs may or may not be exposed using SR-IOV. We also want<br>
> to be able to on the fly decide whether any physical GPU is assigned<br>
> entirely to a guest as a full PCI device, or whether we only assign<br>
> individual "virtual functions" of the GPU. This means that even if<br>
> the GPU in question does *not* use SR-IOV, we still need to track<br>
> the GPU and vGPUs in the same way as we track PCI devices, so that<br>
> we can avoid assigning a vGPU to the guest, if the underlying physical<br>
> PCI device is already assigned to the guest.<br>
<br>
</span>That's correct. I'd like to mention that FPGAs can also be exposed<br>
in other ways than PCI (as in Xeon+FPGA). Not sure whether this also<br>
applies to GPUs.<br>
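<br>
To make the constraint Daniel describes concrete, here is a rough, hypothetical<br>
sketch (not Nova's actual PCI tracker, just the conflict rule between a physical<br>
device and the virtual functions carved out of it):<br>
<pre>
# Hypothetical sketch, not Nova code: a function must not be handed out while
# its parent is assigned whole, and a whole device must not be handed out
# while any of its virtual functions (vGPU, VF, FPGA slot) are in use.

class Device(object):
    def __init__(self, address, parent=None):
        self.address = address      # PCI address or any other locator
        self.parent = parent        # physical device this function belongs to
        self.children = []          # virtual functions carved out of it
        self.assigned_to = None     # instance UUID, or None if free
        if parent is not None:
            parent.children.append(self)

    def can_assign(self):
        if self.assigned_to is not None:
            return False
        # A whole device is blocked while any of its functions are in use.
        if any(child.assigned_to for child in self.children):
            return False
        # A function is blocked while its parent is assigned whole.
        if self.parent is not None and self.parent.assigned_to is not None:
            return False
        return True

    def assign(self, instance_uuid):
        if not self.can_assign():
            raise ValueError("%s is not free for assignment" % self.address)
        self.assigned_to = instance_uuid


gpu = Device("0000:84:00.0")                  # physical GPU, SR-IOV or not
vgpu = Device("0000:84:00.0-vgpu0", parent=gpu)
vgpu.assign("instance-1")
print(gpu.can_assign())                       # False: a vGPU is already in use
</pre>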
<span class=""><br>
> I can see we will have much the same issue with FPGAs, where we may<br>
> either want to assign the entire physical PCI device to a guest, or<br>
> just assign a particular slot in the FPGA to the guest. So even if<br>
> the FPGA is not using SR-IOV, we need to tie this all into the PCI<br>
> device tracking code, as we are intending for vGPUs.<br>
><br>
> All in all, I think we probably ought to generalize the PCI device<br>
> assignment modelling so that we're actually modelling generic<br>
> hardware devices which may or may not be PCI based, so that we can<br>
> accurately track the relationships between the devices.<br>
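(Purely as an illustration of such a generalized record -- not an existing Nova<br>
model -- the point being that the locator is no longer assumed to be a PCI address:)<br>
<pre>
# Illustrative only: a generic "assignable device" record. The bus/locator pair
# replaces the assumption that every device is addressed as a PCI function.

import collections

AssignableDevice = collections.namedtuple(
    "AssignableDevice", ["bus", "locator", "dev_type", "parent"])

devices = [
    AssignableDevice("pci",         "0000:05:00.0", "GPU",       parent=None),
    AssignableDevice("vgpu",        "vgpu-0",       "vGPU",      parent="0000:05:00.0"),
    AssignableDevice("fpga-region", "region-1",     "FPGA slot", parent="socket-1"),
]

# Non-PCI devices are tracked in exactly the same structure as PCI ones.
print([d.locator for d in devices if d.bus != "pci"])
</pre>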
><br>
> With NIC devices we're also seeing a need to expose capabilities<br>
> against the PCI devices, so that the scheduler can be more selective<br>
> in deciding which particular devices to assign, e.g. so we can distinguish<br>
> between NICs which support RDMA and those which don't, or identify NIC<br>
> with hardware offload features, and so on. I can see this need to<br>
> associate capabilities with devices is something that will likely<br>
> be needed for the FPGA scenario, and vGPUs too. So again this points<br>
> towards more general purpose modelling of assignable hardware devices<br>
> beyond the limited PCI device modelling we've got today.<br>
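(A tiny illustration of capability-aware selection; the trait names below are<br>
invented for the example and this is not an existing scheduler filter:)<br>
<pre>
# Invented trait names, purely for illustration -- not an existing Nova filter.

DEVICES = [
    {"address": "0000:05:00.0", "type": "NIC",  "capabilities": {"rdma", "tso"}},
    {"address": "0000:06:00.0", "type": "NIC",  "capabilities": {"tso"}},
    {"address": "0000:84:00.0", "type": "FPGA", "capabilities": {"crypto-offload"}},
]

def find_devices(dev_type, required):
    """Return devices of the given type exposing all required capabilities."""
    required = set(required)
    return [d for d in DEVICES
            if d["type"] == dev_type and required <= d["capabilities"]]

print(find_devices("NIC", {"rdma"}))   # only the RDMA-capable NIC matches
</pre>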
><br>
> Looking to the future I think we'll see more usecases for device<br>
> assignment appearing for other types of device.<br>
><br>
> IOW, I think it would be a mistake to model FPGAs as a distinct<br>
> object type on their own. Generalization of assignable devices<br>
> is the way to go.<br>
<br>
</span>That's why I've brought the topic here on the list, so we can think about<br>
similar devices which could be generalized into one common accelerator<br>
type, or even think about modeling PCI as such.<br>
<span class=""><br>
> > All of that makes modelling resources extremely complicated, contrary to<br>
> > CPU resources for example. I'd like to discuss how the goal of having<br>
> > reprogrammable accelerators in OpenStack can be achieved. Ideally I'd<br>
> > like to fit it into Jay and Chris work on resource-*.<br>
> I think you shouldn't look at the FPGAs as being like a CPU resource, but<br>
> rather look at them as a generalization of PCI device assignment.<br>
<br>
</span>CPU in this context was only an example of an "easy" resource, which<br>
doesn't need any preparation before a VM can use it :)<br>
<span class="HOEnZb"><font color="#888888"><br>
--<br>
Cheers,<br>
Roman Dobosz<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr">Zhipeng (Howard) Huang</div><div dir="ltr"><br></div><div dir="ltr">Standard Engineer</div><div>IT Standard & Patent/IT Product Line</div><div dir="ltr">Huawei Technologies Co., Ltd</div><div dir="ltr">Email: <a href="mailto:huangzhipeng@huawei.com" target="_blank">huangzhipeng@huawei.com</a></div><div dir="ltr">Office: Huawei Industrial Base, Longgang, Shenzhen</div><div dir="ltr"><br></div><div dir="ltr">(Previous)<br><div>Research Assistant</div><div>Mobile Ad-Hoc Network Lab, Calit2</div><div>University of California, Irvine</div><div>Email: <a href="mailto:zhipengh@uci.edu" target="_blank">zhipengh@uci.edu</a></div><div>Office: Calit2 Building Room 2402</div><div><br></div><div>OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado</div></div></div></div></div></div></div>
</div>