[openstack-dev] [Nova] FPGA as a resource
Zhipeng Huang
zhipengh512 at gmail.com
Wed Apr 6 08:12:16 UTC 2016
Hi Roman,
You are actually touching on something we have been working on. There is a
team in the OPNFV DPACC project that has been working on acceleration-related
topics, including folks from CMCC, Intel, ARM, Freescale, and Huawei. We found
out that in order to have acceleration working under NFV scenarios, beyond
Nova's and Neutron's support we also need a standalone service that manages
the accelerators themselves.
That means we want to treat accelerators, with FPGAs being an important part
of them, as first-class resource citizens, and we want to be able to do life
cycle management and scheduling on acceleration resources.
Based upon that requirement, we started a new project called Nomad [1] in
January this year to serve as an OpenStack service for distributed
acceleration management.
We've just started the project and are currently discussing the first BP [2].
We have a team working on IPsec-based accelerator management, and we would
love to have more people work on topics like FPGA.
We also have a talk on introducing Nomad accepted for the Austin Summit [3].
You are more than welcome to join the conversation :)
[1] https://wiki.openstack.org/wiki/Nomad
[2] https://review.openstack.org/#/c/284304/
[3] https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=nomad
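To give a concrete (if rough) feel for what we mean by treating an accelerator
as a first-class resource, here is a minimal sketch of the kind of per-device
record and life cycle a service like Nomad would have to keep. All names below
are made up for illustration; they are not the schema proposed in the BP [2]:

    # Rough sketch only -- field names are hypothetical, not the Nomad BP schema.
    from enum import Enum

    class AcceleratorState(Enum):
        AVAILABLE = "available"      # discovered on a host, not attached
        PROGRAMMING = "programming"  # bitstream / firmware being loaded
        IN_USE = "in-use"            # attached to an instance
        ERROR = "error"

    class Accelerator(object):
        """One acceleration resource (an FPGA slot, a crypto engine, ...)."""

        def __init__(self, uuid, host, device_type, capabilities):
            self.uuid = uuid                  # identity used for scheduling
            self.host = host                  # compute node owning the device
            self.device_type = device_type    # e.g. "fpga", "ipsec-offload"
            self.capabilities = capabilities  # traits the scheduler can match on
            self.state = AcceleratorState.AVAILABLE
            self.instance_uuid = None         # set while attached to a guest

        def attach(self, instance_uuid):
            # Life cycle management: only an available accelerator can be claimed.
            if self.state is not AcceleratorState.AVAILABLE:
                raise RuntimeError("accelerator %s is not available" % self.uuid)
            self.state = AcceleratorState.IN_USE
            self.instance_uuid = instance_uuid

        def detach(self):
            self.state = AcceleratorState.AVAILABLE
            self.instance_uuid = None

The point is simply that the service owns the accelerator's state and life
cycle, independent of whatever Nova does with the host it sits on.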
On Wed, Apr 6, 2016 at 1:34 PM, Roman Dobosz <roman.dobosz at intel.com> wrote:
> On Tue, 5 Apr 2016 13:58:44 +0100
> "Daniel P. Berrange" <berrange at redhat.com> wrote:
>
> > Along similar lines we have proposals to add vGPU support to Nova,
> > where the vGPUs may or may not be exposed using SR-IOV. We also want
> > to be able to on the fly decide whether any physical GPU is assigned
> > entirely to a guest as a full PCI device, or whether we only assign
> > individual "virtual functions" of the GPU. This means that even if
> > the GPU in question does *not* use SR-IOV, we still need to track
> > the GPU and vGPUs in the same way as we track PCI devices, so that
> > we can avoid assigning a vGPU to the guest, if the underlying physical
> > PCI device is already assigned to the guest.
>
> That's correct. I'd like to mention that FPGAs can also be exposed in ways
> other than PCI (like in Xeon+FPGA). Not sure if this also applies
> to GPUs.
>
> > I can see we will have much the same issue with FPGAs, where we may
> > either want to assign the entire physical PCI device to a guest, or
> > just assign a particular slot in the FPGA to the guest. So even if
> > the FPGA is not using SR-IOV, we need to tie this all into the PCI
> > device tracking code, as we are intending for vGPUs.
> >
> > All in all, I think we probably ought to generalize the PCI device
> > assignment modelling so that we're actually modelling generic
> > hardware devices which may or may not be PCI based, so that we can
> > accurately track the relationships between the devices.
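Just to make that concrete: here is a rough, illustrative sketch (not Nova's
actual PCI tracking code; all names are invented) of a generalized
assignable-device record with parent/child links. It also captures the
exclusion described above, i.e. a vGPU or FPGA slot cannot be handed out if
the whole physical device is already assigned, and vice versa:

    # Illustrative only -- not Nova's real device tracking model.
    class AssignableDevice(object):
        """A generic assignable device; a PCI address is optional."""

        def __init__(self, name, address=None, parent=None):
            self.name = name          # e.g. "gpu0", "gpu0-vf1", "fpga0-slot2"
            self.address = address    # PCI address if the device has one, else None
            self.parent = parent      # the physical device this one is carved out of
            self.children = []        # vGPUs / FPGA slots carved out of this one
            self.assigned_to = None   # instance UUID once claimed
            if parent is not None:
                parent.children.append(self)

        def claimable(self):
            # A device can be assigned only if neither it, its parent,
            # nor any of its children has already been assigned.
            if self.assigned_to or (self.parent and self.parent.assigned_to):
                return False
            return not any(child.assigned_to for child in self.children)

    # A physical FPGA exposed as a PCI device, with two programmable slots:
    fpga = AssignableDevice("fpga0", address="0000:81:00.0")
    slot0 = AssignableDevice("fpga0-slot0", parent=fpga)
    slot1 = AssignableDevice("fpga0-slot1", parent=fpga)

    slot0.assigned_to = "instance-uuid"
    assert not fpga.claimable()   # whole device is blocked once a slot is in use
    assert slot1.claimable()      # but the sibling slot is still free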
> >
> > With NIC devices we're also seeing a need to expose capabilities
> > against the PCI devices, so that the scheduler can be more selective
> > in deciding which particular devices to assign, e.g. so we can distinguish
> > between NICs which support RDMA and those which don't, or identify NICs
> > with hardware offload features, and so on. I can see this need to
> > associate capabilities with devices is something that will likely
> > be needed for the FPGA scenario, and vGPUs too. So again this points
> > towards more general purpose modelling of assignable hardware devices
> > beyond the limited PCI device modelling we've got today.
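For illustration (again only a sketch, and not Nova's actual PCI alias or
whitelist syntax), capability tags plus a trivial scheduler-side filter could
look like this:

    # Illustrative only -- hypothetical capability matching, not Nova's syntax.
    from collections import namedtuple

    Device = namedtuple("Device", ["name", "address", "capabilities"])

    def devices_matching(devices, required):
        """Return the devices whose capability set covers everything requested."""
        return [d for d in devices if required <= d.capabilities]

    nics = [
        Device("nic0", "0000:03:00.0", {"rdma", "tso-offload"}),
        Device("nic1", "0000:03:00.1", {"tso-offload"}),
    ]

    print(devices_matching(nics, {"rdma"}))   # only nic0 is RDMA-capable

The same kind of tags would carry FPGA-specific information, such as which
functions a given slot is able to host.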
> >
> > Looking to the future I think we'll see more use cases for device
> > assignment appearing for other types of device.
> >
> > IOW, I think it would be a mistake to model FPGAs as a distinct
> > object type on their own. Generalization of assignable devices
> > is the way to go.
>
> That's why I've brought the topic up here on the list, so we can think about
> similar devices which could be generalized into one common accelerator
> type, or even think about modeling PCI as such.
>
> > > All of that makes modelling the resource extremely complicated, contrary to
> > > the CPU resource for example. I'd like to discuss how the goal of having
> > > reprogrammable accelerators in OpenStack can be achieved. Ideally I'd
> > > like to fit it into Jay and Chris's work on resource-*.
> > I think you shouldn't look at the FPGAs as being like CPU resource, but
> > rather look at them as a generalization of PCI device assignment.
>
> CPU in this context was only an example of an "easy" resource, which
> doesn't need any preparation before a VM can use it :)
>
> --
> Cheers,
> Roman Dobosz
>
--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen
(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402
OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado