[openstack-dev] [Nova] FPGA as a resource

Qiming Teng tengqim at linux.vnet.ibm.com
Wed Apr 6 04:20:37 UTC 2016


Emm... finally this is brought up. We at IBM have already done some
work on FPGA/GPU resource management [1]. Let me bring the SMEs into
this discussion and see if together we can work out a concrete roadmap
to land this upstream.

Fei and Yonghua, this is indeed a very interesting topic for us.


[1] SuperVessel Cloud: https://ptopenlab.com/

Regards,
  Qiming

On Tue, Apr 05, 2016 at 02:27:30PM +0200, Roman Dobosz wrote:
> Hey all,
> 
> At yesterday's scheduler meeting the idea was raised of bringing
> FPGAs into OpenStack as a resource, which could then be exposed
> to VMs.
> 
> The use cases motivating this are pretty broad - having such chips
> available on the compute hosts can be beneficial both for consumers
> of the technology and for data center administrators. The possible
> applications are equally broad - the only limitations are human
> imagination and hardware capability - since FPGAs can accelerate
> algorithms ranging from compression and cryptography, through
> pattern recognition and transcoding, to voice/video analysis and
> processing, and everything in between. Offloading data processing
> to an FPGA may significantly reduce CPU utilization, execution time
> and power consumption, which is a benefit on its own.
> 
> On the OpenStack side, unlike with CPU or memory, an FPGA has to be
> programmed before a specific algorithm can actually be used on it.
> So a simplified scenario might go like this (a rough sketch in code
> follows the list):
> 
> * the user selects a VM image which supports acceleration,
> * the scheduler selects an appropriate compute host with an FPGA
>   available,
> * the compute host gets the request, programs the IP into the FPGA
>   and then boots the VM with the accelerator attached,
> * when the VM is removed, it may optionally erase the FPGA.
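> 
> Just to illustrate the flow, here is a minimal sketch in Python; all
> the names in it (Slot, Host, pick_host) are made up for the sake of
> the example, not an existing Nova interface:
> 
>     from dataclasses import dataclass, field
> 
>     @dataclass
>     class Slot:
>         region_type: str           # which bitstreams physically fit
>         bitstream_id: str = None   # currently programmed IP, if any
> 
>     @dataclass
>     class Host:
>         name: str
>         slots: list = field(default_factory=list)
> 
>     def pick_host(hosts, region_type):
>         """Find a (host, free slot) pair for this bitstream type."""
>         for host in hosts:
>             for slot in host.slots:
>                 if (slot.region_type == region_type
>                         and slot.bitstream_id is None):
>                     return host, slot
>         raise LookupError("no host with a matching free FPGA slot")
> 
>     hosts = [Host("cmp1", [Slot("arria10-pr")]), Host("cmp2")]
>     host, slot = pick_host(hosts, "arria10-pr")
>     slot.bitstream_id = "gzip-accel-v1"   # "program" the region
>     # ... boot the VM on host with the slot's device attached ...
>     slot.bitstream_id = None              # erase on delete (optional)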
> 
> As you can see, it does not seem too complicated at this point.
> However, it becomes more complex due to the following things we also
> have to take into consideration (a data model sketch follows the
> list):
> 
> * recent FPGAs are divided into regions (or slots), each of which
>   can be programmed separately
> * slots may or may not accept the same bitstream (the program the
>   FPGA is fed with, i.e. the IP)
> * there are several products around (Altera, Xilinx, others) with
>   mutually incompatible bitstreams - even between products of the
>   same vendor
> * libraries which abstract the hardware layer, like AAL [1], exist
>   in multiple versions
> * for some products there is a need to track the usage of memory
>   located on the PCI board
> * some FPGAs can be exposed using SR-IOV while others cannot, which
>   determines whether a single device can be shared by multiple
>   consumers
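> 
> To make those constraints concrete, here is a minimal data model
> sketch; every field name is invented for illustration:
> 
>     from dataclasses import dataclass, field
> 
>     @dataclass
>     class Bitstream:
>         id: str
>         vendor: str        # bitstreams do not cross vendors...
>         region_type: str   # ...and often not even slot types
>         aal_version: str = None   # required HAL version, if any
> 
>     @dataclass
>     class FpgaDevice:
>         pci_address: str
>         vendor: str
>         region_types: list = field(default_factory=list)
>         board_memory_mb: int = 0      # on-board memory to track
>         sriov_capable: bool = False   # can VFs go to guests?
> 
>     def fits(bs, dev):
>         """A bitstream only fits a same-vendor, same-type region."""
>         return (bs.vendor == dev.vendor
>                 and bs.region_type in dev.region_types)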
> 
> In other words, it may be necessary to incorporate additional actions:
> 
> * properly discover FPGAs and their capabilities
> * schedule the right bitstream onto a matching, unoccupied FPGA
>   device/slot
> * actually program the FPGA
> * provide libraries in the VM which are necessary for the user
>   program to interact with the exposed FPGA (or AAL) (this may be
>   optional, since the user can upload a complete image with
>   everything in place)
> * bitstream images have to be kept in some kind of service (Glance?)
>   with some way of identifying which image matches which FPGA (see
>   the metadata sketch below)
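> 
> If Glance were used for this, the matching could be driven by image
> properties (Glance v2 already allows arbitrary key/value properties
> on images); the property names below are invented for the example:
> 
>     # hypothetical properties on a bitstream image in Glance
>     image_props = {
>         "fpga_vendor": "altera",
>         "fpga_region_type": "arria10-pr",
>         "aal_version": "1.4",
>     }
> 
>     def matches(props, dev):
>         """Can a discovered device take this bitstream image?
> 
>         dev is whatever discovery reported, e.g.
>         {"vendor": "altera", "region_types": ["arria10-pr"],
>          "aal": "1.4"}.
>         """
>         return (props["fpga_vendor"] == dev["vendor"]
>                 and props["fpga_region_type"] in dev["region_types"]
>                 and props.get("aal_version") in (None, dev.get("aal")))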
> 
> All of that makes modelling the resource extremely complicated,
> in contrast to the CPU resource for example. I'd like to discuss how
> the goal of having reprogrammable accelerators in OpenStack can be
> achieved. Ideally I'd like to fit it into Jay's and Chris's work on
> resource-*.
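> 
> For instance, if FPGA slots were treated like other quantitative
> resources there, each compute could report an inventory per slot
> class, while the qualitative side (vendor, SR-IOV, which bitstreams
> fit) would have to be tracked separately; a loose sketch with a
> made-up slot class name:
> 
>     # total/used counts per hypothetical slot class on one host
>     inventory = {"FPGA_SLOT_ARRIA10": {"total": 2, "used": 1}}
> 
>     def claim(slot_class):
>         inv = inventory[slot_class]
>         if inv["used"] >= inv["total"]:
>             raise LookupError("no free %s slot" % slot_class)
>         inv["used"] += 1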
> 
> Looking forward to any comments :)
> 
> [1] http://www.intel.com/content/dam/doc/white-paper/quickassist-technology-aal-white-paper.pdf
> 
> -- 
> Cheers,
> Roman Dobosz
> 



