[openstack-dev] [cyborg]Dublin Rocky PTG Summary

Zhipeng Huang zhipengh512 at gmail.com
Fri Mar 9 07:46:06 UTC 2018

 Hi Team,

Thanks to our topic leads' efforts, below is the aggregated summary of our
Dublin PTG session discussions. Please check it out and feel free to raise
any concerns you might have.

Queens Cycle Review
1. Adopt a milestone-based (MS) release method starting in Rocky to avoid chaos
2. Establish subteams alongside core team that could cover various
important aspects

   - doc team: lead - Li Liu, yumeng

   - release team: lead - howard, zhuli

   - driver team: lead - Shaohe, Dutch

3. Intel might consider setting up a third-party CI for its FPGA card for Cyborg
4. Promote Shaohe to core reviewer

Quota and Multi-tenancy Support
Etherpad: https://etherpad.openstack.org/p/cyborg-ptg-rocky-quota

1. Provide project and user level quota support
2. Treat all resources as the reserved resource type
3. Add quota engine and quota driver for the quota support
4. Tables: quotas, quota_usage, reservation
5. Transactions operation: reserve, commit, rollback

   - Concerns on rollback

   - Implement a two-stage reservation and rollback

   - reserve - commit - rollback (if failed)

6. Experiment with oslo.limit for quota/nested quota support from Keystone
(maybe slated for MS3)
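To make the reserve/commit/rollback transaction flow above concrete, here is a minimal in-memory sketch. All names (QuotaEngine, QuotaExceeded, the resource keys) are illustrative assumptions, not the actual Cyborg quota engine or driver API:

```python
class QuotaExceeded(Exception):
    """Raised when a reservation would exceed the project's limit."""


class QuotaEngine:
    """Hypothetical two-stage quota engine: reserve, then commit or rollback."""

    def __init__(self, limits):
        self.limits = dict(limits)            # e.g. {"fpga": 2}
        self.in_use = {r: 0 for r in limits}
        self.reserved = {r: 0 for r in limits}

    def reserve(self, resource, amount):
        """Stage 1: set resources aside without consuming them yet."""
        if self.in_use[resource] + self.reserved[resource] + amount > self.limits[resource]:
            raise QuotaExceeded(resource)
        self.reserved[resource] += amount
        return (resource, amount)

    def commit(self, reservation):
        """Stage 2, success path: move the reservation into in_use."""
        resource, amount = reservation
        self.reserved[resource] -= amount
        self.in_use[resource] += amount

    def rollback(self, reservation):
        """Stage 2, failure path: release the reservation."""
        resource, amount = reservation
        self.reserved[resource] -= amount


engine = QuotaEngine({"fpga": 2})
r1 = engine.reserve("fpga", 1)
engine.commit(r1)            # the operation succeeded
r2 = engine.reserve("fpga", 1)
engine.rollback(r2)          # the operation failed; free the slot again
```

The real tables (quotas, quota_usage, reservation) and transactional semantics would live in the database behind a quota driver; this only shows the state transitions.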

Programmability Support
Li Liu:


1. Security: two dimensions: at-rest/in-use, authentication and/or
encryption. Specific crypto algorithms, key lengths, and key storage should
be left to cloud operators and/or vendors. (Could consider interaction with
Barbican, which could be used for key management.)

   - At-rest (storage): Can Glance handle any authentication/encryption
   algorithm that an implementation wants?

   - In-use: Transfer from the repository to the compute node should be
   protected. This means the compute node or the FPGA itself is doing the
   decryption. Should the actual auth/decrypt be left to the vendor driver?

2. Licensing/policies: A cloud operator may want to set policies on image
usage and enforce licenses. I suggest this be left to the implementation as
well.
3. Repository: Glance is presumably the default. However, some operators
have gone the proprietary way but may want to use a standardized way in the
future. Do we want to enable a migration path for these folks to come to
Glance?
4. Overall flow:
    ComputeNode <--> [IP Policy Engine] <--> IP Repository
    Cyborg can define a standard API for ComputeNode <--> IP Policy Engine,
and IP Policy Engine <--> Repository.
5. A strawman for the API:

   - Request: Accelerator type, region type

   - Response: Image providing the accelerator type matching the region type

6. What if there is more than one image: A mechanism is needed to pick the
most suitable image based on the user's request, or just return warnings
when there are multiple hits.
7. There is broad consensus (and no objections) to allow for the
possibility of an 'IP Policy Engine' between the compute node and IP
repository (Glance), with well-defined APIs from Cyborg. This is expected
to enable the use cases above.
8. Add bitstream_uuid to the key-value pair list. This refers to the UUID
generated at synthesis time.
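The strawman API in point 5 can be sketched as a simple lookup: the request carries an accelerator type and a region type, and the response is the image (or images) matching both. Everything below is an illustrative stand-in; the function name and the in-memory list standing in for the Glance-backed repository are assumptions:

```python
def find_bitstreams(repository, accelerator_type, region_type):
    """Return images providing accelerator_type for region_type.

    Per point 6 above, a real IP Policy Engine would need a ranking
    mechanism (or at least a warning) when there are multiple hits;
    here we simply return all matches.
    """
    return [
        image for image in repository
        if image["accelerator_type"] == accelerator_type
        and image["region_type"] == region_type
    ]


# Hypothetical repository contents, keyed with the metadata fields
# discussed later in this summary.
repository = [
    {"bs-name": "aes-128", "accelerator_type": "crypto", "region_type": "KU115"},
    {"bs-name": "gzip-v2", "accelerator_type": "compression", "region_type": "KU115"},
]

hits = find_bitstreams(repository, "crypto", "KU115")
if len(hits) > 1:
    print("warning: multiple matching bitstreams")
```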

More Driver Support
1. Dutch will help lead the Xilinx driver development in the Queens cycle
2. Yumeng will confirm with her team about the clock driver motivation
3. Howard will contact the NVIDIA team about their driver support

Finishing Up Nova Cyborg Interaction
1. tentatively agreed flow:

   - Cyborg responsible for tracking available FPGA types/hardware and FPGA

   - The flavor will define the FPGA type/hardware, while the
   image/function will be defined on the glance image. The latter can be
   restricted to prevent users from providing their own images. It should be
   possible to state the required function/image in the flavor extra specs.

   - It is recommended to add traits for image/function capability for each
   device/region. This may result in a profusion of traits, but that helps
   Placement do more filtering up front. Having more traits scales better than
   having Placement return a large list of hosts which subsequent
   filters/weighers need to handle.

   - Placement is used to provide the FPGA type/hardware. This will filter
   out hosts that don't have the required hardware.

   - (Optional) Weighers used to attempt to favour hosts whose FPGAs already
   have the required image/function.

   - Once a host has been chosen, the FPGA programming will take place
   synchronously as part of the instance creation (like VIF, storage
   creation). os-acc will define the common interface for how Nova can do
   this.

2. Cyborg should get "the resource provider UUID" - which will surely
always resolve to the resource provider - rather than the compute hostname,
which may or may not resolve to it.
3. Cyborg creates the RPs; nova (in the scheduler in the usual way) creates
the allocations.  This (allocations by nova) is for both the during-spawn
and the post-spawn-attach case
4. An ``os-acc`` lib should be created to provide attach/detach ability for
accelerators.

   - This needs to work for things other than libvirt, please

   - Don't assume guest def is XML

   - Don't assume sysfs exists

   - Don't assume everything is PCI

   - something like os-vif

   - example of Nova glue to os-vif; note that it's not hypervisor specific
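Following the constraints listed above (no libvirt, XML, sysfs, or PCI assumptions), here is a minimal sketch of what an os-vif-style plugin interface for accelerators might look like. os-acc did not exist yet when this was written, so every name below is hypothetical:

```python
import abc


class AcceleratorHandle:
    """Opaque handle for an attached accelerator: deliberately neither a
    PCI address nor a sysfs path, so non-PCI devices can be represented."""

    def __init__(self, device_id, attach_info):
        self.device_id = device_id
        self.attach_info = attach_info  # driver-specific details


class AcceleratorPlugin(abc.ABC):
    """Per-vendor/per-hypervisor plugin, analogous to an os-vif plugin."""

    @abc.abstractmethod
    def attach(self, instance_uuid, device_id):
        """Attach a device to an instance; return an AcceleratorHandle."""

    @abc.abstractmethod
    def detach(self, instance_uuid, handle):
        """Detach the previously attached device."""


class FakePlugin(AcceleratorPlugin):
    """Trivial in-memory plugin, only to exercise the interface shape."""

    def __init__(self):
        self.attached = {}

    def attach(self, instance_uuid, device_id):
        self.attached[device_id] = instance_uuid
        return AcceleratorHandle(device_id, {"instance": instance_uuid})

    def detach(self, instance_uuid, handle):
        self.attached.pop(handle.device_id, None)


plugin = FakePlugin()
handle = plugin.attach("inst-1", "fpga-0")
plugin.detach("inst-1", handle)
```

The key design point from the notes is that Nova only ever sees the abstract plug/unplug calls; anything guest-definition-specific stays inside the plugin.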


Meta Data Standardization

1. A standardized set of metadata needs to be associated with bitstream images
2. Utilize the image_properties table in Glance
3. Each metadata item will be stored as a row in this table as a key-value
pair: column [name] holds the key whereas column [value] holds the value
4. Cyborg will standardize the key-value convention as follows::

    | name        | value (example) | nullable | description                                |
    |-------------|-----------------|----------|--------------------------------------------|
    | bs-name     | aes-128         | False    | Name of the bitstream                      |
    | bs-uuid     | {uuid}          | False    | The uuid generated during synthesis        |
    | vendor      | Xilinx          | False    | Vendor of the card                         |
    | board       | KU115           | False    | Board type for this bitstream to load      |
    | shell_id    | {uuid}          | True     | Required shell bs-uuid for this bitstream  |
    | version     | 1.0             | False    | Device version number                      |
    | driver      | SDX             | False    | Type of driver for this bitstream          |
    | driver_ver  | 1.0             | False    | Driver version                             |
    | driver_path | /path/to/driver | False    | Where to retrieve the driver binary        |
    | topology    | {CLOB}          | False    | Function topology                          |
    | description | description     | True     | Description                                |


   - [driver_path] specifies the location of the driver installation
   package for this bitstream

   - All the drivers related to the bitstream should be packaged in a tarball

   - There should be an installation script also packed in this tarball

   - The bitstream metadata will specify where this tarball file is located
   and send it to Cyborg

   - The vendor driver will untar the file and run the installation script

   - [shell_id] This field is a uuid pointing to the required shell
   bitstream uuid for loading this user logic bitstream. If it is null, this
   bitstream is a shell bitstream.

   - [topology] This field describes the topology of function structures
   after the bitstream is loaded on the FPGA. In particular, it uses JSON
   format to describe how physical functions and virtual functions are
   correlated to each other.
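Putting the convention above together, a bitstream image's properties might look like the dict below. The JSON structure chosen for [topology] is an assumption for illustration only; no schema was agreed at the PTG:

```python
import json

# Hypothetical PF/VF topology after the bitstream is loaded: one
# physical function exposing two virtual functions.
topology = {
    "physical_functions": [
        {"id": "pf0", "virtual_functions": ["vf0", "vf1"]},
    ],
}

# Key-value pairs as they would be stored in Glance's image_properties
# table, following the table above. Values are example placeholders.
image_properties = {
    "bs-name": "aes-128",
    "bs-uuid": "{uuid}",              # generated at synthesis time
    "vendor": "Xilinx",
    "board": "KU115",
    "shell_id": None,                 # null => this is itself a shell bitstream
    "version": "1.0",
    "driver": "SDX",
    "driver_ver": "1.0",
    "driver_path": "/path/to/driver", # tarball with drivers + install script
    "topology": json.dumps(topology), # stored as a JSON blob (CLOB)
    "description": "AES-128 crypto bitstream",
}

# Per the [shell_id] rule above: a null shell_id marks a shell bitstream.
is_shell = image_properties["shell_id"] is None
```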

Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado