[openstack-dev] [nova] Austin summit feature classification session recap
mriedem at linux.vnet.ibm.com
Sat May 7 01:24:23 UTC 2016
On Thursday morning John Garbutt led a session on feature classification
in Nova. The full etherpad is here.
We've had a concept of this in the Nova devref for a while.
The goals of the session were to agree on what this effort is trying to
fix and to figure out a plan for working on it.
The point of feature classification is to identify what features in Nova
are incomplete. This can mean they aren't fully tested, documented, etc.
The idea is to communicate to users and operators what works for their
technology choices, e.g. which hypervisor they use, shared vs non-shared
storage, and so on.
We also want it as a way to identify the gaps in testing and
documentation so we can work on closing those gaps. There are then
levels of completeness applied to a feature or scenario:
* Incomplete, e.g. cells v2
* Experimental, e.g. cells v1
* Complete, e.g. attach a volume to a server instance
* Complete and required, e.g. create and destroy a server instance
* Deprecated, e.g. nova-network
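The completeness levels above could be encoded as a simple enumeration.
This is a hypothetical sketch for illustration, not actual Nova code;
the class and value names are invented.

```python
from enum import Enum

class FeatureClassification(Enum):
    """Hypothetical encoding of the completeness levels from the session."""
    INCOMPLETE = "incomplete"                 # e.g. cells v2
    EXPERIMENTAL = "experimental"             # e.g. cells v1
    COMPLETE = "complete"                     # e.g. attach a volume to a server
    COMPLETE_REQUIRED = "complete-required"   # e.g. create/destroy a server
    DEPRECATED = "deprecated"                 # e.g. nova-network
```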
We can also use feature classification as a means to identify things
that need to be deprecated, e.g. agent builds.
We also talked about how best to present this information so it's
understandable to mere mortals.
We have the (hypervisor) feature support matrix already. That's
useful when you're drilling down into the lower level features that each
virt driver (and even architecture for a virt driver like libvirt, for
example) supports, but it's hard to parse from a high level.
So we agreed that for feature classification we'd start out with some
high-level use cases. For example, network function virtualization,
high-performance computing, pets (legacy application workloads) vs
cattle (dev/test) clouds, etc. This is sort of like the architecture
design guide. Then from those use cases we start filling out the
features you'd want for each one and then get into their level of
completeness.
For Newton, John wants to accomplish the following:
* Get the infrastructure in place for creating the document within Nova,
sort of like what we have for the feature support matrix, i.e. docs
built from an ini/json/yaml file.
* Identify the use case categories, e.g. NFV, HPC, etc.
* Break those down into feature categories, and classifications, based
on the existing hypervisor support matrix and DefCore.
* Then start filling out the table.
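As a rough illustration of the "docs built from an ini file" idea, the
sketch below parses invented feature sections into dicts that a docs job
could render as a table. The file layout, section names, and keys here
are assumptions, not the actual Nova format.

```python
import configparser

# Invented example data; the real section and key names used by Nova's
# docs build are not specified in this recap.
FEATURES_INI = """\
[feature.attach-volume]
title = Attach a volume to a server instance
classification = complete

[feature.cells-v1]
title = Cells v1
classification = experimental
"""

def load_features(ini_text):
    """Parse feature sections out of ini text into a list of dicts
    suitable for rendering as a docs table."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    features = []
    for section in parser.sections():
        if not section.startswith("feature."):
            continue
        features.append({
            "name": section.split(".", 1)[1],
            "title": parser.get(section, "title"),
            "classification": parser.get(section, "classification"),
        })
    return features
```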
John has an example proof-of-concept prototype here. Note that it's
built from the docs job and will probably be gone soon, so I have saved
an image of the table.
Future work will include:
* Populating links to existing test results which can be community infra
gate/check jobs/tests or third party CI results.
* Adding Tempest test uuids per feature and then cross referencing the
test uuids to recent test results to automatically calculate if a
feature is working or not.
* Linking to docs for each category.
* Adding warning log messages for any big gaps in testing, and
potentially proposing deprecation for some features unless testing is
added.
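The second future-work item above, cross-referencing Tempest test uuids
against recent results, might look something like this heuristic. The
function, result format, and status labels are all assumptions for
illustration only.

```python
def feature_status(test_uuids, recent_results):
    """Hypothetical heuristic: a feature counts as working only if every
    Tempest test mapped to it has a recent passing result.

    recent_results is assumed to map test uuid -> "pass"/"fail".
    """
    outcomes = [recent_results.get(uuid) for uuid in test_uuids]
    if any(outcome is None for outcome in outcomes):
        return "unknown"  # a mapped test has no recent result at all
    if all(outcome == "pass" for outcome in outcomes):
        return "working"
    return "broken"
```

For example, a feature mapped to two tests where one failed recently
would come back as "broken", while a feature whose tests all lack recent
results would be "unknown" rather than silently passing.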