[OpenStack-DefCore] On Requiring Vendors to Submit Testbed Data

Mark Voelker mvoelker at vmware.com
Tue Aug 25 13:33:48 UTC 2015


Hello DefCore,

At the DefCore midcycle sprint in Austin a few weeks ago, we kicked off a discussion about requiring vendors to submit information about their testbeds along with their test results when applying for OpenStack Powered (TM) logo status.  The aim is to help users understand the requirements of deployable products and the resources available on hosted products.  The proposed change is here:

https://review.openstack.org/#/c/207209/

We’ve since discussed it a few more times during weekly meetings, including here:

http://eavesdrop.openstack.org/meetings/defcore/2015/defcore.2015-08-12-14.59.log.html#l-108

I took an action item to bring this up on the mailing list again, since we don’t seem to be reaching consensus yet.  To summarize, the patch states:

"The objective is for users to understand the requirements for deployable products and to know the resources available on hosted products."

It requires vendors to submit information about the storage, virtualization, and network backends; operating systems; number of nodes in the system; network requirements; a “yes/no” as to whether the configuration is “highly available”; whether IPv4 and IPv6 are supported; and the Tempest and policy configurations of the system against which refstack-client was run.
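For concreteness, here’s a rough sketch (in Python, purely illustrative) of the kind of testbed record the change would have vendors attach to a submission.  The field names below are my own invention, not the schema from the patch:

    # Hypothetical testbed metadata attached to a test result submission.
    # Field names are illustrative only; see
    # https://review.openstack.org/#/c/207209/ for the actual proposal.
    testbed_metadata = {
        "storage_backend": "ceph",             # e.g. ceph, lvm, san
        "hypervisor": "kvm",                   # virtualization backend
        "network_backend": "neutron-ovs",      # networking driver/plugin
        "operating_system": "ubuntu-14.04",
        "node_count": 12,                      # nodes in the tested system
        "network_requirements": "2x10GbE per node",
        "highly_available": False,             # the proposal's yes/no flag
        "ipv4_supported": True,
        "ipv6_supported": False,
        "tempest_config": "etc/tempest.conf",  # Tempest config for the run
        "policy_config": "etc/policy.json",    # policy files in effect
    }

Note that even a complete record like this describes exactly one tested configuration, which is at the heart of the concerns below.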

The major objections to the proposal boil down to two things (see the review comments and IRC discussion above for more):

1.)  Most of the information being requested has little to do with interoperability from an application deployment point of view, and therefore seems out of place.
2.)  The information requested paints an extremely incomplete picture of both product requirements and resources available and therefore fails to accomplish the stated objective.  

The latter point is probably the most salient, so I’ll expand on it a bit (again, see links above for fuller comments):

2a.)  Many deployable products can be run on a variety of different backends and operating systems.  Some have different high availability modes, and most have more than one reference architecture (each with different minimum requirements, some targeted at specific use cases such as NFV or at different environments, e.g. small dev vs. large prod).  Many also permit the use of different backing technologies in different parts of the cloud (for example, I might have one host aggregate backed by local disk and another backed by a SAN, or one region backed by KVM and another backed by VMware).  The environment that a single test run was conducted against will at best capture one possible permutation of factors like these (a back-of-the-envelope sketch follows item 2c below).

2b.)  We have always advised in the past that the tests not be run against a production system [1][2] because Tempest leaves artifacts in the environment after it finishes.  For public cloud or hosted products in particular, this means that the sizing information accompanying submitted test results is very unlikely to reflect “resources available for hosted products”.  Public cloud providers may also be reluctant to give out sizing information for production systems, as it may be considered sensitive/strategic information, and it probably changes frequently in any case (e.g. if a public cloud starts running low on compute capacity, they’re probably going to simply buy more servers).  Some hosted products are also customized to the needs of the customer, so sizing is a moving target: the amount of compute/storage/etc. available may depend on the limits of my credit card.

2c.) The set of information requested here is also probably incomplete with respect to the requirements of many products.  For example, a vendor might require a backend hypervisor version no older than X and no newer than Y, a particular type of hardware, or a particular piece of configuration management software.  Generally speaking, most vendors have data sheets or HCLs (hardware compatibility lists) that provide exactly this type of information for prospective customers; the results of a single test run are unlikely to duplicate them completely and may simply create “multiple sources of truth”, one of which is likely incomplete.  Putting my vendor hat on for a moment, I’d hate for information like this to misrepresent my product’s capabilities.  Putting my end user hat on, I’d hate to think I understood product requirements after reading the data, make plans, and then find out I’d been given a false impression.
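To put rough numbers on 2a, here’s a back-of-the-envelope sketch in Python.  The option lists are entirely made up for illustration and don’t describe any real product:

    # Illustration of why a single test run captures only one
    # permutation of a product's supported configurations.
    # All option lists below are hypothetical.
    import itertools

    storage_backends = ["ceph", "lvm", "san"]
    hypervisors = ["kvm", "vmware"]
    operating_systems = ["ubuntu", "rhel", "sles"]
    ha_modes = ["none", "active-passive", "active-active"]

    permutations = list(itertools.product(
        storage_backends, hypervisors, operating_systems, ha_modes))
    print("Supported permutations: %d" % len(permutations))  # 54
    print("Permutations captured by one test run: 1")

Even this toy product has 54 valid configurations, and a single result submission describes at most one of them.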

So, all that said, several folks believe that collecting the information required by this patch doesn’t really help us meet its stated objective.  One idea that has been floated as a potentially better way to help customers understand requirements and available resources is to use the OpenStack Marketplace: for example, when a vendor applies to have a listing on the Marketplace, they might be required to submit product requirement information then, or links to existing datasheets/product requirements documents.  This approach would not limit us to the particulars of a single test run, and would have the additional benefit that it could potentially apply to all vendors with a Marketplace listing rather than just OpenStack Powered (TM) products.

I’ll note that the Marketplace is somewhat outside the scope of the DefCore Committee (though vendors who acquire OpenStack Powered (TM) status by adhering to our Guidelines do get special recognition there).  I think that’s actually an advantage as well since, per #1 above, a lot of product requirements and capacity information has little to do with application-level interoperability anyway.  I’d prefer that DefCore focus on those concerns first, as we have a lot of work yet to do there.

[1] http://git.openstack.org/cgit/openstack/defcore/tree/2015.04/procedure.rst#n77
[2] http://git.openstack.org/cgit/openstack/defcore/tree/2015.05/procedure.rst#n77

At Your Service,

Mark T. Voelker




