[OpenStack-DefCore] Proposal: Maximum Environment Knowledge to Test Minimum Interoperability
John Garbutt
john at johngarbutt.com
Wed Jun 17 16:41:32 UTC 2015
On 17 June 2015 at 15:41, Shamail <itzshamail at gmail.com> wrote:
> Hi Chris,
>
> How often will this maximum set of resources be re-evaluated? For example,
> if DefCore eventually starts testing for volume creation/attachment, then a
> new, modified recommendation may be necessary.
>
> Thanks,
> Shamail
>
>
>
> On Jun 16, 2015, at 8:42 PM, Chris Hoge <chris at openstack.org> wrote:
> DefCore has two technical conditions for a test to be graded as a required
> capability against the DefCore criteria. It must access public
> endpoints and must not require administrator credentials.
So I think we should go back to what we want: better interoperability.
I think Monty described this problem (as faced by infra) really well
(look about 15-20 minutes in):
https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/liberty-product-management-and-openstack-technology
I like the idea of focusing on some key use cases we want to test:
* start my instance from an existing Ubuntu 12.04 image
* upload my own image, and boot from that
* boot from my existing boot-from-volume disk
* upload an object into a new object storage container
Some of this will not be possible right now, and that sucks, but
looking at things this way helps highlight those problems, and that's
awesome.
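
To make that concrete, here is a minimal sketch of the object storage
use case above, using python-swiftclient. The endpoint, user, and
credentials are placeholders, and a Keystone v2 endpoint is assumed:

  import swiftclient

  # Connect as an ordinary (non-admin) user; all values are made up.
  conn = swiftclient.Connection(
      authurl='https://cloud.example.com:5000/v2.0',
      user='demo',
      key='secret',
      tenant_name='demo',
      auth_version='2',
  )

  # The use case itself: create a container and upload an object.
  conn.put_container('interop-demo')
  conn.put_object('interop-demo', 'hello.txt', contents=b'hello interop')

  # Round-trip check through the same public API.
  headers, body = conn.get_object('interop-demo', 'hello.txt')
  assert body == b'hello interop'

Note this needs nothing beyond an endpoint and one set of ordinary
user credentials.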
We can "flag" the tests that should work but just don't work yet,
maybe even list "missing" tests somehow. Then work with all the
projects to work with their users, so we get to a point where there is
a test to test all the steps of each use case.
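
To illustrate the "flagged" vs "missing" distinction, a hypothetical
sketch using plain unittest (the test name and reason string are made
up):

  import unittest

  class BootFromVolumeUseCase(unittest.TestCase):

      # Flagged: the use case is part of the standard, but the test is
      # known not to pass everywhere yet, so it is skipped and tracked
      # rather than silently dropped from the list.
      @unittest.skip("flagged: boot-from-volume not interoperable yet")
      def test_boot_from_existing_volume(self):
          self.fail("unreachable while flagged")

A "missing" test would simply not exist yet, and listing those gaps is
what tells each project where to focus.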
> While this helps to
> ensure that the required APIs and capabilities are actually callable by any
> user, it can implicitly place a burden on the end user to have a number of
> resources that may not actually be available to them.
> For example, API tests that check for tenant isolation require users across
> multiple tenants[1]. Tests also may require the implicit existence of
> resources such as tenant networks and machine images if administrator
> access is not available.
I would argue a lot of those tests are not that crucial for interop.
Possibly requesting two users is a way to test some of that, but
making that work means re-writing the Tempest tests.
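
As a rough sketch of what that could look like with two ordinary users
and no admin credentials, using python-novaclient (the users, tenants,
and auth URL are all hypothetical):

  from novaclient import client as nova_client

  AUTH_URL = 'https://cloud.example.com:5000/v2.0'

  # Two non-admin users in separate tenants.
  nova_a = nova_client.Client('2', 'user-a', 'secret-a', 'tenant-a',
                              AUTH_URL)
  nova_b = nova_client.Client('2', 'user-b', 'secret-b', 'tenant-b',
                              AUTH_URL)

  # User A creates a keypair; user B must not be able to see it.
  nova_a.keypairs.create('isolation-check')
  visible_to_b = [kp.name for kp in nova_b.keypairs.list()]
  assert 'isolation-check' not in visible_to_b

  # Clean up as user A.
  nova_a.keypairs.delete('isolation-check')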
> Currently the DefCore tests can be run against a public cloud without
> administrator access, but it implicitly requires the existence of:
>
> 1. An OpenStack endpoint.
> 2. Two guests in separate tenants.
> 3. Two independent machine images.
> 4. One public network, or two isolated tenant networks.
>
> The goal of this proposal is to explicitly define a maximum set of
> resources that can be used to test interoperability as part of the
> DefCore standard. Only tests that use at most the defined resources
> would be graded against the DefCore criteria. Ideally, this maximum
> set of resources would be an OpenStack cloud endpoint and non-admin
> user credentials to access it. However, there are resources that are
> required to have an operating cloud, but may need to be set up either
> by the provider, if admin credentials are needed, or by the user
> beforehand. As previously mentioned, two critical resources are a
> network and a machine image. My list of proposed resources is:
>
> 1. OpenStack endpoint: the public API endpoint to test against.
> 2. Guest user credentials: login credentials for the endpoint.
> 3. Network ID: the ID or name of a network available to the user to
> attach virtual machines to.
> 4. Image ID: the ID or name of a bootable machine image.
>
> That list is smaller than the implicit list required by Tempest, and
> represents the most basic resources needed for launching generic and
> portable applications across OpenStack clouds. By testing APIs against
> this standard, we can help to establish a set of common calls that
> will be used as a foundation for portable applications that run on top
> of OpenStack. One benefit of this approach is that it would allow
> users to quickly configure Tempest to test their clouds using a tool
> like the now-abandoned TCup, or even a web service that can
> automatically test clouds remotely.
>
> The maximum resources aren't intended to fully test the correctness of
> an OpenStack cloud. Indeed, one might want to ensure that a cloud is
> providing tenant isolation and resource protection. Nothing precludes
> this testing, and DefCore should continue to encourage collection of
> and reporting on all API test results to identify widely deployed
> capabilities.
>
> In support of interoperability, DefCore and Tempest should also map
> tests to API calls. While a DefCore capability tells you what
> functionality exists in a cloud, it provides no guidance on how to
> access that functionality. Focusing on tests rather than APIs gave us
> an easy way to bootstrap the testing process, but at the expense of
> obfuscating the path for application developers to know which APIs map
> to capabilities for building portable applications.
>
> My proposal is to define these resources as a future standard and
> begin by identifying existing tests that meet the standard. Then begin
> to phase out tests that don't meet the standard by working with the QA
> team to write new tests to match the capabilities, and drop required
> capabilities that don't meet the standard. One year should be
> sufficient for a complete and non-disruptive transition.
>
> -Chris
>
> [1] For example, test_create_keypair_in_analt_user_tenant
So this is all interesting, but without thinking about the bigger
picture of the use cases, I find it really hard to reason about.
I think the right maximum resource set will be emergent from the use
cases.
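
That said, the four resources Chris lists do map naturally onto the
"boot from an existing image" use case. A minimal sketch with
python-novaclient, assuming only those four inputs plus a flavor
discovered through the API (every value below is a placeholder):

  from novaclient import client as nova_client

  # The four proposed resources; all values are made up.
  AUTH_URL = 'https://cloud.example.com:5000/v2.0'     # 1. endpoint
  USER, PASSWORD, TENANT = 'demo', 'secret', 'demo'    # 2. credentials
  NETWORK_ID = '11111111-2222-3333-4444-555555555555'  # 3. network
  IMAGE_ID = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'    # 4. image

  nova = nova_client.Client('2', USER, PASSWORD, TENANT, AUTH_URL)

  # A flavor is not on the list, so discover one through the API.
  flavor = nova.flavors.list()[0]

  server = nova.servers.create(
      name='interop-boot-check',
      image=IMAGE_ID,
      flavor=flavor.id,
      nics=[{'net-id': NETWORK_ID}],
  )
  print('booted %s, status %s' % (server.id, server.status))

Interestingly, writing it out shows the flavor as a fifth implicit
resource, which is exactly the kind of gap the use-case view surfaces.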
Thanks,
John