[all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability
Sean Mooney
smooney at redhat.com
Tue Sep 17 14:43:53 UTC 2019
On Tue, 2019-09-17 at 15:39 +0100, Sean Mooney wrote:
> On Tue, 2019-09-17 at 07:26 -0700, Ghanshyam Mann wrote:
> > ---- On Tue, 17 Sep 2019 06:46:31 -0700 Sean Mooney <smooney at redhat.com> wrote ----
> > > On Tue, 2019-09-17 at 09:23 -0400, Eric Harney wrote:
> > > > On 9/16/19 6:59 PM, Sean Mooney wrote:
> > > > > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote:
> > > > > > >
> > > > > > > Backend/type specific information leaking out of the API dynamically like
> > > > > > > that is definitely an interoperability problem and as you said it sounds
> > > > > > > like it's been that way for a long time. The compute servers diagnostics API
> > > > > > > had a similar problem for a long time and the associated Tempest test for
> > > > > > > that API was disabled for a long time because the response body was
> > > > > > > hypervisor specific, so we eventually standardized it in a microversion so
> > > > > > > it was driver agnostic.
> > > > > > >
> > > > > >
> > > > > > Except this isn't backend specific information that is leaking. It's just
> > > > > > reflecting the configuration of the system.
> > > > >
> > > > > yes, and config-driven api behavior is also an interop problem.
> > > > > ideally you should not be able to tell from the api response at all
> > > > > whether cinder is backed by ceph or emc.
> > > > >
> > > > > sure, you might have a volume type called ceph and another called emc,
> > > > > but both should report capacity in the same field with the same unit.
> > > > >
> > > > > ideally you would have a snapshots or gigabytes quota and optionally
> > > > > associate that with a volume type, but snapshots_ceph is not
> > > > > interoperable across clouds if it exists with that name solely because
> > > > > ceph was used on the backend. as a client i would have to look at
> > > > > snapshots_* to figure out my quota, and in principle that is an
> > > > > unbounded set.
> > > >
> > > > I think you are confusing types vs backends here. In my example, it was
> > > > called "snapshots_ceph" because there was a type called "ceph". That's
> > > > an admin choice, not a behavior of the API.
> > > or it could have been expressed in the api with a dedicated type field.
> > >
> > > so you would always have a snapshots field regardless of the volume type,
> > > but have a single type field per quota set that identified which type it
> > > applied to.
> >
> > IMO, the best way is to make it a nested structure, where the
> > volume_type-specific quotas are optional items inside a mandatory
> > 'snapshots' field.
> > For example:
> >
> > {
> >     "quota_set": {
> >         ...
> >         "snapshots": {
> >             "total/project": 10,
> >             "ceph": -1,
> >             "lvm-thin": -1,
> >             "lvmdriver-1": -1
> >         }
> >     }
> > }
>
> well you can do it that way or invert it
>
> {
>     "quota_set": {
>         "ceph":    {"snapshots": -1, "gigabytes": 100, ...},
>         "lvm-1":   {"snapshots": -1, "gigabytes": 100, ...},
>         "lvm-2":   {"snapshots": -1, "gigabytes": 100, ...},
>         "project": {"snapshots": -1, "gigabytes": 100, ...}
>     }
> }
>
>
i meant to say i was originally thinking of it slightly differently, by having
a type column in each entry and the quota_set being a list:

{
    "quota_set": [
        {"snapshots": -1, "gigabytes": 100, "type": "ceph", ...},
        {"snapshots": -1, "gigabytes": 100, "type": "lvm-1", ...},
        {"snapshots": -1, "gigabytes": 100, "type": "lvm-2", ...},
        {"snapshots": -1, "gigabytes": 100, "type": "project", ...},
        ...
    ]
}

this is my preferred form of the 3 since you can validate the keys and values
easily with json schema, and it maps nicely to a db schema.
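as a rough sketch of why: with fixed field names and the volume type as an
opaque string, every quota_set entry matches one static schema. the dict below
is written in JSON Schema style (the jsonschema library would consume it
unchanged), with a tiny hand-rolled checker standing in for a full validator;
the field names and values are illustrative, not the actual cinder api.

```python
# One static schema covers every entry, no matter how many volume types
# exist, because the type is a value rather than part of the key names.
ITEM_SCHEMA = {
    "type": "object",
    "required": ["type", "snapshots", "gigabytes"],
    "properties": {
        "type": {"type": "string"},
        "snapshots": {"type": "integer", "minimum": -1},
        "gigabytes": {"type": "integer", "minimum": -1},
    },
}

def valid_item(item, schema=ITEM_SCHEMA):
    """Tiny stand-in for jsonschema.validate(), enough for this shape."""
    if not isinstance(item, dict):
        return False
    if any(key not in item for key in schema["required"]):
        return False
    py_types = {"string": str, "integer": int}
    for name, rule in schema["properties"].items():
        if name in item:
            if not isinstance(item[name], py_types[rule["type"]]):
                return False
            if "minimum" in rule and item[name] < rule["minimum"]:
                return False
    return True

quota_set = [
    {"type": "ceph", "snapshots": -1, "gigabytes": 100},
    {"type": "project", "snapshots": -1, "gigabytes": 100},
]
print(all(valid_item(i) for i in quota_set))  # prints True
```

with the dynamic-key forms, by contrast, the set of valid keys depends on the
deployment's volume types, so the schema cannot be written down once.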
> in either case the field names remain the same and the type is treated as an
> opaque string that is decoupled from the field names.
>
> > -gmann
> >
More information about the openstack-discuss mailing list