On Tue, 2019-09-17 at 09:23 -0400, Eric Harney wrote:
On 9/16/19 6:59 PM, Sean Mooney wrote:
On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote:
Backend/type-specific information leaking out of the API dynamically like that is definitely an interoperability problem, and as you said, it sounds like it's been that way for a long time. The compute server diagnostics API had a similar problem for a long time: the Tempest test for that API was disabled because the response body was hypervisor specific, so we eventually standardized it in a microversion to make it driver agnostic.
Except this isn't backend specific information that is leaking. It's just reflecting the configuration of the system.
yes, and config-driven api behavior is also an interop problem. ideally you should not be able to tell whether cinder is backed by ceph or emc from the api response at all.
sure, you might have a volume type called ceph and another called emc, but both should report capacity in the same field with the same unit.
ideally you would have a snapshots or gigabytes quota and optionally associate that with a volume type, but snapshots_ceph is not interoperable across clouds if it exists with that name solely because ceph was used on the backend. as a client i would have to look at snapshots_* to figure out my quota, and in principle that is an unbounded set.
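to make that concrete, here is a rough python sketch of what a client ends up doing today (the key names are illustrative, not the exact cinder response):

    # hypothetical quota-set response where the per-type keys are derived
    # from whatever volume type names the admin happened to create
    quota_set = {
        "snapshots": 10,
        "snapshots_ceph": 5,   # exists only because a type named "ceph" exists
        "snapshots_emc": 5,    # exists only because a type named "emc" exists
        "gigabytes": 1000,
    }

    # the client cannot know the full key set up front, so it has to
    # pattern-match on the prefix -- an unbounded, cloud-specific set
    per_type_snapshots = {
        key[len("snapshots_"):]: value
        for key, value in quota_set.items()
        if key.startswith("snapshots_")
    }
    print(per_type_snapshots)  # {'ceph': 5, 'emc': 5}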
I think you are confusing types vs backends here. In my example, it was called "snapshots_ceph" because there was a type called "ceph". That's an admin choice, not a behavior of the API.

or it could have been expressed in the api with a dedicated type field, so you would always have a snapshots field regardless of the volume type but a single type field per quota set that identified which type it applied to.
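something like this, purely a sketch of the idea and not an existing cinder api:

    # hypothetical per-type quota entries: the field names are fixed and
    # cloud-agnostic, and the volume type lives in its own field
    quota_sets = [
        {"volume_type": None,   "snapshots": 10, "gigabytes": 1000},  # project-wide
        {"volume_type": "ceph", "snapshots": 5,  "gigabytes": 500},
        {"volume_type": "emc",  "snapshots": 5,  "gigabytes": 500},
    ]

    # a client can now iterate over a stable schema no matter what
    # types the admin defined
    for qs in quota_sets:
        print(qs["volume_type"], qs["snapshots"])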