[openstack-dev] [all] Capability Discovery API

Andrew Mann andrew at divvycloud.com
Wed Mar 18 23:56:26 UTC 2015

Here's a possibly relevant use case for this discussion:

1) Running Icehouse OpenStack
2) Keystone reports v3.0 auth capabilities
3) If you actually use v3.0 auth, then any nova call that gets passed
through to cinder fails, because the Icehouse code cannot parse the v3.0
service catalog format

Because there is only a limited ability to interrogate OpenStack and
determine what is running, we have to auth with v3 and then make a
volume-related nova call to see whether it fails. Only then can we go down
code paths that work around the OpenStack bugs in the presumed version. If
a more robust API for determining the running components and their
capabilities were available, this situation would be easier to deal with.
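The probe-and-fallback approach described above could be sketched like this. Nothing here is a real OpenStack SDK call; the callables are stand-ins for "auth with v3, then make a volume-related nova call", and the exception type is an assumption modelling the catalog-parsing failure:

```python
# Hypothetical sketch of the probe-and-fallback pattern; the callables
# stand in for real client calls, and RuntimeError models the Icehouse
# "cannot parse the v3 service catalog" failure.

def detect_v3_catalog_support(volume_call):
    """Issue a throwaway volume-related call after a v3 auth.

    Returns True if it succeeds, False if it fails the way the
    Icehouse catalog-parsing bug does.
    """
    try:
        volume_call()
        return True
    except RuntimeError:
        return False

def list_volumes(volume_call, v3_path, workaround_path):
    """Pick a code path based on the probe result."""
    if detect_v3_catalog_support(volume_call):
        return v3_path()
    return workaround_path()
```

The point is that today the probe itself is an API call that is expected to fail, rather than an explicit capability query.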

The main point is that a capabilities API requires an absolutely flawless
implementation to be sufficient: it fails if a capability is reported as
available but the implementation in that particular release has a bug. The
version of the implementation code therefore also needs to be exposed
through the API, so that consumers can know when issues are present and
work around them.
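Concretely, a consumer of such an API would cross-reference each reported capability against the exposed implementation version. This is an illustrative sketch only; the capability names, version strings, and known-broken list are made up for the example:

```python
# Illustrative sketch: trust a reported capability only if the exposed
# implementation version is not on a known-broken list. All names and
# versions here are hypothetical.

KNOWN_BROKEN = {
    ("volume-attach", "icehouse"),  # hypothetical: reported, but buggy
}

def capability_usable(capabilities, version, name):
    """A capability is usable if reported AND not known-broken in this release."""
    return name in capabilities and (name, version) not in KNOWN_BROKEN
```

Without the version in the API, the consumer has no way to build the known-broken list at all.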


On Wed, Mar 18, 2015 at 1:38 PM, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:

> On 18 March 2015 at 03:33, Duncan Thomas <duncan.thomas at gmail.com> wrote:
>> On 17 March 2015 at 22:02, Davis, Amos (PaaS-Core) <
>> amos.steven.davis at hp.com> wrote:
>>> Ceph/Cinder:
>>> LVM or other?
>>> SCSI-backed?
>>> Any others?
>> I'm wondering why any of the above matter to an application.
> The Neutron requirements list is the same.  Everything you've listed is an
> implementation detail, with the exception of shared networks (which are a
> core feature, so it's actually rather unclear what you had in mind there).
> Implementation details should be hidden from cloud users: they don't care
> whether I'm using ovs/vlan, and they don't care if I change my cloud one
> day to run ovs/vxlan; they only care that I deliver a cloud that will run
> their application.  And since I care about not breaking applications when
> I make under-the-cover changes, I will be thinking carefully about that
> too.  I think you could develop a feature list, mind, just that you've not
> managed it here.
> For instance: why is an LVM disk different from one on a Netapp when
> you're a cloud application and you always attach a volume via a VM?  Well,
> it basically isn't, unless there are features (like for instance a minimum
> TPS guarantee) that are different between the drivers.  Cinder's even
> stranger here, since you can have multiple backend drivers simultaneously
> and a feature may not be present in all of them.
> Also, in Neutron, the current MTU and VLAN work is intended to expose some
> of those features to the app more than they were previously (e.g. 'can I
> use a large MTU on this network?'), but there are complexities in exposing
> this before the application runs.  The MTU size is not easy to discover in
> advance (it varies depending on what sort of network you're making), and
> the MTU you get for a specific network is very dependent on the network
> controller: a controller can choose not to expose it at all, expose it
> with upper bounds in place, or expose it and try so hard to implement what
> the user requests that it's not immediately obvious whether a request will
> succeed or fail.  You could say 'you can ask for large-MTU networks' -
> that is a straightforward feature - but some apps will fail to run if
> they ask and get declined.
> This is not to say there isn't useful work that could be done here, just
> that there may be some limitations on what is possible.
> --
> Ian.
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
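The 'ask and check' MTU situation Ian describes could be sketched like this. The dict merely stands in for a network record returned by the networking API; treating a missing MTU field as "unknown, assume a conservative default" is an assumption of the sketch, not documented behaviour:

```python
# Sketch of the 'ask and check' pattern for MTU: rather than assuming a
# requested MTU was honoured, inspect what the network actually reports.
# The dicts stand in for network records; a controller may choose not to
# expose 'mtu' at all, which this sketch treats as unknown.

def effective_mtu(network, fallback=1500):
    """Return the network's reported MTU, or a conservative fallback."""
    mtu = network.get("mtu")
    return mtu if mtu else fallback

def can_run_app(network, required_mtu):
    """Decide whether an app needing required_mtu can rely on this network."""
    return effective_mtu(network) >= required_mtu
```

An app that asks for a 9000-byte MTU and gets a network with no reported MTU has to either assume the conservative default or probe at runtime, which is exactly the 'ask and get declined' failure mode described above.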

Andrew Mann
DivvyCloud Inc.