[openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

Monty Taylor mordred at inaugust.com
Wed Sep 26 23:45:29 UTC 2018


On 09/26/2018 04:33 PM, Dean Troyer wrote:
> On Wed, Sep 26, 2018 at 3:44 PM, Matt Riedemann <mriedemos at gmail.com> wrote:
>> I started documenting the compute API gaps in OSC last release [1]. It's a
>> big gap and needs a lot of work, even for existing CLIs (the cold/live
>> migration CLIs in OSC are a mess, and you can't even boot from volume where
>> nova creates the volume for you). That's also why I put something into the
>> etherpad about the OSC core team even being able to handle an onslaught of
>> changes for a goal like this.
> 
> The OSC core team is very thin, yes; it seems as though companies
> don't like to spend money on client-facing things...I'll be in the
> hall following this thread should anyone want to talk...
> 
> The migration commands are a mess, mostly because I got them wrong to
> start with and we have only tried to patch it up. This is one area I
> think we need to wipe clean and fix properly.  Yay! Major version
> release!
> 
>> I thought the same, and we talked about this at the Austin summit, but OSC
>> is inconsistent about this (you can live migrate a server but you can't
>> evacuate it - there is no CLI for evacuation). It also came up at the Stein
>> PTG with Dean in the nova room giving us some direction. [2] I believe the
>> summary of that discussion was:
> 
>> a) to deal with the core team sprawl, we could move the compute stuff out of
>> python-openstackclient and into an osc-compute plugin (like the
>> osc-placement plugin for the placement service); then we could create a new
>> core team which would have python-openstackclient-core as a superset
> 
> This is not my first choice but is not terrible either...
> 
>> b) Dean suggested that we close the compute API gaps in the SDK first, but
>> that could take a long time as well...but it sounded like we could use the
>> SDK for things that existed in the SDK and use novaclient for things that
>> didn't yet exist in the SDK
> 
> Yup, this can be done in parallel.  The unit of decision for using the
> sdk vs an XXXclient lib is the individual API call.  If the client lib
> can use an SDK adapter/session it becomes even better.  I think the
> priority for what to address first should be guided by complete gaps
> in coverage and the need for microversion-driven changes.
> 
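
Right - and since the sdk proxies are themselves ksa Adapters, even a
complete gap in coverage (like the evacuate gap Matt mentioned) can be a
plain REST call through the same adapter until proper support lands. A
hand-wavy sketch of what I mean (cloud name is made up and the exact
evacuate body depends on microversion):

    import openstack

    # The Connection does the config/discovery work from clouds.yaml,
    # env vars and command line options.
    conn = openstack.connect(cloud='mycloud')

    # Where the sdk has coverage, use it.
    server = conn.compute.find_server('my-server')

    # Where it doesn't (no evacuate yet), the compute proxy is still a
    # keystoneauth1 Adapter, so we can go straight at the REST API with
    # the same session/auth/discovery.
    conn.compute.post(
        '/servers/{id}/action'.format(id=server.id),
        json={'evacuate': {}},
    )
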
>> This might be a candidate for one of these multi-release goals that the TC
>> started talking about at the Stein PTG. I could see something like this
>> being a goal for Stein:
>>
>> "Each project owns its own osc-<service_type> plugin for OSC CLIs"
>>
>> That deals with the core team and sprawl issue, especially with stevemar
>> being gone and dtroyer being distracted by shiny x-men bird related things.
>> That also seems relatively manageable for all projects to do in a single
>> release. Having a single-release goal of "close all gaps across all service
>> types" is going to be extremely tough for any older projects that had CLIs
>> before OSC was created (nova/cinder/glance/keystone). For newer projects,
>> like placement, it's not a problem because they never created any other CLI
>> outside of OSC.
> 
> I think the major difficulty here is simply how to migrate users from
> today's state to the future state in a reasonable manner.  If we could
> teach OSC how to handle the same command being defined in multiple
> plugins properly (hello entrypoints!), it could be much simpler, as we
> could start creating the new plugins and switch as the new command
> implementations become available rather than having a hard cutover.
> 
> Or maybe the definition of OSC v4 is as above and we just work at it
> until complete and cut over at the end.

I think that sounds pretty good, actually. We can also put the 'just get 
the sdk Connection' code in.
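
On the "same command in multiple plugins" / entrypoints front - for
anyone not familiar with how OSC plugins hook in, a hypothetical
osc-compute plugin would register itself roughly the way the existing
plugins do (package and class names here are made up, just to show the
shape):

    import setuptools

    setuptools.setup(
        name='osc-compute',
        packages=['osccompute'],
        entry_points={
            # tells osc-lib which module owns this API's versions
            'openstack.cli.extension': [
                'compute = osccompute.plugin',
            ],
            # maps "openstack server migrate" etc. to command classes
            'openstack.compute.v2': [
                'server_migrate = osccompute.v2.server:MigrateServer',
                'server_evacuate = osccompute.v2.server:EvacuateServer',
            ],
        },
    )

Teaching OSC to prefer the plugin's implementation when the in-repo
command and a plugin both register the same name is the part that would
need work.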

You mentioned earlier that python-*client libs that can take an existing 
ksa Adapter as a constructor parameter would make this easier - maybe 
let's put that down as a workitem for this? Because if we could do that, 
then we know we've got discovery and config working consistently across 
the board no matter whether a call is using sdk or python-*client 
primitives under the covers - so everything will respond to env vars and 
command line options and clouds.yaml consistently.

For that to work, a python-*client Client that took a 
keystoneauth1.adapter.Adapter would need to take it as gospel and not do 
further processing of config; otherwise the point is defeated. But it 
should be straightforward to do in most cases, yeah?
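
To make the workitem concrete, the shape I'm picturing is something like 
this (hand-wavy sketch - the adapter-style constructor doesn't exist in 
the python-*clients today, that's the thing we'd be adding; cloud name 
is made up):

    import openstack
    import novaclient.client

    # One place does config and discovery: env vars, command line
    # options, clouds.yaml.
    conn = openstack.connect(cloud='mycloud')

    # Calls with sdk coverage go through the sdk.
    servers = list(conn.compute.servers())

    # Calls that only exist in novaclient today at least reuse the same
    # session, so auth and config stay consistent:
    nova = novaclient.client.Client('2.1', session=conn.session)

    # The proposed workitem: let the client take the already-discovered
    # Adapter and treat it as gospel, no re-doing config:
    # nova = novaclient.client.Client('2.1', adapter=conn.compute)
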

>  Note that the current APIs
> that are in-repo (Compute, Identity, Image, Network, Object, Volume)
> are all implemented using the plugin structure; OSC v4 could start as
> the breaking out of those without command changes (except new
> migration commands!) and then the plugins can all re-write and update
> at their own tempo.  Dang, did I just deconstruct my project?

Main difference is making sure these new deconstructed plugin teams 
understand the client support lifecycle - which is that we don't drop 
support for old versions of services in OSC (or SDK). It's a shift from 
the support lifecycle and POV of python-*client, but it's important and 
we just need to all be on the same page.

> One thing I don't like about that is that we just replace N client
> libs with N (or more) plugins now, and the number of things a user
> must install doesn't go down.  I would like to hear from anyone who
> deals with installing OSC whether that is still a big deal, or should
> I let go of that worry?

I think it's a worry, although not AS big a worry. With independent 
plugin repos and deliverables, there's a pile for pip to install, but 
the plugins are tiny. There is also a new set of packages for the distro 
maintainers, but maybe still not terrible.

It's still better than python-*client, because python-*client pulls in 
so many transitive dependencies thanks to all of the different ways the 
various clients decided to solve the world.


