[openstack-dev] [TripleO] Tuskar CLI after architecture changes

Tzu-Mainn Chen tzumainn at redhat.com
Fri Dec 13 12:23:39 UTC 2013

> On 12.12.2013 17:10, Mark McLoughlin wrote:
> > On Wed, 2013-12-11 at 13:33 +0100, Jiří Stránský wrote:
> >> Hi all,
> >>
> >> TL;DR: I believe that "As an infrastructure administrator, Anna wants a
> >> CLI for managing the deployment providing the same fundamental features
> >> as UI." With the planned architecture changes (making tuskar-api thinner
> >> and getting rid of proxying to other services), there's not an obvious
> >> way to achieve that. We need to figure this out. I present a few options
> >> and look forward to feedback.
> > ..
> >
> >> 1) Make a thicker python-tuskarclient and put the business logic there.
> >> Make it consume other python-*clients. (This is an unusual approach
> >> though, I'm not aware of any python-*client that would consume and
> >> integrate other python-*clients.)
> >>
> >> 2) Make a thicker tuskar-api and put the business logic there. (This is
> >> the original approach with consuming other services from tuskar-api. The
> >> feedback on this approach was mostly negative though.)
> >
> > FWIW, I think these are the two most plausible options right now.
> >
> > My instinct is that tuskar could be a stateless service which merely
> > contains the business logic between the UI/CLI and the various OpenStack
> > services.
> >
> > That would be a first (i.e. an OpenStack service which doesn't have a
> > DB) and it is somewhat hard to justify. I'd be up for us pushing tuskar
> > as a purely client-side library used by the UI/CLI (i.e. 1) as far as
> > it can go until we hit actual cases where we need (2).
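To make option 1 a bit more concrete: a rough sketch of a thicker python-tuskarclient holding the business logic and composing other service clients behind one facade. The stub classes below are invented stand-ins for python-novaclient / python-glanceclient, not their real interfaces.

```python
# Hypothetical sketch of option 1: business logic in a client-side
# library that composes other python-*clients. The stub classes stand
# in for python-novaclient / python-glanceclient; real interfaces differ.

class NovaStub:
    """Stand-in for a nova client -- returns undercloud flavors."""
    def list_flavors(self):
        return ["baremetal"]

class GlanceStub:
    """Stand-in for a glance client -- looks up images by name."""
    def find_image(self, name):
        return {"id": "img-123", "name": name}

class TuskarClient:
    """Facade: UI and CLI would both call this, so they share the logic."""
    def __init__(self, nova, glance):
        self.nova = nova
        self.glance = glance

    def plan_role(self, role, image_name):
        # Pair a Role with a hardware Flavor and a deployment Image.
        flavor = self.nova.list_flavors()[0]
        image = self.glance.find_image(image_name)
        return {"role": role, "flavor": flavor, "image": image["id"]}

client = TuskarClient(NovaStub(), GlanceStub())
print(client.plan_role("Compute", "overcloud-compute"))
```

The point being that nothing here needs a server or a DB - it's just composition of existing clients, which is what makes it "unusual" but also what keeps tuskar-api thin.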
> For the features that we identified for Icehouse, we probably don't need
> to store any data. But going forward, that's not entirely certain. We had
> a chat and identified some data that is probably not suited for storing
> in any of the other services (at least in their current state):
> * Roles (like Compute, Controller, Object Storage, Block Storage) - for
> Icehouse we'll have these 4 roles hardcoded. Going forward, it's
> probable that we'll want to let admins define their own roles. (Is there
> an existing OpenStack concept that we could map Roles onto? Something
> similar to using Flavors as hardware profiles? I'm not aware of any.)
> * Links to Flavors to use with the roles - to define what hardware a
> particular Role can be deployed on. For Icehouse we assume homogeneous
> hardware.
> * Links to Images for use with the Role/Flavor pairs - we'll have
> hardcoded Image names for those hardcoded Roles in Icehouse. Going
> forward, having multiple undercloud Flavors associated with a Role,
> maybe each [Role-Flavor] pair should have its own image link defined -
> some hardware types (Flavors) might require special drivers in the image.
> * Overcloud heat template - for Icehouse it's quite possible it might be
> hardcoded as well and we could just use heat params to set it up,
> though I'm not 100% sure about that. Going forward, assuming dynamic
> Roles, we'll need to generate it.

One more (possible) item for this list: "# of nodes per role in a deployment" -
we'll need this if we want to stage the deployment, although that could
potentially be done client-side in the UI/CLI.
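For what it's worth, the Icehouse-era hardcoding described in the list above (plus the node-count item) could be as small as this. Every name and value below is illustrative, not taken from actual Tuskar code.

```python
# Sketch of the data Tuskar would hardcode for Icehouse and eventually
# store: Roles, their Flavor links, their Image links, and node counts.
# All values are made up for illustration.

ROLES = {
    "Compute":        {"flavor": "baremetal", "image": "overcloud-compute", "count": 1},
    "Controller":     {"flavor": "baremetal", "image": "overcloud-control", "count": 1},
    "Object Storage": {"flavor": "baremetal", "image": "overcloud-swift",   "count": 1},
    "Block Storage":  {"flavor": "baremetal", "image": "overcloud-cinder",  "count": 1},
}

def image_for(role, flavor):
    """Post-Icehouse, each (Role, Flavor) pair may carry its own image,
    since some hardware types need special drivers baked in."""
    entry = ROLES[role]
    if entry["flavor"] != flavor:
        raise KeyError("no image for (%s, %s)" % (role, flavor))
    return entry["image"]
```

Whether this dict lives behind tuskar-api (returning hardcoded values for now) or in a client-side library is exactly the question at hand.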

> ^ So all these things could probably be hardcoded for Icehouse, but not
> in the future. The guys suggested that since we'll be storing them
> eventually anyway, we might build these things into the Tuskar API right
> now (and return hardcoded values for now, allowing modification
> post-Icehouse). That seems ok to me. The other approach of initially
> doing all this hardcoding in a library seems ok to me too.
> I'm not 100% sure that we cannot store some of this info in existing
> APIs, but it didn't seem so to me (to us). We've talked briefly about
> using Swift for it, but looking back at the list I wrote, it doesn't
> seem like data that's well suited to Swift.
> >
> > One example worth thinking through though - clicking "deploy my
> > overcloud" will generate a Heat template and send it to the Heat API.
> >
> > The Heat template will be fairly closely tied to the overcloud images
> > (i.e. the actual image contents) we're deploying - e.g. the template
> > will have metadata which is specific to what's in the images.
> >
> > With the UI, you can see that working fine - the user is just using a UI
> > that was deployed with the undercloud.
> >
> > With the CLI, it is probably not running on undercloud machines. Perhaps
> > your undercloud was deployed a while ago and you've just installed the
> > latest TripleO client-side CLI from PyPI. With other OpenStack clients
> > we say that newer versions of the CLI should support all/most older
> > versions of the REST APIs.
> >
> > Having the template generation behind a (stateless) REST API could allow
> > us to define an API which expresses "deploy my overcloud" and not have
> > the client so tied to a specific undercloud version.
> Yeah, I see that advantage of making it an API; Dean pointed this out
> too. The combination of this and the fact that we'll need to store the
> Roles and related data eventually anyway might be the tipping point.
> Thanks! :)
> Jirka
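One more thought on the template-generation point: a stateless "deploy my overcloud" API could be as simple as expanding per-Role node counts into a Heat template server-side, which keeps the template/image coupling on the undercloud rather than in the CLI. A toy sketch - the payload shape and template structure are invented, not what Tuskar or Heat actually use:

```python
# Toy sketch of server-side "deploy my overcloud" template generation:
# expand per-Role node counts into a Heat-style template. A real
# generator would also emit the metadata tied to the image contents.
import json

def generate_overcloud_template(role_counts):
    """Expand e.g. {"Compute": 2, "Controller": 1} into a template dict."""
    resources = {}
    for role, count in role_counts.items():
        for i in range(count):
            resources["%s%d" % (role, i)] = {
                "type": "OS::Nova::Server",
                "properties": {"image": "overcloud-%s" % role.lower()},
            }
    return {"heat_template_version": "2013-05-23", "resources": resources}

# What a POST body from a version-agnostic CLI might carry:
body = json.loads(json.dumps({"Compute": 2, "Controller": 1}))
template = generate_overcloud_template(body)
print(sorted(template["resources"]))
```

Because the client only sends role counts, an older or newer CLI stays decoupled from the template details of this particular undercloud - which is Mark's point above.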
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
