[openstack-dev] [TripleO] Tuskar CLI after architecture changes

Ladislav Smola lsmola at redhat.com
Thu Dec 12 09:01:47 UTC 2013

On 12/11/2013 05:41 PM, Jay Dobies wrote:
>> I will take this a little sideways. I think we should be asking why
>> we have needed the tuskar-api. It has done some more complex logic
>> (e.g. building a heat template) or stored additional info not
>> supported by the services we use (like rack associations).
>> That is a perfectly fine use case for introducing tuskar-api.
>> Although now, when everything is shifting to the services themselves,
>> we don't need tuskar-api for that kind of stuff. Can you please list
>> what complex operations are left that should be done in tuskar? I
>> think discussing concrete stuff would be best.
> This is a good call to circle back on; I'm not sure of it either. 
> The wireframes I've seen so far largely revolve around node listing 
> and allocation, but I 100% know I'm oversimplifying it and missing 
> something bigger there.
>> Also, as I have been talking with rdopieralsky, there have been some
>> problems in the past with tuskar doing multiple steps in one, like
>> creating a rack and registering new nodes at the same time. As those
>> have been separate API calls and there is no transaction handling, we
>> should not do this kind of thing in the first place. If we have
>> actions that depend on each other, they should go from the UI one by
>> one. Otherwise we will be showing messages like: "The rack has not
>> been created, but 5 of 8 nodes have been added. We have tried to
>> delete those added nodes, but 2 of the 5 deletions have failed. Please
>> figure this out, then you can run this awesome action that calls
>> multiple dependent APIs without real rollback again." (or something
>> like that, depending on what gets created first)
> This is what I expected to see as the primary argument against it, the 
> lack of a good transactional model for calling the dependent APIs. And 
> it's certainly valid.
> But what you're describing is the exact same problem regardless of
> whether you go from the UI or from the Tuskar API. If we're going to
> do any sort of higher-level automation for the user that spans APIs,
> we're going to run into it. The question is whether the client(s)
> handle it or the API does. The alternative is to not have the awesome
> action in the
> first place, in which case we're not really giving the user as much 
> value as an application.

Well, not necessarily. You can have the whole deployment story in 2 steps.

1. Register the nodes by manually typing MAC addresses (there can be 
bulk adding), or via auto-discovery.

2. Create a stack via Heat. If the hardware was discovered correctly, 
Heat will just magically do this according to the template (see the 
sketches below).
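
A minimal sketch of step 1, assuming python-ironicclient is used for 
node registration. The driver name, IPMI details and MAC address are 
made-up placeholders, and the exact client calls may differ between 
versions:

    from ironicclient import client

    # Placeholder credentials; in tuskar-ui/cli these would come from keystone.
    ironic = client.get_client(1,
                               os_auth_token='auth-token',
                               ironic_url='http://undercloud:6385/')

    # Register the node itself (driver and IPMI details are placeholders).
    node = ironic.node.create(
        driver='pxe_ipmitool',
        driver_info={'ipmi_address': '10.0.0.5',
                     'ipmi_username': 'admin',
                     'ipmi_password': 'secret'})

    # Attach the manually typed MAC address to the node as a port.
    ironic.port.create(node_uuid=node.uuid, address='52:54:00:12:34:56')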

This is just enough magic for me. There doesn't have to be a button 'Get 
the hardware and build the Cloud for me please'. If there is one, heat 
stack-create can take a parameter like autodiscover_nodes=True. I think 
the automation should always be done inside one API call, so we can 
handle transactions correctly. Otherwise we are just building it wrong.
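
And a sketch of step 2, assuming python-heatclient. The template file, 
the node-count parameters and the autodiscover_nodes flag are 
hypothetical, only meant to show that the whole "awesome action" can 
live behind one API call that Heat owns:

    from heatclient.client import Client

    # Placeholder endpoint and token; again these would come from keystone.
    heat = Client('1',
                  endpoint='http://undercloud:8004/v1/TENANT_ID',
                  token='auth-token')

    # One call, one rollback story: Heat is responsible for the whole stack.
    heat.stacks.create(
        stack_name='overcloud',
        template=open('overcloud.yaml').read(),
        parameters={
            'compute_count': 5,          # hypothetical template parameter
            'control_count': 1,          # hypothetical template parameter
            'autodiscover_nodes': True,  # hypothetical flag from the text above
        })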

But that's just what I think...

Thanks for the feedback.

>> I am not saying we should not have tuskar-api. Just put there the
>> things that belong there, not proxy everything.
>>
>> btw. the real path of the diagram is
>> tuskar-ui <-> tuskarclient <-> tuskar-api <-> heatclient <-> heat-api
>> .....|ironic|etc.
>> My conclusion
>> ------------------
>> I say if it can be tuskar-ui <-> heatclient <-> heat-api, let's keep it
>> that way.
> I'm still fuzzy on what OpenStack means when it says *client. Is that 
> just a bindings library that invokes a remote API or does it also 
> contain the CLI bits?
>> If we realize we are putting some business logic into the UI that
>> also needs to be done in the CLI, or we need to store some additional
>> data that doesn't belong anywhere else, let's put it in Tuskar-API.
>> Kind Regards,
>> Ladislav
> Thanks for the feedback  :)
>> On 12/11/2013 03:32 PM, Jay Dobies wrote:
>>> Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
>>> new to the project. I only mention it again because it's relevant in
>>> that I missed any of the discussion on why proxying from tuskar API to
>>> other APIs is looked down upon. Jiri and I had been talking yesterday
>>> and he mentioned it to me when I started to ask these same sorts of
>>> questions.
>>> On 12/11/2013 07:33 AM, Jiří Stránský wrote:
>>>> Hi all,
>>>> TL;DR: I believe that "As an infrastructure administrator, Anna 
>>>> wants a
>>>> CLI for managing the deployment providing the same fundamental 
>>>> features
>>>> as UI." With the planned architecture changes (making tuskar-api 
>>>> thinner
>>>> and getting rid of proxying to other services), there's not an obvious
>>>> way to achieve that. We need to figure this out. I present a few 
>>>> options
>>>> and look forward to feedback.
>>>> Previously, we had planned the Tuskar architecture like this:
>>>> tuskar-ui <-> tuskarclient <-> tuskar-api <-> heat-api|ironic-api|etc.
>>> My biggest concern was that having each client call out to the
>>> individual APIs directly put a lot of knowledge into the clients that
>>> had to be replicated across clients. In the best case, that's simply
>>> knowing where to look for data. But I suspect it's bigger than that
>>> and there are workflows that will be implemented for tuskar needs. If
>>> the tuskar API can't call out to other APIs, that workflow
>>> implementation needs to be done at a higher layer, which means in each
>>> client.
>>> Something I'm going to talk about later in this e-mail but I'll
>>> mention here so that the diagrams sit side-by-side is the potential
>>> for a facade layer that hides away the multiple APIs. Lemme see if I
>>> can do this in ASCII:
>>> tuskar-ui -+               +-tuskar-api
>>>            |               |
>>>            +-client-facade-+-nova-api
>>>            |               |
>>> tuskar-cli-+               +-heat-api
>>> The facade layer runs client-side and contains the business logic that
>>> calls across APIs and adds in the tuskar magic. That keeps the tuskar
>>> API from calling into other APIs* but keeps all of the API call logic
>>> abstracted away from the UX pieces.
>>> * Again, I'm not 100% up to speed with the API discussion, so I'm
>>> going off the assumption that we want to avoid API to API calls. If
>>> that isn't as strict of a design principle as I'm understanding it to
>>> be, then the above picture probably looks kinda silly, so keep in mind
>>> the context I'm going from.
>>> For completeness, my gut reaction was expecting to see something like:
>>> tuskar-ui -+
>>>            |
>>>            +-tuskar-api-+-nova-api
>>>            |            |
>>> tuskar-cli-+            +-heat-api
>>> Where a tuskar client talked to the tuskar API to do tuskar things.
>>> Whatever was needed to do anything tuskar-y was hidden away behind the
>>> tuskar API.
>>>> This meant that the "integration logic" of how to use heat, ironic and
>>>> other services to manage an OpenStack deployment lay within
>>>> *tuskar-api*. This gave us an easy way towards having a CLI - just 
>>>> build
>>>> tuskarclient to wrap abilities of tuskar-api.
>>>> Nowadays we talk about using heat and ironic (and neutron? nova?
>>>> ceilometer?) apis directly from the UI, similarly as Dashboard does.
>>>> But our approach cannot be exactly the same as in Dashboard's case.
>>>> Dashboard is quite a thin wrapper on top of python-...clients, which
>>>> means there's a natural parity between what the Dashboard and the CLIs
>>>> can do.
>>> When you say python- clients, is there a distinction between the CLI
>>> and a bindings library that invokes the server-side APIs? In other
>>> words, the CLI is packaged as CLI+bindings and the UI as GUI+bindings?
>>>> We're not wrapping the APIs directly (if wrapping them directly 
>>>> would be
>>>> sufficient, we could just use Dashboard and not build Tuskar API at
>>>> all). We're building a separate UI because we need *additional 
>>>> logic* on
>>>> top of the APIs. E.g. instead of directly working with Heat templates
>>>> and Heat stacks to deploy overcloud, user will get to pick how many
>>>> control/compute/etc. nodes he wants to have, and we'll take care of 
>>>> Heat
>>>> things behind the scenes. This makes Tuskar UI significantly thicker
>>>> than Dashboard is, and the natural parity between CLI and UI vanishes.
>>>> By having this logic in UI, we're effectively preventing its use from
>>>> CLI. (If i were bold i'd also think about integrating Tuskar with 
>>>> other
>>>> software which would be prevented too if we keep the business logic in
>>>> UI, but i'm not absolutely positive about use cases here).
>>> I see your point about preventing its use from the CLI, but more
>>> disconcerting IMO is that it just doesn't belong in the UI. That sort
>>> of logic, the "Heat things behind the scenes", sounds like the
>>> jurisdiction of the API (if I'm reading into what that entails
>>> correctly).
>>>> Now this raises a question - how do we get CLI reasonably on par with
>>>> abilities of the UI? (Or am i wrong that Anna the infrastructure
>>>> administrator would want that?)
>>> To reiterate my point above, I see the idea of getting the CLI on par,
>>> but I also see it as striving for a cleaner design as well.
>>>> Here are some options i see:
>>>> 1) Make a thicker python-tuskarclient and put the business logic 
>>>> there.
>>>> Make it consume other python-*clients. (This is an unusual approach
>>>> though, i'm not aware of any python-*client that would consume and
>>>> integrate other python-*clients.)
>>> -1 in favor of #3 below (spoiler: I'm -1 to that too, I suppose this
>>> is a -2)
>>>> 2) Make a thicker tuskar-api and put the business logic there. 
>>>> (This is
>>>> the original approach with consuming other services from 
>>>> tuskar-api. The
>>>> feedback on this approach was mostly negative though.)
>>> A tentative +1. Tentative until I dig up the feedback on intra-API
>>> calls and see why it was negative, in which case I may buy into those
>>> arguments too.
>>>> 3) Keep tuskar-api and python-tuskarclient thin, make another library
>>>> sitting between Tuskar UI and all python-*clients. This new project
>>>> would contain the logic of using undercloud services to provide the
>>>> "tuskar experience"; it would expose python bindings for Tuskar UI and
>>>> contain a CLI. (Think of it like traditional python-*client but 
>>>> instead
>>>> of consuming a REST API, it would consume other python-*clients. I
>>>> wonder if this is overengineering. We might end up with too many
>>>> projects doing too few things? :) )
>>> This is the sort of thing I was describing with the facade image
>>> above. Rather than beefing up python-tuskarclient, I'd rather we have
>>> a specific logic layer that isn't the CLI nor is it the bindings, but
>>> is specifically for the purposes of coordination across multiple APIs.
>>> That said, I'm -1 to my own facade diagram. I think that should live
>>> service-side in the API.
>>>> 4) Keep python-tuskarclient thin, but build a separate CLI app that
>>>> would provide same integration features as Tuskar UI does. (This would
>>>> lead to code duplication. Depends on the actual amount of logic to
>>>> duplicate if this is bearable or not.)
>>> I don't know the level of logic duplication that would happen, but the
>>> design feels wrong from the start.
>>>> Which of the options you see as best? Did i miss some better 
>>>> option? Am
>>>> i just being crazy and trying to solve a non-issue? Please tell me :)
>>> I'm not saying you're not crazy. I'm saying you're not alone in being
>>> crazy if you are  :)
>>>> Please don't consider the time aspect of this, focus rather on what's
>>>> the right approach, where we want to get eventually. (We might want to
>>>> keep a thick Tuskar UI for Icehouse not to set the hell loose, there
>>>> will be enough refactoring already.)
>>>> Thanks
>>>> Jirka
