[openstack-dev] [TripleO][Tuskar] Editing Nodes

Jay Dobies jason.dobies at redhat.com
Mon Jan 13 16:18:12 UTC 2014


I'm pulling this particular discussion point out of the Wireframes 
thread so it doesn't get lost in the replies.

= Background =

It started with my first bullet point:

- When a role is edited, if it has existing nodes deployed with the old 
version, are they automatically/immediately updated? If not, how do we 
reflect that there's a difference between how the role is currently 
configured and the nodes that were previously created from it?

Replies:

"I would expect any Role change to be applied immediately. If there is 
some change where I want to keep older nodes how they are set up and 
apply new settings only to new added nodes, I would create new Role then."

"We will have to store image metadata in tuskar probably, that would map 
to glance, once the image is generated. I would say we need to store the 
list of the elements and probably the commit hashes (because elements 
can change). Also it should be versioned, as the images in glance will 
be also versioned.
We can't probably store it in the Glance, cause we will first store the 
metadata, then generate image. Right?

Then we could see whether an image was created from the metadata and 
whether that image was used in the Heat template. With versions we could 
also see what has changed.

But there was also an idea that there will be some generic image, 
containing all services, where we would just configure which services to 
start. In that case we would need to version that as well.
"

= New Comments =

My comments on this train of thought:

- I'm afraid of the idea of applying changes immediately, for the same 
reasons I'm worried about a few other things. Very little of what we do 
will actually finish executing immediately; most of it will be 
long-running operations. If I edit a few roles in a row, we're looking 
at a lot of outstanding operations executing against other OpenStack 
pieces (namely Heat).

Applying changes immediately also suffers from a sort of "Oh shit, 
that's not what I meant" problem when hitting save. There's no way for 
the user to review the larger picture before deciding to make it so.

- Image creation also falls into this category. It is not something that 
finishes immediately, so there's a period between when the resource 
category is saved and when the new image exists.

If the image is created immediately, what happens if the user tries to 
change the resource category counts while it's still being generated? 
That question applies both if we automatically update existing nodes 
and if we don't and the user is simply moving quickly around the UI.

What do we do with old images from previous configurations of the 
resource category? If we don't clean them up, their number can grow out 
of hand. If we automatically delete them when the new one is generated, 
what happens if there is a deployment in progress and the image is 
deleted while it runs?
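
One possible guard, sketched below: before reaping an old image, check 
whether any in-flight deployment still references it. The helper shape 
here is invented; only the Glance delete call is the standard 
python-glanceclient one.

    # Hypothetical guard: never delete an image that an in-progress
    # deployment still references; defer and retry later instead.
    def reap_old_image(glance_client, image_id, active_deployments):
        if any(d.image_id == image_id for d in active_deployments):
            return False  # still referenced; leave it alone for now
        glance_client.images.delete(image_id)
        return True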

We need some sort of task tracking that prevents overlapping operations 
from executing at the same time. Tuskar needs to know what's happening 
instead of simply having the UI fire off requests to other OpenStack 
components when the user presses a button.

To rehash an earlier argument, this is why I advocate for having the 
business logic in the API itself instead of in the UI. Even if it's just 
a queue to make sure operations don't execute concurrently (that's not 
enough on its own, IMO, but as an example), the server is where that 
sort of orchestration should take place, and it is what should 
understand the differences between the configured state in Tuskar and 
the actual deployed state.
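
As a rough illustration of that minimal queue (again, a sketch with 
invented names, and on its own not a full task-tracking solution), the 
server could funnel long-running operations through a single worker so 
they never overlap:

    import queue
    import threading

    class OperationQueue:
        """Run submitted operations one at a time, in submission order."""

        def __init__(self):
            self._queue = queue.Queue()
            worker = threading.Thread(target=self._run, daemon=True)
            worker.start()

        def submit(self, operation):
            # operation: a no-arg callable, e.g. a Heat stack-update request
            self._queue.put(operation)

        def _run(self):
            while True:
                operation = self._queue.get()
                try:
                    operation()  # long-running; blocks until it finishes
                finally:
                    self._queue.task_done()

Real task tracking would also need to expose the state of pending 
operations so the UI can tell the user what's in flight.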

I'm off topic a bit, though. Rather than talk about how we pull it off, 
I'd like to come to an agreement on what the actual policy should be. My 
concerns focus on the time it takes to create the image and get it into 
Glance, where it's available to actually be deployed. When do we bite 
that time off, and how do we let the user know whether or not it's ready 
yet?

- Editing a node is going to run us into versioning complications. So 
far, all we've entertained are ways to map a node back to the resource 
category it was created under. If the configuration of that category 
changes, we have no way of indicating that the node is out of sync.

We could store versioned resource categories in the Tuskar DB and have 
the version information also find its way to the nodes (note: the idea 
is to use the metadata field on a Heat resource to store the res-cat 
information, so including a version is possible). I'm less concerned 
with eventual reaping of old versions here, since it's just DB data, 
though we still hit the question of when to delete old images.
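
Assuming the version does ride along in the Heat resource metadata, 
flagging out-of-sync nodes becomes a simple comparison. A sketch, with 
invented key names:

    # Sketch: flag nodes whose stored res-cat version trails the current
    # one. The "resource_category"/"version" keys are illustrative only.
    def find_stale_nodes(node_metadata_list, current_versions):
        """Yield metadata for nodes built from an older category version."""
        for meta in node_metadata_list:
            category = meta.get("resource_category")
            if category and meta.get("version") != current_versions.get(category):
                yield meta

    # current_versions would come from the versioned rows in the Tuskar
    # DB, e.g. {"compute": 3, "controller": 1}.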

- Regarding the comment on a generic image with service configuration, 
the first thing that came to mind was the thread on creating images from 
packages [1]. It's not the exact same problem, but see Clint Byrum's 
comments in there about drift. My gut feeling is that having specific 
images for each res-cat will be easier to manage than trying to edit 
which services are running on a node.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023628.html


