[openstack-dev] [TripleO] our update story: can people live with it?

Clint Byrum clint at fewbar.com
Thu Jan 23 19:20:06 UTC 2014


Excerpts from Angus Thomas's message of 2014-01-23 04:57:20 -0800:
> On 22/01/14 20:54, Clint Byrum wrote:
> >> >
> >> >I don't understand the aversion to using existing, well-known tools to handle this?
> >> >
> > These tools are of course available to users and nobody is stopping them
> > from using them. We are optimizing for not needing them. They are there
> > and we're not going to explode if you use them. You just lose one aspect
> > of what we're aiming at. I believe that having image based deploys will
> > be well received as long as it is simple to understand.
> >
> >> >A hybrid model (blending 2 and 3, above) here I think would work best where
> >> >TripleO lays down a baseline image and the cloud operator would employ an well-known
> >> >and support configuration tool for any small diffs.
> >> >
> > These tools are popular because they control entropy and make it at
> > least more likely that what you tested ends up on the boxes.
> >
> > A read-only root partition is a much stronger control on entropy.
> >
> >> >The operator would then be empowered to make the call for any major upgrades that
> >> >would adversely impact the infrastructure (and ultimately the users/apps).  He/She
> >> >could say, this is a major release, let's deploy the image.
> >> >
> >> >Something logically like this, seems reasonable:
> >> >
> >> >     if (system_change > 10%) {
> >> >       use TripleO;
> >> >     } else {
> >> >       use Existing_Config_Management;
> >> >     }
> >> >
> > I think we can make deploying minor updates minimally invasive.
> >
> > We've kept it simple enough that this should be a fairly straightforward
> > optimization cycle. And the win there is that we also improve things
> > for the 11% change.
> >
> 
> Hi Clint,
> 
> For deploying minimally-invasive minor updates, the idea, if I've 
> understood it correctly, would be to deploy a tarball which replaced 
> selected files on the (usually read-only) root filesystem. That would 
> allow for selective restarting of only the services which are directly 
> affected. The alternative, pushing out a complete root filesystem image, 
> would necessitate the same amount of disruption in all cases.
> 

FTR, I have not said that chopping updates into things like tarballs of
only the difference between the old and new image is a bad idea. It's a
great idea for cutting transfer costs. It just doesn't solve the system
state problem, and neither do package updates.

How exactly would we determine which running processes have imported
/usr/lib/python2.7/wsgiref/__init__.py? Assuming we only have packages,
are we going to walk the owning package's reverse dependencies, find any
init scripts or systemd unit files they ship, and restart those services?
Do we just pray there are no transitive dependencies, or do we cast a
wide net and go a level deeper just to be sure? This can be made to work,
but it is complexity, and for what gain? If restarting any given service
is not disruptive, we don't need any of that complexity, and we can spend
our energy on other things.
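
Just to make the moving parts concrete, here is a rough first-level
version of that walk on a Debian-based image. The dpkg/apt commands are
real, but the guessed service names, the single level of recursion and
the blanket restarts are exactly the hand-waving I'm worried about:

    # Sketch only: restart the first level of reverse dependencies of
    # the package that owns a changed file. Assumes a Debian-ish image
    # (dpkg/apt) with init scripts in /etc/init.d or units in
    # /lib/systemd/system, and ignores transitive dependencies entirely.
    import subprocess

    def owning_package(path):
        # "dpkg -S <path>" prints "package: /path"
        out = subprocess.check_output(['dpkg', '-S', path])
        return out.split(':', 1)[0].strip()

    def reverse_deps(pkg):
        # note: this lists reverse dependencies from the package index,
        # including packages that are not even installed
        out = subprocess.check_output(['apt-cache', 'rdepends', pkg])
        # the first two lines are the package name and "Reverse Depends:"
        return [l.strip().lstrip('|') for l in out.splitlines()[2:] if l.strip()]

    def services_shipped_by(pkg):
        try:
            files = subprocess.check_output(['dpkg', '-L', pkg]).splitlines()
        except subprocess.CalledProcessError:
            return  # a reverse dependency that isn't actually installed
        for f in files:
            if f.startswith('/etc/init.d/'):
                yield f.rsplit('/', 1)[1]
            elif f.startswith('/lib/systemd/system/') and f.endswith('.service'):
                yield f.rsplit('/', 1)[1][:-len('.service')]

    changed = '/usr/lib/python2.7/wsgiref/__init__.py'
    pkg = owning_package(changed)
    for dep in [pkg] + reverse_deps(pkg):
        for svc in services_shipped_by(dep):
            subprocess.call(['service', svc, 'restart'])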

I don't think a tarball or packages make it any easier to know that we
have deployed the software and ensured there are no stale copies of code
or configuration still running. Any optimization we do will either need
to reconcile system state against the whole software collection, or will
have to carry the risk of stale software continuing to run.

I'm suggesting we focus our efforts on straightforward, easy to
understand things: HA and tests. Those will have a much greater benefit
for users than dissecting diffs and figuring out what to restart, IMO.

> There are a handful of costs with that approach which concern me: It 
> simplifies the deployment itself, but increases the complexity of 
> preparing the deployment. The administrator is going to have to identify 
> the services which need to be restarted, based on the particular set of 
> libraries which are touched in their partial update, and put together 
> the service restart scripts accordingly.
> 

The cloud is for automation. Said administrator should be writing tests
that ensure they're running the software and configuration that they
think they are. We should also be writing tests for them and shipping
said tests with each and every update, as that is something generally
consumable that will have a massive positive impact on trust in OpenStack.
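
As a strawman, such a test can start as small as checking that every
managed service has been (re)started since the image was laid down. The
stamp file, the service list and the pid file locations below are
hypothetical placeholders for whatever the deploy actually writes:

    # Strawman post-update check: every managed service must have been
    # (re)started since the image was laid down. Paths are placeholders.
    import os

    SERVICES = ['nova-compute', 'neutron-l3-agent']   # illustrative list

    def started_after(pid, stamp_time):
        # /proc/<pid> creation time is close enough to process start
        return os.stat('/proc/%d' % pid).st_ctime > stamp_time

    deployed_at = os.stat('/etc/image-build-stamp').st_mtime
    for svc in SERVICES:
        pid = int(open('/var/run/%s.pid' % svc).read().strip())
        assert started_after(pid, deployed_at), \
            '%s is still running pre-update code' % svc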

> We're also making the administrator responsible for managing the
> sequence in which incremental updates are deployed. Since each
> incremental update will re-write a particular set of files, any machine
> which gets updates 1, 2 and 3, misses update 4 through an oversight, and
> then has update 5 deployed, would end up in an odd state which would
> require additional tooling to detect. Package based updates, with
> versioning and dependency tracking on each package, mitigate that risk.
> 

Yes, a carefully constructed set of packages can know all of the
version combinations, and can assert their dependencies so that
post-install configuration runs at the right time. I'm well aware of
what Debian and RPM packaging are capable of. I'm also well aware that
it is extremely hard to get right, and that as the version matrix and
package complexity grow, so does the bug list.

I'm arguing for extreme simplicity, followed by optimization where it
is absolutely necessary. I see a lot of complexity in trying to optimize
system state management.

> Then there's the relationship between the state of running machines, 
> with applied partial updates, and the images which are put onto new 
> machines by Ironic. We would need to apply the partial updates to the 
> images which Ironic writes, or to have the tooling to ensure that newly 
> deployed machines immediately apply the set of applicable partial 
> updates, in sequence.
> 

So this seems to argue for exactly what I am arguing for: don't do
partial updates. Push out a new image and restart all the processes. If
that fails, it isn't a partial update, it is a failed update, and it
gets rolled back. In the future we'll improve Heat so we can do this
more gracefully, but even as-is we can orchestrate it by putting wait
conditions between each node in the graph and having our testing system
poke those wait conditions with success or failure.
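
To be concrete about the "poke": it is just an HTTP signal to the node's
wait condition handle. Something along these lines, assuming the
AWS-style handles Heat exposes today and treating the handle URL and the
test harness as given:

    # Sketch: gate each node's update by signalling its Heat wait
    # condition. Assumes AWS-style wait condition handles (pre-signed
    # URLs) and the JSON document format that cfn-signal sends.
    import json
    import requests

    def signal(handle_url, ok, reason):
        body = {'Status': 'SUCCESS' if ok else 'FAILURE',
                'Reason': reason,
                'UniqueId': 'post-update-tests',
                'Data': ''}
        requests.put(handle_url, data=json.dumps(body))

    # after updating node N and running its smoke tests:
    #   signal(handle_url_for_node_n, tests_passed, 'post-update smoke tests')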

> Solving these issues feels like it'll require quite a lot of additional 
> tooling.
> 

Indeed, Heat needs some work to make rolling updates work. And we need
to make TripleO deploy HA configurations so that we can update without
downtime. But we have to do both of those even if we don't have an
incremental update story. And once we do, getting pedantic about each
individual service on each individual server is going to seem a lot less
important.


