[Openstack] [nova-upgrades] Essex Upgrade Plans

John Garbutt John.Garbutt at citrix.com
Tue Dec 13 17:18:51 UTC 2011


Forwarding to the main list, since the sub-team lists are going away.

From: nova-upgrades-bounces+john.garbutt=eu.citrix.com at lists.launchpad.net [mailto:nova-upgrades-bounces+john.garbutt=eu.citrix.com at lists.launchpad.net] On Behalf Of John Garbutt
Sent: 07 December 2011 11:57
To: 'Hookway, Ray'; nova-upgrades at lists.launchpad.net
Subject: Re: [Nova-upgrades] Upgrades

Hi Ray,

No problem, thanks for the feedback. I think we mostly agree. Here are a few more ideas, and some updates on what we have got working at Citrix. Sorry this is such a long email (again).

From what I remember, "upgrades between each milestone release" were suggested as desirable at the design summit, but I certainly agree that getting upgrades between major releases is more important. Following on from that, I think hotfixing is also important (the ability to easily add security fixes, etc.).

I don't think we will be able to achieve a rolling upgrade from Diablo to Essex, but we can try. I think we should aim for changes to Essex so that Essex->F can be a rolling upgrade. We can possibly look at changing Diablo stable to enable Diablo->Essex, but I would class that as much lower priority. I know our PM suggested that Essex->G would be nice, but let's not run before we can walk.

I guess we need to scope which components we are considering. I would suggest we include all core projects, except swift (I think they have their own story already?). So the list becomes:

·         Keystone

·         Glance

·         Nova

·         Horizon

Aims for the upgrade:

·         No API downtime (or maybe a small period of read-only access / degraded service?)

·         No instance downtime (apart from live migration)
This might not be possible, but it should at least be the long-term aim.

What about this for a list of scenarios:

·         Hotfix:

o   No change to the Rabbit message format or SQL schema

o   Consider a security issue in any of the components

·         Hypervisor upgrade

o   we can probably assume some level of (live) migration

o   but we need nova to play nicely with the hypervisor during upgrade

·         RabbitMQ or MySQL upgrade (but no schema change)

o   Not sure, but maybe RabbitMQ's mirrored queues can help reduce downtime?

·         Full upgrade:

o   Get new functionality and bug fixes

o   New Rabbit message format (possibly)

o   MySQL schema changes

o   Replace all components

·         Rollback

o   Restore the system to its previous state if the upgrade fails

·         Config changes (?)

o   Not sure this is our problem?

o   but that would cover:

§  Networking mode changes

§  Scheduling config changes

o   For the moment, if nova does not support a given change, I would suggest just creating a new zone to achieve it

At Citrix we have done some work around the Hotfix scenario (but with a non-rolling upgrade). We were using our virtual appliance style packaging (for more details see: http://www.citrix.com/tv/#videos/3837). We now have a Jenkins test that is able to do the following:

·         Install an OpenStack cloud on XenServer using a few instances of our 'OpenStack VPX'

·         Check the system is working (upload and launch an instance)

·         Turn off the old servers

·         Create some new instances of the VPX

·         Migrate the following services onto the new VPXes:

o   Keystone (both), Glance (both), Nova-API, Nova-Scheduler, Nova-Network (in flat mode), Nova-Compute, Dashboard

The key things we have done to make this possible:

·         Use the --host flag to assign a GUID to a worker, so the worker is not tied to the hostname of the machine it is running on (see the sketch just after this list)

·         We don't use a load balancer in front of the API nodes yet, so we had to hack a few things:

o   glance images have the keystone hostname hardcoded in the DB, so we have to re-write that to point to the new location.

o   Keystone endpoints also need updating, for keystone, nova and glance

·         Whilst not yet tested automatically in our setup, Horizon can be upgraded using Citrix NetScaler's graceful shutdown mode, where new traffic is sent only to the new machines and users are given some time to complete their existing sessions on the old servers.
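
To make the --host trick concrete, here is a minimal sketch of the kind of helper we use to pin a worker's identity. The file path is illustrative, and the flag-file syntax assumes the Essex-era --flag=value style, so treat it as a sketch rather than a recipe:

import uuid

FLAGFILE = "/etc/nova/nova.conf"  # illustrative path to the flag file

def pin_worker_identity(flagfile=FLAGFILE):
    """Append a stable --host=<guid> flag if one is not already set."""
    with open(flagfile) as f:
        if any(line.startswith("--host=") for line in f):
            return  # already pinned to a GUID
    with open(flagfile, "a") as f:
        f.write("--host=%s\n" % uuid.uuid4())

With the service identity decoupled from the hostname, a replacement VPX can take over the queue and database rows of the worker it replaces.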

Other problems we can see coming, but have not yet fixed/resolved:

·         Graceful service shutdown (a minimal sketch follows this list)

o   We need to be able to tell all services to stop taking messages from the service queue

o   They then need to complete all their currently pending tasks (like waiting for a download from Glance to finish)

o   The work on improving the compute worker state machine's resilience may reduce the impact of not being able to shut down gracefully

o   Ideally, configuration changes should also trigger the service to pick up the new settings in a graceful way

·         Ability to disable Compute nodes

o   It would be nice to be able to disable a compute node

o   Then move the VM instances to another host (using the scheduler)

o   With network HA, you will need some way to nominate a new network node to start forwarding the traffic

o   Perform the required upgrade procedures (hypervisor upgrade, and OpenStack upgrade)

o   Re-enable the node afterwards, and let the scheduler start picking it again

·         Volume

o   EwanM has fixed a bug around the use of the --host flag (https://bugs.launchpad.net/nova/+bug/898290)

o   The iSCSI target needs to persist across upgrades

o   Hopefully our work on supporting XenServer's storage manager will reduce the problems in this area

·         Networking

o   We haven't looked at Quantum yet.

o   Hopefully the network node, when started on a new machine, will be able to regenerate its configuration from the database; we are not yet sure this works

o   We need a way to configure an alias IP for the network node's gateway address, so that it stays the same across the upgrade; you can then set up a new machine next to the old network node and move the traffic across with minimal downtime

o   CloudStack.org have some funky technology for making this hand-over nice and smooth

o   Network HA at least reduces the impact of any network node upgrade, but we need to look at whether this works for VLAN as well as Flat DHCP

·         Database issues

o   The obvious sticking points...

o   How can we have old and new services both accessing either the new or the old database schema?

o   Should all new nodes be able to talk to an old database (horrid code bloat) or should all old nodes be able to talk to the new database (restricts the possible changes to the DB schema)?

o   Is it safe to run the db migrate script while other services are still accessing the database? (See the additive-migration sketch after this list.)

·         Message Queue issues

o   Should the rabbit queue names include the message protocol version?

o   Should new nodes all be able to talk to old rabbit protocol, while there are still "old nodes" in the cloud?

o   Should the new protocol be backwards compatible, so that old nodes are able to talk to new nodes without noticing?

o   Should we move to versioning the rabbit protocol in the same way as the REST APIs so that they can interoperate more easily?

·         Deployment advice

o   We need to document the best way to configure the system to enable a graceful upgrade

o   Things like setting the --host flag to a GUID

o   Ensure all API nodes are behind a load balancer for public and private endpoints

o   Use the above load-balanced addresses to configure keystone, and all the flags like the glance URL for nova-compute

o   Plus all the other things I have forgotten or not yet stumbled across
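
On the graceful shutdown item above, the pattern we have in mind looks roughly like this. It is a generic Python sketch of the stop-consuming-then-drain idea, not nova's actual RPC code:

import queue
import signal
import threading

incoming = queue.Queue()   # stand-in for the service's Rabbit topic queue
draining = threading.Event()

def on_sigterm(signum, frame):
    # In nova terms: unsubscribe from the topic queue, so the broker
    # delivers new messages to the remaining (new) workers instead.
    draining.set()

signal.signal(signal.SIGTERM, on_sigterm)

def handle(msg):
    # Stand-in for pending work, e.g. finishing a download from Glance.
    print("completing:", msg)

def serve():
    while True:
        if draining.is_set() and incoming.empty():
            return  # nothing left in flight: safe to stop the service
        try:
            msg = incoming.get(timeout=1.0)
        except queue.Empty:
            continue
        handle(msg)

threading.Thread(target=serve).start()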
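
And on the database questions, one way to let old and new services share the database is to keep migrations purely additive while the rolling upgrade is in progress, e.g. a sqlalchemy-migrate script in the style nova already uses (the column here is hypothetical, and the script is meant to run under sqlalchemy-migrate's versioning framework):

from sqlalchemy import Column, MetaData, String, Table

def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table('instances', meta, autoload=True)
    # Nullable and unknown to old code, so old services keep working.
    instances.create_column(Column('hypothetical_new_field', String(255)))

def downgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table('instances', meta, autoload=True)
    instances.drop_column('hypothetical_new_field')

Renames and drops would then wait for a later "contract" migration, once no old services remain.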

Some of the above need policy decisions (Vish?) and tests to ensure they are enforced. Most of them need some code changes / "features" added to nova.

Thanks,
John

From: nova-upgrades-bounces+john.garbutt=eu.citrix.com at lists.launchpad.net [mailto:nova-upgrades-bounces+john.garbutt=eu.citrix.com at lists.launchpad.net] On Behalf Of Hookway, Ray
Sent: 06 December 2011 17:15
To: nova-upgrades at lists.launchpad.net
Subject: Re: [Nova-upgrades] Upgrades

Some comments on John's message are inline below. John, thanks for giving this a kick!

Here are some further thoughts on how to proceed.


1.       Agree on goals. (The goals below are a great start. Need to capture the overall objective of transparent upgrades.)

2.       Define some upgrade scenarios that are felt to be representative of the kinds of upgrades that we will want to make.

a.       API changes

b.      Changes to the network configuration

c.       Upgrades of system components like RabbitMQ, Database

d.      Hypervisor upgrades

e.      Scheduler changes (possibly including changes to the data on which scheduling decisions are based)

3.       Define a generic upgrade approach. This should include things like the sequencing of components. Require that future milestones support this approach when upgrading from the previously released version.

4.       Message versioning

a.       Include upgrades to RabbitMQ and supporting libraries

5.       Database versioning

a.       Include upgrades to MySQL and sqlalchemy

b.      Include changes to things like caching

6.       Network upgrade paths

7.       Coordination with other development groups

8.       Testing - how do we ensure that upgrades are in fact transparent?

I welcome comments on the above. I think it's important to pin down items 1 and 2 in order to evaluate work on the other items. I will draft a "goals" document that we can then discuss. Goals have been previously discussed in the blueprints mentioned below and at the design summit. My objective is to capture the goals in a single place. I'm also going to start working on item 2. Do we have any volunteers for the other items? If people have been working on, for example, database versioning, what progress have you made? What issues have you encountered?

-Ray Hookway (rjh)

From: nova-upgrades-bounces+ray.hookway=hp.com at lists.launchpad.net [mailto:nova-upgrades-bounces+ray.hookway=hp.com at lists.launchpad.net] On Behalf Of John Garbutt
Sent: Tuesday, November 08, 2011 1:20 PM
To: 'nova-upgrades at lists.launchpad.net'
Subject: [Nova-upgrades] Upgrades

Hi,

I just wanted to introduce the (rough) blueprint I drafted before the summit:
https://blueprints.launchpad.net/nova/+spec/upgrade-with-minimal-downtime

There is also a related blueprint from Matt Dietz:
https://blueprints.launchpad.net/nova/+spec/deployability-improvements

There are a few questions I am wondering about:

·         What are people working on right now? Let's talk, to avoid any duplicated effort!
Defining upgrade sequence and basic approach. My objectives align with yours below.

·         Are there any meetings scheduled yet?
Not yet - we need to get going.

·         Where are we aiming in the Essex timeframe?
Would like to have a transparent upgrade path from Diablo to Essex. Will take commitment from other workgroups (e.g., database).

·         What requirements/issues do we need to raise with other working groups? (Database clean-up, etc)
Need to review this on a component by component basis.

To start the discussion, here is an idea of the end goal I was imagining in the blueprint:

·         API endpoints (and dashboard) always available during upgrade

o   Using load balancer graceful shutdown

o   No API messages or tasks lost

·         Minimal loss of instance connectivity

o   Use an IP alias for transparent gateway changes (consider keepalived and conntrackd)

o   New style Network HA to reduce the number of affected VMs

·         Minimal loss of volume connectivity

·         Rolling Upgrades of OpenStack components

o   Different versions can co-exist within a single zone
This is the key to transparent upgrades

§   Glance API versions, Message Queue formats, Database schema changes, etc.

o   Upgrade each component and/or host in turn to avoid large amounts of downtime

o   Ability to migrate the database schema with minimal disruption

§  Ideally without having to stop connections to the database

o   Support side-by-side upgrades to try and minimize the downtime - is this different from "Different versions can co-exist"?

·         Transparent Hypervisor Upgrades

o   (where possible) live migrate instances to another hypervisor before upgrade

o   In the worst case, consider suspending instances across upgrade

·         Other upgrades

o   MySQL, RabbitMQ and other supporting systems
Need to determine what it takes to do this. Versioning of messages?

·         Support rolling back to the previous version

·         Support upgrades between each milestone release, and between each major release
Not clear to me that transparent upgrades are needed between milestone releases

·         Gating trunk on the ability to upgrade from the previous milestone and previous release
This is really important. Releases that can't be upgraded transparently are not deployable.

Right now there are quite a few things we need to support all this:

·         Graceful service shutdown

o   Service stops listening to Rabbit queues

o   Service then completes all current work

o   Only then does it stop

o   Prevents getting into an inconsistent state, and minimizes the risk of appearing to have lost a message from the queue

o   Alternatively, ensure all the services will recover correctly when they are started again on a new machine

·          Allow different versions of nova-compute, nova-scheduler, glance, swift to co-exist

o   Need to define how the database schema / database layer can evolve between versions

§  Should we upgrade the database before adding any new components?

§  Should we add all the new components before we upgrade the database?

o   Need to define the message queue message formats, and maybe version them (a straw-man sketch follows)
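
As a straw man for the versioning question, an envelope like the following would let old and new nodes interoperate in a controlled way. This anticipates a design decision rather than describing any current nova wire format, and all field names are illustrative:

import json

PROTOCOL_VERSION = "1.1"  # bump the minor version for compatible changes

def make_message(method, args):
    return json.dumps({
        "version": PROTOCOL_VERSION,
        "method": method,
        "args": args,
    })

def dispatch(raw):
    msg = json.loads(raw)
    major = msg.get("version", "1.0").split(".")[0]
    if major != PROTOCOL_VERSION.split(".")[0]:
        # A node speaking an incompatible major version is on the queue:
        # reject (or translate) rather than mis-parse the payload.
        raise ValueError("unsupported message version: %s" % msg["version"])
    handle(msg["method"], msg["args"])

def handle(method, args):
    print("dispatching", method, args)

# Example round-trip:
dispatch(make_message("run_instance", {"instance_id": 42}))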

An interim step could well be to support upgrades where different zones run different versions. During the upgrade you would then lose just a zone at a time, and not the whole cloud. Would like to be able to upgrade a single zone transparently.

Thanks,
John

