[Openstack-operators] [openstack-dev] [stable][all] Keeping Juno "alive" for longer.
matt at nycresistor.com
Mon Nov 9 21:27:33 UTC 2015
Another point I forgot is the vendor ecosystem.
Some vendors require major outages in OpenStack to upgrade their own
stacks, usually because they are doing something wrong. But it's their
way or the highway. It's not really the OpenStack community's fault, but it is a
very real issue for many deployers.
On Mon, Nov 9, 2015 at 4:25 PM, matt <matt at nycresistor.com> wrote:
> I am not sure this can be completely, or even satisfactorily, addressed
> by the OpenStack community. Part of the problem is the supporting
> environment. As OpenStack advances it relies on advancing underlying
> operating systems and development environments. An example is the shift from
> 12.04 to 14.04 mainline support in Ubuntu. Those sorts of changes require
> re-validating the patterns of the host operating systems for the entire
> environment (whatever that may be -- usually more than one class of chassis).
> This is true as well for something as simple as a pip requirements
> change, especially if, as many deployers do, they integrate with
> OpenStack. Or that awesomely undocumented set of changes to RabbitMQ
> security defaults. =P Why the hell do we still show examples of RabbitMQ
> being set up with the guest account? That should never ever happen. There
> should be BIG RED LETTERS saying don't use guest.
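> For what it's worth, the guest account can be locked down directly. A
> minimal sketch, assuming a stock RabbitMQ install with the classic config
> file at its default path (worth checking against your packaged version):

```
%% /etc/rabbitmq/rabbitmq.config
%% Restrict the built-in guest user to loopback connections only
%% (this has been the upstream default since RabbitMQ 3.3, but older
%% packages shipped with guest reachable remotely):
[{rabbit, [{loopback_users, [<<"guest">>]}]}].
```

> Better still, delete the account outright with `rabbitmqctl delete_user
> guest` and create a dedicated, password-protected user for the OpenStack
> services to share (or one per service).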
> If there were a way to avoid powering off VMs during an upgrade, it
> could greatly reduce the impact of an upgrade. Of course, this ties back
> to that first point: if a machine needs to be re-patterned, there's
> probably not much of a way around that. Also, I'm not sure you even should
> be condoning it from a security perspective. However, people will persist
> in the desire to avoid shutting down instances. A whole farm full of
> puppies. Each one standing between you and an upgrade.
> A tested and documented set of steps ("here is how to rolling-upgrade
> your cloud") from release to release would be great. The problem is, not
> everyone works linearly. As discussed earlier, the release time frame for
> many deployers does not match the release schedule of OpenStack. Many
> deployers jump from icehouse to kilo, skipping juno, or from juno to
> liberty, skipping kilo. They end up either having to do a double rolling
> upgrade or just migrate VMs into a new environment and re-pattern their
> former compute nodes as new members of the cluster.
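> The double rolling upgrade boils down to something like the pseudocode
> below. The release names come from the example above; the actual
> per-service steps (db syncs, config migrations, package sources) vary by
> distro and deployment tool, so treat this as a shape, not a recipe:

```
# Skipping a release still means upgrading through it, in order --
# you cannot jump juno -> liberty directly.
for release in [kilo, liberty]:           # juno -> kilo -> liberty
    upgrade_control_plane(release)        # api/db/conductor first
    for host in compute_nodes:
        live_migrate_instances_off(host)  # avoid powering off VMs
        upgrade_compute(host, release)
        return_host_to_pool(host)
```

> The alternative path in the paragraph above is the other branch: stand up
> a fresh liberty environment, migrate the VMs across once, and re-pattern
> the old compute nodes into the new cluster.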
> Just a few of the many hurdles I can think of off the top of my head.
> On Mon, Nov 9, 2015 at 4:05 PM, Sean Dague <sean at dague.net> wrote:
>> On 11/09/2015 03:49 PM, Maish Saidel-Keesing wrote:
>> > On 11/09/15 22:06, Tom Cameron wrote:
>> >>> I would not call that the extreme minority.
>> >>> I would say a good percentage of users are only just getting to Juno
>> >> The survey seems to indicate lots of people are on Havana, Icehouse
>> >> and Juno in production. I would love to see the survey ask _why_
>> >> people are on older versions because for many operators I suspect they
>> >> forked when they needed a feature or function that didn't yet exist,
>> >> and they're now stuck in a horrible parallel universe where upstream
>> >> has not only added the missing feature but has also massively improved
>> >> code quality. Meanwhile, they can't spend the person-hours on either
>> >> porting their work into the new Big Tent world we live in, or can't
>> >> bear the thought of having to throw away their hard-earned tech debt.
>> >> For more on this, see the "sunk cost" fallacy.
>> >> If it turns out people really are deploying new clouds with old
>> >> versions on purpose because of a perceived stability benefit, then
>> >> they aren't reading the release schedule pages closely enough to see
>> >> that what they're deploying today will soon be abandoned. In my
>> >> _personal_ opinion, which has nothing to do with
>> >> OpenStack or my employer, this is really poor operational due diligence.
>> > I don't think people are deploying old clouds or old versions.
>> > They are just stuck on older versions. Why? (As matt said in his reply,
>> > the upgrade process is hell!) And when your environment grows past a
>> > certain point, if you have to upgrade, say, 100 hosts, it can take a
>> > good couple of months to get the quirks fixed and sorted out, and then
>> > you have to start all over again, because the next release just came out.
>> Can you be more specific about "upgrade process is hell!"? We continue
>> to work on improvements in upgrade testing to block patches that will
>> make life hell for upgrading. Getting specifics on the bugs that
>> triggered during upgrades, from anyone doing them, would go a long way in
>> helping us figure out what's the next soft spot to tackle there.
>> But without that data coming back in specifics it's hard to close
>> whatever gap is here, real or perceived.
>> Sean Dague