[openstack-dev] [Fuel] Waiting for Haproxy backends

Vladimir Kuklin vkuklin at mirantis.com
Wed Nov 19 10:01:58 UTC 2014


Hi everyone,

Actually, we changed a lot in 5.1 HA, and there are some changes in 6.0 as
well. Right now we are using an asymmetric cluster and location
constraints to control resources. We started using XML diffs as the most
reliable and supported approach, since it does not depend on the pcs/crmsh
implementation. Regarding Corosync 2.x, we are looking forward to moving
to it, but it did not fit our 6.0 release timeframe. We will surely move
to the Pacemaker plugin and Corosync 2.x in the 6.1 release, as that
should fix a lot of our problems.
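
For readers unfamiliar with the technique: the XML-diff approach boils
down to dumping the live CIB, diffing it against the rendered
configuration with crm_diff, and applying only the delta with cibadmin. A
minimal Ruby sketch of that flow (the helper names are illustrative, not
the actual fuel-library provider code):

    require 'tempfile'
    require 'open3'

    # Write a string to a tempfile and return its path.
    def tmp_xml(name, xml)
      f = Tempfile.new(name)
      f.write(xml)
      f.close
      f.path
    end

    # Diff the desired CIB against the live one and apply only the delta.
    def patch_cib(desired_xml)
      current_xml, _st = Open3.capture2('cibadmin', '--query')
      patch, _st = Open3.capture2('crm_diff',
                                  '--original', tmp_xml('cib-old', current_xml),
                                  '--new',      tmp_xml('cib-new', desired_xml))
      return if patch.strip.empty?  # already in sync, nothing to apply
      system('cibadmin', '--patch', '--xml-file', tmp_xml('cib-patch', patch)) \
        or raise 'cibadmin failed to apply the CIB patch'
    end

Because the patch is computed against whatever is currently in the CIB,
this does not depend on how pcs or crmsh happen to serialize commands.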

On Wed, Nov 19, 2014 at 3:47 AM, Andrew Woodward <xarses at gmail.com> wrote:

> On Wed, Nov 12, 2014 at 4:10 AM, Aleksandr Didenko
> <adidenko at mirantis.com> wrote:
> > Hi,
> >
> > in order to make sure some critical HAProxy backends are running (like
> > mysql or keystone) before proceeding with deployment, we use execs like
> > [1] or [2].
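
In other words, those execs poll HAProxy's stats socket until the backend
reports UP. A rough Ruby rendering of the same check (the socket path and
the 'mysqld' backend name are assumptions, not taken from [1]):

    require 'socket'

    def haproxy_backend_up?(backend, socket = '/var/lib/haproxy/stats')
      csv = UNIXSocket.open(socket) do |s|
        s.write("show stat\n")
        s.read
      end
      # HAProxy's CSV stats start with pxname, svname, and carry 'status'
      # in the 18th column; the aggregate row for a backend has svname
      # BACKEND.
      csv.each_line.any? do |line|
        f = line.split(',')
        f[0] == backend && f[1] == 'BACKEND' && f[17] == 'UP'
      end
    end

    # The exec retries this until it succeeds or the retry budget runs out.
    60.times do
      break if haproxy_backend_up?('mysqld')
      sleep 5
    end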
>
> We used to do the API waiting in the Puppet resource providers that
> consume them [4], which tends to be very effective (unless the API never
> comes up), as it doesn't care what sits between the resource and the API
> it's attempting to use. This approach works for everything except mysql,
> because other services depend on it.
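
The pattern in [4], reduced to its core, is a rescue-and-retry loop
around the API call inside the provider; something like the following
sketch (names are illustrative, not the neutron provider's code
verbatim):

    # Keep calling the API, rescuing failures, until it answers or the
    # retry budget is exhausted.
    def self.wait_for_api(retries = 10, sleep_step = 6)
      retries.times do |n|
        begin
          return yield  # the API answered; hand the result back
        rescue => e
          Puppet.debug("API not ready (try #{n + 1}/#{retries}): #{e.message}")
          sleep sleep_step
        end
      end
      raise Puppet::Error, 'API never came up; failing the resource'
    end

    # Usage inside a provider (auth_keystone is hypothetical):
    #   tenants = wait_for_api { auth_keystone('tenant-list') }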
>
> >
> > We're currently working on minor improvements to those execs, but
> > there is
>
> Really, we should not use these execs: they are bad, and we need to do
> proper response validation like in [4] instead of just using the simple
> (and often wrong) HAProxy health check.
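
"Proper response validation" here means calling the endpoint and
checking what it actually returns, rather than trusting a bare TCP or
L7 probe. A sketch, assuming Keystone's version document is the thing
to validate (the URL is an assumption):

    require 'net/http'
    require 'json'
    require 'uri'

    # True only if Keystone answers with a well-formed version document,
    # not merely if something accepts connections on the port.
    def keystone_really_up?(url = 'http://127.0.0.1:5000/v2.0/')
      res = Net::HTTP.get_response(URI(url))
      return false unless res.code.to_i.between?(200, 300)
      JSON.parse(res.body).key?('version')
    rescue StandardError
      false
    end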
>
> > another approach: we can replace those execs with Puppet resource
> > providers and move all the iteration/loop/timeout logic there. Also, we
> > should fail
>
> Yes, this will become the most reliable method. I'm still partially on
> the fence about which provider we should modify. In the service
> provider, we could identify the check method (i.e. HTTP 200 from a
> specific URL) and wire it into the start check, and the provider would
> block until the check passes or the timeout is reached. (I'm still on
> the fence about whether to do this for HAProxy or for each of the
> OpenStack API services. I'm leaning towards each API service, since
> this will allow the check to work regardless of HAProxy, and should
> also let it work with refresh.)
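
One hypothetical shape for that interface, with made-up type and
parameter names, would be to hang the check URL and timeout off the
service resource itself:

    # Illustrative only; no such type exists in fuel-library today.
    Puppet::Type.newtype(:checked_service) do
      ensurable

      newparam(:name, :namevar => true)

      newparam(:check_url) do
        desc 'URL that must answer HTTP 200 before the service counts as up'
      end

      newparam(:check_timeout) do
        desc 'Seconds to wait for the check before failing the catalog run'
        defaultto 60
      end
    end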
>
> > catalog compilation/run if those resource providers are not able to
> > ensure the needed HAProxy backends are up and running, because there
> > is no point in proceeding with the deployment if, for example,
> > keystone is not running.
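
The enforcement side of that idea would be the provider polling the
check URL after starting the service and raising on timeout, which
aborts the catalog run instead of letting dependent resources proceed
against a dead backend. A sketch against the hypothetical type above:

    require 'net/http'
    require 'uri'

    def block_until_healthy(url, timeout)
      deadline = Time.now + timeout
      until Time.now > deadline
        begin
          return if Net::HTTP.get_response(URI(url)).code == '200'
        rescue StandardError
          # nothing listening yet; keep polling
        end
        sleep 2
      end
      # Raising here fails the catalog run, so dependent resources are
      # never applied against a backend that never came up.
      raise Puppet::Error, "#{url} did not return 200 within #{timeout}s"
    end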
> >
> > If no one objects, I can start implementing this for Fuel 6.1. We can
> > address it as part of the pacemaker improvements BP [3] or create a
> > new BP.
>
> Unless we are fixing the problem with Pacemaker, it should have its own
> spec, possibly without a blueprint.
>
> >
> > [1]
> >
> https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/osnailyfacter/manifests/cluster_ha.pp#L551-L572
> > [2]
> >
> https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/openstack/manifests/ha/mysqld.pp#L28-L33
> > [3] https://blueprints.launchpad.net/fuel/+spec/pacemaker-improvements
> >
> > Regards,
> > Aleksandr Didenko
> >
> >
>
> [4]
> https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/neutron/lib/puppet/provider/neutron.rb#L83-116
>
> --
> Andrew
> Mirantis
> Ceph community
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuklin at mirantis.com