<div dir="ltr">Hi everyone<div><br></div><div>Actually, we changed a lot in 5.1 HA and there are some changes in 6.0 also. Right now we are using assymetric cluster and use location constraints to control resources. We started using xml diffs as the most reliable and supported approach as it does not depend on pcs/crmsh implementation. Regarding corosync 2.x we are looking forward to moving to it but we did not fit our 6.0 release timeframe. We will surely move to pacemaker plugin and corosync 2.x in 6.1 release as it should fix lots of our problems.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Nov 19, 2014 at 3:47 AM, Andrew Woodward <span dir="ltr"><<a href="mailto:xarses@gmail.com" target="_blank">xarses@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Wed, Nov 12, 2014 at 4:10 AM, Aleksandr Didenko<br>
<<a href="mailto:adidenko@mirantis.com">adidenko@mirantis.com</a>> wrote:<br>
</span><span class="">> HI,<br>
><br>
> in order to make sure some critical Haproxy backends are running (like mysql<br>
> or keystone) before proceeding with deployment, we use execs like [1] or<br>
> [2].<br>
<br>
> We used to do the API waiting in the Puppet resource providers that
> consume the API [4], which tends to be very effective (unless the API
> never comes up at all), since it does not care what sits between the
> resource and the API it is trying to use. This works for everything
> except mysql, because other services depend on it.
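
For reference, the pattern in [4] essentially boils down to retrying the
call the provider makes until the service answers. A minimal sketch of the
idea in Ruby (the method name, the limits and the keystone example are
purely illustrative, this is not the actual neutron.rb code):

require 'timeout'

# Retry a block that talks to an API, tolerating the window in which the
# service has not come up yet.
def with_api_retries(max_attempts = 10, delay = 6)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue Errno::ECONNREFUSED, Timeout::Error => e
    if attempts >= max_attempts
      raise "API still unreachable after #{attempts} attempts: #{e}"
    end
    sleep delay
    retry
  end
end

# Example: wrap whatever call the provider would run anyway, for instance
# with_api_retries { keystone('token-get') }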
<span class=""><br>
>>
>> We're currently working on minor improvements to those execs, but there is
>
> Really, we should not use these execs. They are a bad approach; we need
> to be doing proper response validation, as in [4], instead of just using
> the simple (and often wrong) haproxy health check.
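
To make the difference concrete: haproxy reporting a backend as UP usually
only proves that something accepts connections, while what we actually want
to know is that the API behind it returns a sane response. A rough sketch of
that kind of validation (the accepted status codes and the keystone example
are assumptions on my side, not code from fuel-library):

require 'json'
require 'net/http'
require 'uri'

# Treat a backend as healthy only if it returns an expected HTTP status
# and a parseable body, rather than merely accepting TCP connections.
def api_really_up?(url)
  response = Net::HTTP.get_response(URI.parse(url))
  # Keystone's version endpoint, for instance, normally answers 200 or
  # 300 with a JSON document.
  return false unless %w(200 300).include?(response.code)
  JSON.parse(response.body)
  true
rescue StandardError
  false
end

# Example (the address is hypothetical):
# api_really_up?('http://192.168.0.2:5000/')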
<span class=""><br>
>> another approach: we can replace those execs with Puppet resource
>> providers and move all the iteration/loop/timeout logic there. Also, we
>> should fail
>
> Yes, this will be the most reliable method. I'm still partly on the
> fence about which provider we should modify. In the service provider, we
> could declare the check method (i.e. an HTTP 200 from a specific URL) and
> the start check, and the provider would block until the check passes or
> the timeout is reached. (I'm still undecided whether to do this in the
> haproxy provider or in each of the OpenStack API services; I'm leaning
> towards each API, since that lets the check work regardless of haproxy
> and should let it also work with refresh.)
<span class=""><br>
>> the catalog compilation/run if those resource providers cannot ensure
>> that the needed HAProxy backends are up and running, because there is
>> no point in proceeding with the deployment if, for example, keystone is
>> not running.
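
If we go the provider route, the blocking that Andrew describes could sit in
the provider roughly like this: poll the backend until the check passes or a
deadline expires, and raise so that the catalog run fails instead of carrying
on with a dead backend. This is only a sketch; the method name, the defaults
and the plain 200 check are made up for illustration:

require 'puppet'
require 'net/http'
require 'uri'

# Called from the provider (e.g. from exists? or the start logic): block
# until the backend answers, or abort the catalog run by raising.
def wait_for_backend(url, timeout = 300, step = 10)
  deadline = Time.now + timeout
  loop do
    begin
      # Reuse a real response check here (see the earlier sketch) rather
      # than only testing that the port is open.
      return true if Net::HTTP.get_response(URI.parse(url)).code == '200'
    rescue StandardError
      # Connection refused/reset while the backend is still starting up.
    end
    if Time.now > deadline
      raise Puppet::Error, "Backend #{url} is still down after #{timeout}s"
    end
    sleep step
  end
end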
>>
>> If no one objects, I can start implementing this for Fuel-6.1. We can
>> address it as part of the pacemaker improvements BP [3] or create a
>> new BP.
>
> Unless we are fixing the problem with pacemaker, this should have its
> own spec, possibly without a blueprint.
>
>>
>> [1] https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/osnailyfacter/manifests/cluster_ha.pp#L551-L572
>> [2] https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/openstack/manifests/ha/mysqld.pp#L28-L33
>> [3] https://blueprints.launchpad.net/fuel/+spec/pacemaker-improvements
>>
>> Regards,
>> Aleksandr Didenko
>
> [4] https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/neutron/lib/puppet/provider/neutron.rb#L83-116
<div class="HOEnZb"><div class="h5"><br>
--<br>
Andrew<br>
Mirantis<br>
Ceph community<br>

--
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype: kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia
www.mirantis.com
www.mirantis.ru
vkuklin@mirantis.com