<div dir="ltr">Matt<div><br></div><div>> <span style="font-size:12.8000001907349px">You are right, it probably won't. At that point you are using puppet to work around some fundamental issues in your OpenStack deployment.</span></div><div><br></div><div>Actually, as you know, with Fuel we are shipping our code to people who run their own infrastructure. We have no control over that infrastructure and no information about it. So we should expect the worst - that such issues will sometimes happen - and we need to take care of them in the best possible way, e.g. when someone trips over a cable and then plugs it back into the switch. And it seems that we can handle this right in the puppet code instead of making the user wait for a puppet rerun. <br></div><div><br></div><div>> <span style="font-size:12.8000001907349px">Another one that is a deployment architecture problem. We solved this by configuring the load balancer to direct keystone traffic to a single db node; now we solve it with Fernet tokens. If you have this <br>> specific issue above it's going to manifest in all kinds of strange ways and can even happen to control services like neutron/nova etc. as well. Which means even if we get puppet to pass with a bunch of <br>> retries, OpenStack is not healthy and the users will not be happy about it.</span></div><div><br></div><div>Again, what you describe is the case when the system was in some undesirable state, like reading from the wrong database, and then settled into a persistently working state. You solve that by making the load balancer aware of which backend to send requests to. But I am talking about sporadic failures which, from a statistical point of view, look negligible and should not be handled by the load balancer. Imagine a situation where the load balancer is fine with a backend, but that backend faces an intermittent operational issue like returning a garbled response or hitting a bug in its code. 
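</div><div><br></div><div>The kind of application-level retry I have in mind could look roughly like the following Ruby sketch. To be clear, TransientError, with_retries and the backoff numbers here are illustrative assumptions for the sake of the example, not actual Fuel or puppet-openstack code:</div><div><br></div>

```ruby
# Illustrative sketch only: retry a transient failure (e.g. a 502/503/504
# coming back from a proxy in front of keystone) inside the application,
# instead of failing the resource and forcing a full puppet rerun.

class TransientError < StandardError; end

# Run the given block, retrying up to `attempts` times on TransientError
# and doubling the sleep between tries (simple exponential backoff).
def with_retries(attempts: 5, delay: 1)
  tries = 0
  begin
    yield
  rescue TransientError
    tries += 1
    raise if tries >= attempts
    sleep(delay)
    delay *= 2
    retry
  end
end

# Hypothetical usage: a keystone call that fails twice, then succeeds.
calls = 0
token = with_retries(attempts: 5, delay: 0) do
  calls += 1
  raise TransientError, '503 Service Unavailable' if calls < 3
  'a-valid-token'
end
```

<div><br></div><div>In a real provider the initial delay would of course be non-zero (delay: 0 above just keeps the demo instant), and the rescue clause would match the actual exception the client raises rather than grepping its message text.</div><div><br></div><div>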
This is a sporadic failure which will not be caught by the load balancer, because making the load balancer sensitive enough to catch such issues would make it behave poorly. So I think the best option here is to handle such issues at the application level.<br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Oct 15, 2015 at 4:37 PM, Matt Fischer <span dir="ltr"><<a href="mailto:matt@mattfischer.com" target="_blank">matt@mattfischer.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Thu, Oct 15, 2015 at 4:10 AM, Vladimir Kuklin <span dir="ltr"><<a href="mailto:vkuklin@mirantis.com" target="_blank">vkuklin@mirantis.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Gilles,<div><br></div><div>5xx errors like 503 and 502/504 could always be intermittent operational issues. E.g. when you access your keystone backends through some proxy and there is a connectivity issue between the proxy and the backends which disappears in 10 seconds, you do not need to rerun puppet completely - just retry the request.</div><div><br></div><div>Regarding "<span style="font-size:12.8000001907349px">REST interfaces for all Openstack API" - this is very close to another topic that I raised ([0]) - using a native Ruby application and handling the exceptions. Otherwise, whenever an OpenStack client (generic or neutron/glance/etc. one) sends us a message like '[111] Connection refused', this message is very much determined by the framework that OpenStack is using for its clients within this release. It could be `requests` or any other framework which sends a different text message depending on its version. So it is very bothersome to write a bunch of 'if' clauses or gigantic regexps instead of handling a simple Ruby exception. 
So I agree with you here - we need to work with the API directly. And, by the way, if you also support switching to a native Ruby OpenStack API client, please feel free to support the movement towards it in the thread [0].</span><br></div><div><br></div><div>Matt and Gilles,</div><div><br></div><div>Regarding puppet-healthcheck - I do not think that puppet-healthcheck handles exactly what I am mentioning here - it does not run at exactly the same time as we run the request.</div><div><br></div><div>E.g. 10 seconds ago everything was OK, then we had a temporary connectivity issue, then everything is OK again in 10 seconds. Could you please describe how puppet-healthcheck can help us solve this problem? </div></div></blockquote><div><br></div><div><br></div></span><div>You are right, it probably won't. At that point you are using puppet to work around some fundamental issues in your OpenStack deployment.</div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>Or another example - there was an issue with keystone accessing the token database when you have several keystone instances running, or there was some desync between these instances, e.g. you fetched the token from keystone #1 and then verified it against keystone #2. Keystone #2 had some issues verifying it, not because the token was bad, but because keystone #2 itself had some issues. We would get a 401 error, and instead of rerunning puppet we would just need to handle this issue locally by retrying the request.</div><div><br></div><div>[0] <a href="http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/66423" target="_blank">http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/66423</a></div></div></blockquote><div><br></div></span><div>Another one that is a deployment architecture problem. 
We solved this by configuring the load balancer to direct keystone traffic to a single db node; now we solve it with Fernet tokens. If you have this specific issue it's going to manifest in all kinds of strange ways and can even happen to control services like neutron/nova etc. as well. Which means that even if we get puppet to pass with a bunch of retries, OpenStack is not healthy and the users will not be happy about it.</div><div><br></div><div>I don't want to give the impression that I am completely opposed to retries, but on the other hand, when my deployment is broken, I want to know quickly, not after 10 minutes of retries, so we need to balance that.</div></div></div></div>
<br>__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr">Yours Faithfully,<br>Vladimir Kuklin,<br>Fuel Library Tech Lead,<br>Mirantis, Inc.<br>+7 (495) 640-49-04<br>+7 (926) 702-39-68<br>Skype kuklinvv<br>35bk3, Vorontsovskaya Str.<br>Moscow, Russia,<br><a href="http://www.mirantis.ru/" target="_blank">www.mirantis.com</a><br><a href="http://www.mirantis.ru/" target="_blank">www.mirantis.ru</a><br><a href="mailto:vkuklin@mirantis.com" target="_blank">vkuklin@mirantis.com</a></div></div></div></div>
</div>