<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, May 11, 2021 at 5:29 PM Clark Boylan <<a href="mailto:cboylan@sapwetik.org">cboylan@sapwetik.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Tue, May 11, 2021, at 6:56 AM, Jeremy Stanley wrote:<br>
> > On 2021-05-11 09:47:45 +0200 (+0200), Sylvain Bauza wrote:
> > [...]
> > > Could we be discussing how we could try to find a workaround for
> > > this?
> > [...]
> >
>
> snip. What Fungi said is great. I just wanted to add a bit of detail below.
>
> > Upgrading the existing systems at this point is probably at least
> > the same amount of work, given all the moving parts, the need to
> > completely redo the current configuration management for it, the
> > recent license strangeness with Elasticsearch, the fact that
> > Logstash and Kibana are increasingly open-core fighting to keep
> > useful features exclusively for their paying users... the whole
> > stack needs to be reevaluated, and new components and architecture
> > considered.
>
> To add a bit more concrete info to this: the current config management for all of this is Puppet. We no longer have the ability to run Puppet in our infrastructure on systems beyond Ubuntu Xenial. What we have been doing for newer systems is using Ansible (often coupled with docker + docker-compose) to deploy services. This means that all of the config management needs to be redone.
>
> The next problem you'll face is that Elasticsearch itself needs to be upgraded. Historically when we have done this, it has required also upgrading Kibana and Logstash due to compatibility problems. When you upgrade Kibana you have to sort out all of the data access and authorization problems that Elasticsearch presents, because it doesn't provide authentication and authorization (we cannot allow arbitrary writes into the ES cluster, but Kibana assumes it can do this). With Logstash you end up rewriting all of your rules.
>
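As an aside for anyone unfamiliar with the write-access problem described above: a stock Elasticsearch cluster with no security layer in front of it accepts index and delete requests from any client that can reach it over HTTP. A purely illustrative sketch follows; the host, index name, and document are made up and are not our actual setup.

    # Illustration only (hypothetical host and index): a bare Elasticsearch
    # endpoint with no authentication lets any client write or delete data.
    import requests  # third-party: pip install requests

    ES_URL = "http://es.example.openstack.org:9200"  # hypothetical endpoint

    # Anyone who can reach the port can index arbitrary documents...
    resp = requests.post(
        f"{ES_URL}/logstash-2021.05.11/_doc",
        json={"message": "arbitrary data", "@timestamp": "2021-05-11T00:00:00Z"},
    )
    print(resp.status_code, resp.json())

    # ...and anyone can drop a whole index.
    resp = requests.delete(f"{ES_URL}/logstash-2021.05.11")
    print(resp.status_code, resp.json())

This is why some proxy or access-control layer has to sit between clients (including Kibana) and the cluster.
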
> Finally, I don't think we have enough room to do rolling replacements of Elasticsearch cluster members, as they are so large. We have to delete servers to add servers. Typically we would add a server, rotate it in, then delete the old one. In this case the idea is probably to spin up an entirely new cluster alongside the old one, check that it is functional, then shift the data streaming over to point at it. Unfortunately, that won't be possible.
>
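For context on what "add a server, rotate it in, then delete the old one" usually involves: you tell Elasticsearch to drain shards off the outgoing node, wait for the cluster to go green, and only then remove it. A rough sketch under assumed names (the URL and node name below are hypothetical), using the standard allocation-exclusion setting; the catch is that the remaining members need spare capacity to absorb the relocated shards, which is exactly what we lack.

    # Sketch of the usual node-drain step in a rolling replacement.
    import time
    import requests  # third-party: pip install requests

    ES_URL = "http://es.example.openstack.org:9200"  # hypothetical endpoint
    OLD_NODE = "elasticsearch02"                     # hypothetical node name

    # Ask the cluster to relocate all shards away from the node being retired.
    requests.put(
        f"{ES_URL}/_cluster/settings",
        json={"transient": {"cluster.routing.allocation.exclude._name": OLD_NODE}},
    )

    # Wait for relocation to finish and the cluster to report green again.
    while True:
        health = requests.get(f"{ES_URL}/_cluster/health").json()
        if health["status"] == "green" and health["relocating_shards"] == 0:
            break
        time.sleep(30)

    # Only now is it safe to delete the old server and add its replacement.
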
> > --
> > Jeremy Stanley
>

First, thanks to both Jeremy and fungi for explaining why we need to stop providing an ELK environment for our logs. I understand it better now, and honestly I can't see a way to fix it by myself.

I'm just sad that for the moment we can't find a way to keep this running, unless "someone" steps up to help us :-)

Just a note: I then also guess that http://status.openstack.org/elastic-recheck/ will stop working as well, right?

Operators, if you are reading this and want to make sure that our upstream CI keeps working when we hit gate issues, please help us! :-)