<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Update: you can take a look at the comments below about Mesos handling the
upgrade process, plus my response to it, which breaks the topic into a <span
class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z"> list of
OpenStack services (starting with a minimal list that is necessary
to build a working cloud) and their requirements in terms of data
storage.<br>
TL;DR: I still think in some cases we need to land containers on
the same slave after an upgrade - please provide your feedback.<br>
<br>
Thanks!<br>
<br>
-marek<br>
</span><br>
---------------------------------<br>
<div class="" id="magicdomid14"><span
    class="author-a-kz69zz71zz69zgoz84zz66zz80zz80z20cz84z5s">[G.O]
    As I remember, the spec for Mesos assumes that a self-configuring
    service will be used. There is another spec for oslo.config to
    support remote configuration storages such as ZooKeeper, etcd,
    and Consul. This approach should simplify the upgrade process, as
    most of the configuration will be done automatically by the
    service container itself. I think we need to discuss the ways an
    OpenStack service can be upgraded and provide baseline
    standards (requirements) for OpenStack services, so that OpenStack
    service code will support one or another way of upgrading. The
    Marathon framework should support at least two kinds of upgrades: (</span><span
    class="author-a-kz69zz71zz69zgoz84zz66zz80zz80z20cz84z5s url"><a class="moz-txt-link-freetext" href="https://mesosphere.github.io/marathon/docs/deployment-design-doc.html">https://mesosphere.github.io/marathon/docs/deployment-design-doc.html</a>)</span></div>
<div class="" id="magicdomid15"><span
class="author-a-kz69zz71zz69zgoz84zz66zz80zz80z20cz84z5s"> 1)
Rolling-Upgrade (Canary)</span></div>
<div class="" id="magicdomid16"><span
class="author-a-kz69zz71zz69zgoz84zz66zz80zz80z20cz84z5s"> 2)
Green-Blue (A/B) upgrades</span></div>
<div class="" id="magicdomid17"><span
    class="author-a-kz69zz71zz69zgoz84zz66zz80zz80z20cz84z5s"> As
    an operator, I should be able to select the specific version of a
    container that I want to roll out to the existing cloud, and I
    have to be able to roll back in case of an upgrade
    failure.</span></div>
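<div class="ace-line">[note] To make the two modes concrete, here is a minimal sketch of how they could map onto a Marathon app definition via its "upgradeStrategy" block (the app id, image, and numbers below are made up for illustration): a "minimumHealthCapacity" below 1.0 gives a rolling/canary upgrade (Marathon keeps that fraction of instances healthy while replacing the rest), while "minimumHealthCapacity": 1.0 together with "maximumOverCapacity": 1.0 approximates green-blue (start a full new set before killing the old one):</div>
<pre>
{
  "id": "/openstack/keystone",
  "instances": 4,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "kolla/keystone:latest" }
  },
  "upgradeStrategy": {
    "minimumHealthCapacity": 0.75,
    "maximumOverCapacity": 0.25
  }
}
</pre>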
<div class="ace-line" id="magicdomid65"><span
    class="author-a-kz69zz71zz69zgoz84zz66zz80zz80z20cz84z5s"> If
    we need to use volume-based configuration storage, then it should
    rely on Mesos volume management (</span><span
    class="author-a-kz69zz71zz69zgoz84zz66zz80zz80z20cz84z5s url"><a class="moz-txt-link-freetext" href="http://mesos.apache.org/documentation/latest/persistent-volume/">http://mesos.apache.org/documentation/latest/persistent-volume/</a>)</span><span
    class="author-a-kz69zz71zz69zgoz84zz66zz80zz80z20cz84z5s">, which,
    as far as I know, is not released yet. Mesos/Marathon should be
    able to place the upgraded container correctly, and we should not
    define any constraints for that in the request. We might still use
    constraints, but to provide a more flexible/complex
    rolling-upgrade process, such as upgrading only a specific number
    of instances at once.</span></div>
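<div class="ace-line">[note] For reference, pinning a single instance to a specific slave would look roughly like the sketch below in a Marathon "constraints" field (the app id and hostname are made up); the same field also supports operators like GROUP_BY for spreading instances, which is closer to the "specific number of instances at once" case:</div>
<pre>
{
  "id": "/openstack/nova-compute",
  "instances": 1,
  "constraints": [
    ["hostname", "CLUSTER", "compute-node-1.example.com"]
  ]
}
</pre>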
<div class="ace-line" id="magicdomid690"><br>
</div>
<div class="ace-line" id="magicdomid1568"><span
    class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
    [M.Z.] I agree in general about Mesos maintaining upgrades, but in
    some cases it's not about volumes but about underlying (host-based)
    data (nova, cinder).</span></div>
<div class="ace-line" id="magicdomid693"><span
class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
Let's break it down into a list of OpenStack services (starting
with a minimal list that is necessary to build a working cloud)
and their requirements in terms of data storage:</span></div>
<div class="ace-line" id="magicdomid1270"><span
    class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
    1. nova: we need to ensure the upgraded nova container is started
    on the same slave so it can reconnect to the hypervisor and see
    the VMs. Not a candidate for Mesos Volumes (MVs).</span></div>
<div class="ace-line" id="magicdomid1273"><span
    class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
    2. cinder: must be on the same host to see the block devices it
    uses for storage. Not a candidate for MVs.</span></div>
<div class="ace-line" id="magicdomid1277"><span
    class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
    3. mariadb: must be on the same host to see the directory it uses
    for data. In the future, when we use Galera, it may be a group of
    hosts rather than just one host. Sounds like a candidate for
    MVs.</span></div>
<div class="ace-line" id="magicdomid1038"><span
class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
4. keystone: does not care about the host</span></div>
<div class="ace-line" id="magicdomid1282"><span
class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
5. neutron: does not care about the host</span></div>
<div class="ace-line" id="magicdomid1922"><span
class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
Additional notes:</span></div>
<div class="ace-line" id="magicdomid1290"><span
    class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
    - currently for neutron we mount "/lib/modules" and do modprobe
    from inside the container to make neutron work - isn't this wrong
    by design? The slave host should be prepared beforehand and load
    the necessary modules.</span></div>
<div class="ace-line" id="magicdomid1578"><span
    class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
    - since config files are currently stored on slaves under a
    well-known path (provisioned by Ansible), we can assume each
    slave is identical in this regard, so we can easily move
    services that do not use data storage between slaves</span></div>
<div class="ace-line" id="magicdomid1842">
<ul class="list-indent1">
<li><span
        class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
        also, I don't see how we can avoid sticking to the same
        slave after an upgrade (whether rolling or green/blue) without
        MVs for mariadb, and not at all for nova & cinder with local
        disks.</span></li>
</ul>
</div>
<div class="ace-line" id="magicdomid1915">
<ul class="list-indent1">
<li><span
        class="author-a-oz84zz77zz84zyz78zgz90zz80znaghz83z5z83z">
        I can see the cinder container not being host-dependent as
        long as it's not using local disks for storage (but, for
        example, Ceph)</span></li>
</ul>
</div>
<br>
<div class="moz-cite-prefix">On 16.12.2015 18:16, Marek Zawadzki
wrote:<br>
</div>
<blockquote cite="mid:56719C76.30002@mirantis.com" type="cite">Hello
all,
<br>
<br>
I described use case and simple tool that I'd like to implement as
a first step in this topic - would you please
<br>
review it and provide me with feedback before I start coding?
<br>
Is the use-case realistic? Is this tool going to be useful given
the features I described? Any other comments?
<br>
<br>
<a class="moz-txt-link-freetext" href="https://etherpad.openstack.org/p/kolla_upgrading_containers">https://etherpad.openstack.org/p/kolla_upgrading_containers</a>
<br>
<br>
Thank you,
<br>
<br>
-marek zawadzki
<br>
<br>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Marek Zawadzki
Mirantis Kolla Team</pre>
</body>
</html>