<div dir="ltr"><div class="gmail_quote"><div dir="ltr">On Tue, Aug 14, 2018 at 9:20 AM Bogdan Dobrelya <<a href="mailto:bdobreli@redhat.com">bdobreli@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 8/13/18 9:47 PM, Giulio Fidente wrote:<br>
> Hello,<br>
> <br>
> I'd like to get some feedback regarding the remaining<br>
> work for the split controlplane spec implementation [1]<br>
> <br>
> Specifically, while for some services like nova-compute it is not<br>
> necessary to update the controlplane nodes after an edge cloud is<br>
> deployed, for other services, like cinder (or glance, probably<br>
> others), it is necessary to update the config files on the<br>
> controlplane when a new edge cloud is deployed.<br>
> <br>
> In fact for services like cinder or glance, which are hosted in the<br>
> controlplane, we need to pull data from the edge clouds (for example<br>
> the newly deployed ceph cluster keyrings and fsid) to configure cinder<br>
> (or glance) with a new backend.<br>
> <br>
> It looks like this demands some architectural changes to solve the<br>
> following two:<br>
> <br>
> - how do we trigger/drive updates of the controlplane nodes after the<br>
> edge cloud is deployed?<br>
<br>
Note, there is also a strict(?) requirement for local management <br>
capabilities for edge clouds temporarily disconnected from the central <br>
controlplane. That complicates triggering updates even more. We'll <br>
need at least a notification-and-triggering system to perform the required <br>
state synchronizations, including conflict resolution. If that's the <br>
case, architecture changes to the TripleO deployment framework are <br>
inevitable AFAICT.<br></blockquote><div><br></div><div>This is another interesting point. I don't mean to disregard it, but I want to<br>highlight the issue that Giulio and I (and others, I'm sure) are focused on.<br><br>As a cinder guy, I'll use cinder as an example. Cinder services running in the<br>control plane need to be aware of the storage "backends" deployed at the<br>edge. So if a split-stack deployment includes edge nodes running a ceph<br>cluster, the cinder services need to be updated to add the ceph cluster as a<br>new cinder backend. So not only is control plane data needed in order to<br>deploy an additional stack at the edge, but data from the edge deployment also<br>needs to be fed back into a subsequent stack update in the control plane. Otherwise,<br>cinder (and other storage services) will have no way of utilizing ceph<br>clusters at the edge.<br><br></div>
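<div>To make that feedback loop concrete, here is a rough sketch of what the control plane's<br>cinder.conf would eventually need to contain once an edge site brings up its own ceph cluster.<br>The backend name, paths and fsid placeholder below are made up for illustration, and I'm not<br>suggesting this is exactly what THT would render:<br><br># cinder.conf on the control plane (illustrative only)<br>[DEFAULT]<br># "ceph-edge1" is a hypothetical name for the new edge backend<br>enabled_backends = tripleo_ceph,ceph-edge1<br><br>[ceph-edge1]<br>volume_backend_name = ceph-edge1<br>volume_driver = cinder.volume.drivers.rbd.RBDDriver<br>rbd_pool = volumes<br>rbd_user = openstack<br># the items below are produced by the edge deployment (a ceph.conf with the edge mon hosts,<br># the client keyring, and the cluster fsid) and are exactly the data that has to be fed back<br># into a control plane stack update<br>rbd_ceph_conf = /etc/ceph/edge1.conf<br>rbd_secret_uuid = FSID_OF_THE_EDGE1_CLUSTER<br><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">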
> <br>
> - how do we scale the controlplane parameters to accommodate N<br>
> backends of the same type?<br></blockquote><div><br></div><div>Yes, this is also a big problem for me. Currently, TripleO can deploy cinder<br>with multiple heterogeneous backends (e.g. one each of ceph, NFS, Vendor X,<br>Vendor Y, etc.). However, the current THT (tripleo-heat-templates) do not let you deploy multiple<br>instances of the same backend (e.g. more than one ceph). If the goal is to<br>deploy multiple edge nodes consisting of Compute+Ceph, then TripleO will need<br>the ability to deploy multiple homogeneous cinder backends. This requirement<br>will likely apply to glance and manila as well.<br></div><div> </div>
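<div>Purely as an illustration of the scaling problem (the backend names are invented), a deployment<br>with two edge sites would need cinder.conf to end up with one near-identical rbd backend section<br>per edge ceph cluster, differing only in the per-cluster ceph.conf, keyring and fsid. That kind of<br>repetition is what the current THT parameters don't express:<br><br>[DEFAULT]<br>enabled_backends = tripleo_ceph,ceph-edge1,ceph-edge2<br><br>[ceph-edge1]<br>volume_driver = cinder.volume.drivers.rbd.RBDDriver<br>rbd_ceph_conf = /etc/ceph/edge1.conf<br>rbd_secret_uuid = FSID_OF_EDGE1<br><br>[ceph-edge2]<br>volume_driver = cinder.volume.drivers.rbd.RBDDriver<br>rbd_ceph_conf = /etc/ceph/edge2.conf<br>rbd_secret_uuid = FSID_OF_EDGE2<br><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">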
> A very rough approach to the latter could be to use jinja to scale up<br>
> the CephClient service so that we can have multiple copies of it in the<br>
> controlplane.<br>
> <br>
> Each instance of CephClient should provide the ceph config file and<br>
> keyring necessary for each cinder (or glance) backend.<br>
> <br>
> Also note that Ceph is only a particular example, but we'd need a similar<br>
> workflow for any backend type.<br>
> <br>
> The etherpad for the PTG session [2] touches on this, but it'd be good to<br>
> start this conversation before then.<br>
> <br>
> 1.<br>
> <a href="https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html" rel="noreferrer" target="_blank">https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html</a><br>
> <br>
> 2. <a href="https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane</a><br>
> <br>
<br>
<br>
-- <br>
Best regards,<br>
Bogdan Dobrelya,<br>
Irc #bogdando<br>
<br>
</blockquote></div></div>