[TripleO] [ptg] deployment flow

Alex Schultz aschultz at redhat.com
Tue Oct 27 13:30:47 UTC 2020


On Mon, Oct 26, 2020 at 6:32 PM Kanevsky, Arkady
<Arkady.Kanevsky at dell.com> wrote:
>
> Much appreciated, Alex.
> As long as they can all be called as "openstack overcloud xyz", that should work if it allows the user to control the order of the steps and ensure that the previous step completed successfully.

We really only want to implement `openstack overcloud foo` for core
functionality. The parts for ceph or a 3rd party can be any other
tooling, so long as an output is generated for the end user to consume.
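
As a rough illustration of that hand-off (the `ceph-deployer` command,
its options, and the file names below are hypothetical; `openstack
overcloud deploy` and its `-e` option are the real pieces):

  # hypothetical external tool writes a heat environment file
  # describing what it set up
  ceph-deployer --output ~/deployed-ceph.yaml

  # tripleo then consumes that output like any other environment file
  openstack overcloud deploy --templates \
    -e ~/deployed-ceph.yaml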

> We may lose a bit of possible parallelism.
> But as long as we can provision and deploy multiple nodes in parallel it should be OK.

We shouldn't lose much in terms of parallelism because things like
network configuration can still be applied to all systems at the same
time (if desired). What we get is the ability to troubleshoot these
things outside of a maintenance window or prior to attempting actual
deployments. For example, it makes life easier for folks who are
planning to scale out the cloud but don't want the deployment bits to
touch their existing cloud.  They can prepare and validate the nodes
prior to attempting to add them to the cloud and troubleshoot that
outside of the actual deployment.  We've actually seen that some folks
prefer to handle their host provisioning themselves, so this is just
taking that concept and doing a similar thing as part of tripleo.
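
A rough sketch of how that split looks in practice (file names are just
examples; `--stack` and `--output` are real options of `openstack
overcloud node provision`):

  # done ahead of time, outside any maintenance window
  openstack overcloud node provision \
    --stack overcloud \
    --output ~/overcloud-baremetal-deployed.yaml \
    ~/baremetal_deployment.yaml

  # done later, when it's time to actually touch the existing cloud
  openstack overcloud deploy --templates \
    -e ~/overcloud-baremetal-deployed.yaml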

>
> Thanks,
> Arkady
>
> -----Original Message-----
> From: Alex Schultz <aschultz at redhat.com>
> Sent: Monday, October 26, 2020 3:53 PM
> To: Kanevsky, Arkady
> Cc: openstack-discuss at lists.openstack.org; Roquesalane, Jean-Pierre; Rehault, Gael
> Subject: Re: [TripleO] [ptg] deployment flow
>
>
> On Mon, Oct 26, 2020 at 2:36 PM Kanevsky, Arkady <Arkady.Kanevsky at dell.com> wrote:
> >
> > After today’s PTG meeting I want to make sure I have the right understanding of the deployment flow.
> >
> > After deploying the undercloud, are these the steps that need to be done, partially outside TripleO (https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/provisioning/baremetal_provision.html):
> >
> > Ironic discovery to discover all nodes (networking for Ironic must be
> > in place for all nodes).
> > Metalsmith will provision baremetal machines -
> > https://docs.openstack.org/metalsmith/latest/
> > Metalsmith provisions the OS on the provisioned nodes.
>
> This is the current process as of Victoria, with nova now disabled on the undercloud by default.
>
> > Something(???) optionally provisions Ceph (based on appropriate roles).
>
> This was the possible solution proposed, based on the fact that we would also like to move the network items outside of the deployment process. This would enable us to do the full OS provisioning + network configuration prior to deploying the OpenStack services, which allows other processes to occur after provisioning but before OpenStack comes into the picture.
>
> > Something(???) optionally provisions other 3rd party services, like
> > switch provisioning (based on appropriate roles)
>
> This is not currently a thing; however, it could occur if we complete the work to separate the full network configuration of the hosts from the `openstack overcloud deploy` process.
>
> > Metalsmith optionally provisions Glance (may require Ceph or other 3rd
> > party storage).
> > Metalsmith optionally provisions Neutron (may require switch
> > provisioning).
>
> No, this should not be handled by Metalsmith.  Metalsmith would simply handle the OS provisioning and basic network configuration.
>
> > TripleO deploys overcloud
>
> TripleO deploys an overcloud with the configuration provided. By switching to metalsmith, the `openstack overcloud node provision` process produces an output file containing the required host information for the `openstack overcloud deploy` process.  This effectively turns on the pre-deployed method of deploying and removes the concept of server provisioning from the "deployment".  The thought is to continue pushing in this direction for ceph, which would be another step that handles the configuration and deployment of ceph and provides TripleO-friendly outputs that become an input into the `openstack overcloud deploy` command.  In the ceph world, this is similar to "external ceph".
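>
> For illustration, the input to `openstack overcloud node provision` is a
> small roles file along these lines (the role names and counts here are
> just an example):
>
>   - name: Controller
>     count: 3
>   - name: Compute
>     count: 2
>
> The generated output file is then passed to `openstack overcloud deploy`
> as an extra environment file with `-e`.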
>
> If there's another 3rd party who would like to do something similar, that may also be viable.  The current idea is to not keep adding more things to tripleo, but to leave it to the configuration and management of OpenStack (and accompanying services) only.
>
> It used to be:
> `openstack overcloud deploy` did all the things (provisioning, network configuration, service configuration, etc).
>
> We're moving more towards developing specific processes for each of the phases to improve usability, troubleshooting, and integration for external services, and to allow the more complex information about the cloud to be generated for the user rather than having them craft a bunch of custom heat files.
>
> Something like:
> `openstack overcloud node provision` (only does provisioning)
> `openstack overcloud network provision` (example name, only does networking configuration)
> `openstack overcloud deploy` (deploys software for OpenStack using outputs from the previous two steps)
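>
> Chained together it would look roughly like this (the network provision
> command name is only an example, as noted above, and the file names are
> illustrative):
>
>   openstack overcloud node provision \
>     --output ~/baremetal-deployed.yaml ~/baremetal_deployment.yaml
>
>   openstack overcloud network provision \
>     --output ~/networks-deployed.yaml ~/network_data.yaml
>
>   openstack overcloud deploy --templates \
>     -e ~/baremetal-deployed.yaml \
>     -e ~/networks-deployed.yaml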
>
> By splitting these apart into specific steps, this would allow for an optional `ceph-deployer` to be executed prior to `openstack overcloud deploy`, meaning it can be completely managed by a different tool and, as far as tripleo is concerned, we just get the relevant configuration information as inputs into the deployment process.
> We're attempting to decouple the very tight integration at these stages, which has been very hard for 3rd parties, or even our own services, to integrate with.
>
> Hope that helps.
>
> Thanks,
> -Alex
>
> >
> >
> >
> > Is this the correct flow?
> >
> > Do we still need the “openstack overcloud node provision” call, or does metalsmith already handle it? Or does this call use metalsmith under the covers?
> >
> >
> >
> > Thanks,
> > Arkady
>



