[tripleo] Scale up/down Ansible tasks

Oliver Walsh owalsh at redhat.com
Thu Apr 11 09:17:01 UTC 2019


On Thu, 11 Apr 2019 at 08:23, Martin Schuppert <mschuppert at redhat.com>
wrote:

>
>
> On Wed, Apr 10, 2019 at 11:58 PM Emilien Macchi <emilien at redhat.com>
> wrote:
>
>> Hi folks,
>>
>> Today I spent a bit of time on:
>> https://blueprints.launchpad.net/tripleo/+spec/scale-down-tasks
>>
>> Which is basically adding the capability of running Ansible tasks before
>> a node is removed during a scale-down, or after a scale-up.
>> I'm focusing on the scale-down right now, as I know it's something people
>> have been waiting for (e.g. RHSM unsubscribe, Ceph OSD tear down, Nova
>> Compute, etc).
>>
>> I need input from folks now on what kind of tasks would be needed; I
>> will test them and make sure the interface we provide is enough. John,
>> Olie, and Martin in copy have maybe some ideas, please let me know some
>> examples of Ansible tasks that you folks want to run before a node is
>> deleted in Ironic.
>>
>
> For nova/neutron it would be to disable the service/agent:
>
> (overcloud) $ openstack compute service list
> (overcloud) $ openstack compute service set [hostname] nova-compute --disable
>
> (overcloud) $ openstack network agent list
> (overcloud) $ openstack network agent set --disable [openvswitch-agent-id]
>
> After the service is stopped, or the host is deleted:
> (overcloud) $ openstack compute service delete [service-id]
> (overcloud) $ openstack network agent delete [openvswitch-agent-id]
>
> Regards,
> Martin
>
> [1]
> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/14/html-single/director_installation_and_usage/index#removing-compute-nodes
>
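For illustration, the disable/delete commands above might be expressed as Ansible tasks along these lines. This is only a sketch of one possible shape, not the interface the blueprint defines: the `scale_down_tasks` key, the `host_to_remove`, `ovs_agent_id`, and `service_id` variables, and the use of `OS_CLOUD` are all assumptions.

```yaml
# Hypothetical sketch of scale-down tasks; every name here is
# illustrative and not part of the proposed interface.
scale_down_tasks:
  - name: Disable the nova-compute service on the outgoing host
    command: openstack compute service set --disable {{ host_to_remove }} nova-compute
    environment:
      OS_CLOUD: overcloud

  - name: Disable the neutron openvswitch agent on the outgoing host
    command: openstack network agent set --disable {{ ovs_agent_id }}
    environment:
      OS_CLOUD: overcloud

  - name: Remove the nova-compute service record once the node is gone
    command: openstack compute service delete {{ service_id }}
    environment:
      OS_CLOUD: overcloud

  - name: Remove the openvswitch agent record once the node is gone
    command: openstack network agent delete {{ ovs_agent_id }}
    environment:
      OS_CLOUD: overcloud
```

The disable tasks would run before the node is deleted; the delete tasks only make sense after the services have stopped reporting, per Martin's ordering above.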

Might be worth confirming there are no instances running on the node too.
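That check could be sketched as a pre-flight task that fails the scale-down while instances remain on the host; again, the variable names and `OS_CLOUD` usage are assumptions for illustration:

```yaml
# Hypothetical pre-check; fails early if the compute host is not empty.
- name: List instances still running on the outgoing host
  command: >
    openstack server list --all-projects
    --host {{ host_to_remove }} -f value -c ID
  environment:
    OS_CLOUD: overcloud
  register: remaining_instances
  changed_when: false

- name: Abort scale-down if instances remain
  fail:
    msg: "{{ host_to_remove }} still hosts instances; migrate or delete them first."
  when: remaining_instances.stdout | length > 0
```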

Cheers,
Ollie



>>
>> Prototype:
>> https://review.openstack.org/#/q/topic:bp/scale-down-tasks+(status:open+OR+status:merged)
>>
>> Thanks a lot,
>> --
>> Emilien Macchi
>>
>