[openstack-dev] [Fuel] Wipe of the nodes' disks

Artur Svechnikov asvechnikov at mirantis.com
Fri Dec 25 07:59:26 UTC 2015


> When do we use ssh_erase_nodes?

It's used in the stop_deployment provisioning stage [0] and for the control reboot [1].

> Is it a fallback mechanism in case the mcollective action fails?

Yes, it's a kind of fallback mechanism, but it is always used [2].

> That might have been a side effect of Cobbler, and we should test
> whether it's still an issue with IBP.

As I can see from the code, the partition table is always wiped before
provisioning [3].

[0] https://github.com/openstack/fuel-astute/blob/master/lib/astute/provision.rb#L387-L396
[1] https://github.com/openstack/fuel-astute/blob/master/lib/astute/provision.rb#L417-L425
[2] https://github.com/openstack/fuel-astute/blob/master/lib/astute/provision.rb#L202-L208
[3] https://github.com/openstack/fuel-agent/blob/master/fuel_agent/manager.py#L194-L197
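
For illustration only, here is a minimal sketch of the kind of metadata
wipe that [3] refers to. This is not the actual fuel-agent code; the
wipefs/dd calls and the device names are assumptions made just to show
the idea:

    # Illustrative sketch only -- not fuel-agent's implementation.
    # Assumes the target block devices are known and are not mounted.
    import subprocess

    def wipe_partition_metadata(devices):
        """Clear partition-table and filesystem signatures so that data
        from a previous installation cannot interfere with the next
        provisioning run."""
        for dev in devices:
            # 'wipefs -a' erases filesystem, RAID and LVM signatures.
            subprocess.check_call(['wipefs', '-a', dev])
            # Zeroing the first MiB additionally destroys the MBR/GPT header.
            subprocess.check_call(
                ['dd', 'if=/dev/zero', 'of=%s' % dev, 'bs=1M', 'count=1'])

    # Hypothetical usage with made-up device names:
    wipe_partition_metadata(['/dev/sda', '/dev/sdb'])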

Best regards,
Svechnikov Artur

On Thu, Dec 24, 2015 at 5:27 PM, Alex Schultz <aschultz at mirantis.com> wrote:

> On Thu, Dec 24, 2015 at 1:29 AM, Artur Svechnikov
> <asvechnikov at mirantis.com> wrote:
> > Hi,
> > We have faced an issue where nodes' disks are wiped after stop
> > deployment. It happens because of the node-removal logic (this is old
> > logic and, as I understand it, no longer relevant). That logic contains
> > a step which calls erase_node [0], and there is also another method
> > that wipes the disks [1]. AFAIK this was needed for smooth Cobbler
> > provisioning and to ensure that nodes would not boot from disk when
> > they shouldn't. Instead of Cobbler we now use IBP from fuel-agent,
> > where the current partition table is wiped before the provisioning
> > stage, and wiping the disks just to make sure nodes will not boot from
> > disk doesn't seem like a good solution. I want to propose not wiping
> > the disks and simply unsetting the bootable flag on the node disks.
> >
> > Please share your thoughts. Perhaps some other components rely on the
> > fact that disks are wiped after node removal or stop deployment. If
> > so, please tell us about it.
> >
> > [0] https://github.com/openstack/fuel-astute/blob/master/lib/astute/nodes_remover.rb#L132-L137
> > [1] https://github.com/openstack/fuel-astute/blob/master/lib/astute/ssh_actions/ssh_erase_nodes.rb
> >
>
> I thought the erase_node [0] mcollective action was the process that
> cleared a node's disks after its removal from an environment. When do
> we use ssh_erase_nodes? Is it a fallback mechanism in case the
> mcollective action fails? My understanding of the history is that the
> partitions and data needed to be wiped so that LVM groups and other
> partition information would not interfere with the installation
> process the next time the node is provisioned. That might have been a
> side effect of Cobbler, and we should test whether it's still an issue
> with IBP.
>
>
> Thanks,
> -Alex
>
> [0] https://github.com/openstack/fuel-astute/blob/master/mcagents/erase_node.rb
>
> > Best regards,
> > Svechnikov Artur
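
For reference, a rough sketch of the alternative proposed in the quoted
message above -- clearing only the bootable flag instead of wiping the
disks. The parted invocation, device names and partition numbers are
illustrative assumptions, not actual Fuel code:

    # Illustrative sketch only -- not part of Fuel.
    # Clears the boot flag so the node falls back to network boot
    # without destroying any data on the disks.
    import subprocess

    def unset_bootable_flag(devices, partition=1):
        for dev in devices:
            # 'parted -s <dev> set <num> boot off' clears the bootable flag
            # on an MBR partition; GPT disks would need different handling.
            subprocess.check_call(
                ['parted', '-s', dev, 'set', str(partition), 'boot', 'off'])

    # Hypothetical usage with a made-up device name:
    unset_bootable_flag(['/dev/sda'])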