I guess that the original idea behind the wipe was security: a
decommissioned node should not retain any information about the cloud,
including configuration files and the like.

--
Best regards,
Oleg Gelbukh

On Thu, Dec 24, 2015 at 11:29 AM, Artur Svechnikov <asvechnikov at
mirantis.com> wrote:

> Hi,
> We have run into an issue where nodes' disks are wiped after a stopped
> deployment. It happens because of the node-removal logic (this is old
> logic which, as I understand it, is no longer relevant). That logic
> contains a step which calls erase_node [0], and there is another method
> that wipes the disks [1]. AFAIK this was needed for smooth Cobbler
> provisioning and to ensure that nodes would not boot from disk when
> they shouldn't. Instead of Cobbler we now use IBP from fuel-agent,
> where the current partition table is wiped before the provisioning
> stage anyway, so wiping the disks just to guarantee that nodes won't
> boot from them does not seem like a good solution. I want to propose
> not wiping the disks and simply unsetting the bootable flag on the
> node's disks.
>
> Please share your thoughts. Perhaps some other components rely on the
> fact that disks are wiped after node removal or stopped deployment.
> If so, please tell us about it.
>
> [0] https://github.com/openstack/fuel-astute/blob/master/lib/astute/nodes_remover.rb#L132-L137
> [1] https://github.com/openstack/fuel-astute/blob/master/lib/astute/ssh_actions/ssh_erase_nodes.rb
>
> Best regards,
> Svechnikov Artur
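For reference, "unset the bootable flag instead of wiping" could look
roughly like the minimal sketch below. This is an illustration only,
not code from fuel-agent or fuel-astute: the device list and the
unset_bootable_flag helper are made up for the example, and it assumes
GNU parted is available on the node.

    # Hypothetical sketch: clear the boot flag on every partition of
    # the given disks instead of wiping them. Assumes GNU parted;
    # device names and this helper are illustrative only.
    import subprocess

    def unset_bootable_flag(device):
        # 'parted -sm <dev> print' emits machine-readable output: one
        # line per partition, ':'-separated fields, flags in the last
        # field, lines terminated by ';'.
        out = subprocess.check_output(
            ['parted', '-sm', device, 'print']).decode()
        for line in out.splitlines():
            fields = line.rstrip(';').split(':')
            # Partition lines start with the partition number.
            if fields[0].isdigit() and 'boot' in fields[-1]:
                subprocess.check_call(
                    ['parted', '-s', device,
                     'set', fields[0], 'boot', 'off'])

    for disk in ('/dev/sda', '/dev/sdb'):  # node disks, illustrative
        unset_bootable_flag(disk)

Unlike a wipe, this leaves the partition table and all data intact,
which is exactly the trade-off against the security rationale above.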