[openstack-dev] [Fuel] Wipe of the nodes' disks
Ryan Moe
rmoe at mirantis.com
Mon Dec 28 16:26:53 UTC 2015
>
>
> It's used in stop_deployment provision stage [0] and for control reboot
> [1].
>
> > Is it a fall back mechanism if the mcollective fails?
>
> Yes, it's like a fallback mechanism, but it is always used [2].
>
As I remember it, the use of SSH for stopping provisioning came from our
use of OS installers: while Anaconda was running, the only access to the
system was via SSH.
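The "agent first, SSH as a last resort" behaviour described above could be
sketched roughly as below. This is only an illustration of the pattern, not
Fuel's actual code; the function names and signatures are made up.

```python
# Hedged sketch of the "mcollective first, SSH fallback" pattern.
# `via_mcollective` and `via_ssh` are hypothetical callables, not Fuel APIs.
def stop_provisioning(node, via_mcollective, via_ssh):
    """Try the mcollective agent first; fall back to SSH if it fails."""
    try:
        return via_mcollective(node)
    except Exception:
        # e.g. the agent is unreachable while the OS installer owns the node
        return via_ssh(node)
```

The point of the fallback is exactly the Anaconda case: the management agent
may not be running yet, while SSH into the installer environment still works.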
>
> > That might have been a side effect of cobbler and we should test if it's
> > still an issue for IBP.
>
> As I can see from the code, the partition table is always wiped before
> provisioning [3].
>
There were some intermittent provisioning failures (particularly with Ceph,
as I recall) when we only wiped the disks before provisioning. These
failures were caused by stale LVM and RAID metadata. Wiping the disks when
the nodes were deleted and again before provisioning fixed these problems.
I'm not sure this was a side effect of our old provisioning method either:
when we relied on the OS installers, we generated kickstart and preseed
files that just used the standard LVM and mdadm utilities to partition the
system.
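For reference, clearing stale LVM/RAID metadata usually comes down to a
short sequence of standard utilities. The sketch below is my own
illustration, not Fuel's wipe code; the device path and the dry-run default
are assumptions.

```python
# Minimal sketch (not Fuel's actual implementation) of clearing stale
# LVM/RAID metadata from a disk before provisioning.
import subprocess

def wipe_disk(disk="/dev/sdb", dry_run=True):
    """Build (and optionally run) the commands that clear stale metadata."""
    commands = [
        # Zero any mdadm superblock so the kernel does not auto-assemble
        # a stale RAID array on the next boot.
        ["mdadm", "--zero-superblock", "--force", disk],
        # Remove filesystem/LVM/RAID signatures recognised by libblkid.
        ["wipefs", "--all", disk],
        # Clobber the partition table (first MiB) for good measure.
        ["dd", "if=/dev/zero", f"of={disk}", "bs=1M", "count=1"],
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands
```

Doing this both at node deletion and again right before provisioning, as
described above, covers the case where a node is re-added with old metadata
still on its disks.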
Thanks,
-Ryan