[openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?
Vladyslav Drok
vdrok at mirantis.com
Fri May 27 15:32:40 UTC 2016
On Fri, May 27, 2016 at 5:52 PM, Vasyl Saienko <vsaienko at mirantis.com>
wrote:
> Lucas, Andrew
>
> Thanks for the fast response.
>
> On Fri, May 27, 2016 at 4:53 PM, Andrew Laski <andrew at lascii.com> wrote:
>
>>
>>
>> On Fri, May 27, 2016, at 09:25 AM, Lucas Alvares Gomes wrote:
>> > Hi,
>> >
>> > Thanks for bringing this up Vasyl!
>> >
>> > > At the moment, Nova with the ironic virt_driver considers an instance
>> > > deleted while, on the Ironic side, the node goes into cleaning, which
>> > > can take a while. As a result, the current implementation of the Nova
>> > > tempest tests doesn't work when Ironic is enabled.
>>
>> What is the actual failure? Is it a capacity issue because nodes do not
>> become available again quickly enough?
>>
>>
> The actual failure is that the tempest community doesn't want to accept
> option 1 (https://review.openstack.org/315422/), and I'm not sure that it
> is the right way either.
>
The reason this was added was to make the tempest smoke tests (run as part
of grenade) pass on a limited number of nodes (initially 3). Now we have 7
nodes created in the gate, so we might be OK running grenade, but we can't
increase concurrency to anything more than 1 in this case. Maybe we should
run our own tests, not smoke, as part of grenade?
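To be clear about what "7 nodes" and "concurrency 1" mean in job terms,
here is an illustrative devstack/devstack-gate snippet. The variable names
are the ones I believe the jobs use, not the exact gate job definition:

    # Illustrative job settings (assumed variable names):
    IRONIC_VM_COUNT=7        # fake baremetal nodes created by devstack
    TEMPEST_CONCURRENCY=1    # serialize tests so the nodes aren't exhausted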
>
>> > >
>> > > There are two possible options for fixing this:
>> > >
>> > > 1. Update the Nova tempest test scenarios, for the Ironic case, to
>> > >    wait until cleaning is finished and the Ironic node goes to the
>> > >    'available' state.
>> > >
>> > > 2. Mark the instance as deleted in Nova only after cleaning is
>> > >    finished on the Ironic side.
>> > >
>> > > I'm personally inclined toward option 2. From the user's side,
>> > > successful instance termination means that no instance data is
>> > > available any more and nobody can access/restore that data. The
>> > > current implementation breaks this rule: the instance is marked as
>> > > successfully deleted while in fact it may not have been cleaned, it
>> > > may fail to clean, and the user will not know anything about it.
>> > >
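Just to make the comparison concrete: on the tempest side, option 1 would
amount to a polling helper along the following lines. This is only a rough
sketch: the helper name is made up, and it assumes a baremetal client that
exposes show_node() the way tempest's ironic client does.

    import time

    def wait_for_node_available(baremetal_client, node_id,
                                timeout=1800, interval=10):
        # Poll the node until cleaning finishes and it becomes
        # schedulable again, or fail on timeout / clean failure.
        deadline = time.time() + timeout
        while time.time() < deadline:
            _, node = baremetal_client.show_node(node_id)
            state = node['provision_state']
            if state == 'available':
                return node
            if state == 'clean failed':
                raise RuntimeError('Cleaning failed for node %s' % node_id)
            time.sleep(interval)
        raise RuntimeError('Node %s not available after %s seconds'
                           % (node_id, timeout))

The downside, of course, is that with long-running clean steps the timeout
has to be very generous.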
>
>> >
>> > I don't really like option #2: cleaning can take several hours
>> > depending on the configuration of the node. I think that it would be a
>> > really bad experience if the user of the cloud had to wait a really
>> > long time before their resources are available again once they delete
>> > an instance. The idea of marking the instance as deleted in Nova
>> > quickly is aligned with our idea of making bare metal deployments
>> > look-and-feel like VMs for the end user. It is also (one of) the
>> > reason(s) why we have separate states in Ironic for DELETING and
>> > CLEANING.
>>
>
> The resources will be available only if there are other available
> baremetal nodes in the cloud. The user doesn't have the ability to track
> the status of available resources without admin access.
>
>
>> I agree. From a user perspective, once they've issued a delete, their
>> instance should be gone. Any delay in that actually happening is purely
>> an internal implementation detail that they should not care about.
>>
>> >
>> > I think we should go with #1, but instead of erasing the whole disk
>> > for real, maybe we should have a "fake" clean step that runs quickly,
>> > for test purposes only?
>> >
>>
>
> At the gates we just wait for bootstrap and for the callback from the
> node when cleaning starts. All heavy operations are postponed. We can
> disable automated_clean, but that means cleaning is not tested.
>
>
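On the config side, turning automated cleaning off is a one-line change in
ironic.conf:

    [conductor]
    automated_clean = False

but, as you say, cleaning is then never exercised. Lucas's "fake" clean
step could instead be done with an out-of-tree IPA hardware manager. A
rough sketch, where the class name is made up and the no-op step shadows
the built-in erase_devices:

    from ironic_python_agent import hardware

    class NoopCleanHardwareManager(hardware.HardwareManager):
        # Hypothetical test-only manager that replaces disk erasure
        # with an instant no-op, so cleaning completes in seconds.

        def evaluate_hardware_support(self):
            # Advertise the highest support level so our steps take
            # precedence over GenericHardwareManager's.
            return hardware.HardwareSupport.SERVICE_PROVIDER

        def get_clean_steps(self, node, ports):
            # Same name/priority as the in-tree erase_devices step, so
            # the agent runs this one instead.
            return [{'step': 'erase_devices',
                     'priority': 10,
                     'interface': 'deploy',
                     'reboot_requested': False,
                     'abortable': True}]

        def erase_devices(self, node, ports):
            # No-op: report success without touching the disks.
            return True

That way the full DELETING -> CLEANING -> AVAILABLE path is still exercised
in the gate, just without the expensive disk wipe.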
>> > Cheers,
>> > Lucas