[Openstack-operators] [openstack-dev] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone
Ian Cordasco
ian.cordasco at RACKSPACE.COM
Thu Feb 19 16:49:03 UTC 2015
On 2/2/15, 15:41, "Morgan Fainberg" <morgan.fainberg at gmail.com> wrote:
>
>On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gordon0 at gmail.com)
>wrote:
>
>On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg
><morgan.fainberg at gmail.com> wrote:
>
>I think the simple answer is "yes". We (keystone) should emit
>notifications, and yes, other projects should listen.
>
>The only thing really in discussion should be:
>
>1: soft delete or hard delete? Does the service mark the resources as
>orphaned, or just delete them (leave this to nova, cinder, etc. to discuss)
>
>2: how to clean up when an event is missed (e.g. the rabbit bus goes out
>to lunch).
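>
>For concreteness, a rough (untested) sketch of what listening for these
>events might look like with oslo.messaging. The topic name, the
>'identity.project.deleted' event type, and the payload key are my
>assumptions about keystone's notification format, and
>cleanup_project_resources() is a made-up, service-specific hook:
>
>import oslo_messaging
>from oslo_config import cfg
>
>
>class ProjectDeleteEndpoint(object):
>    def info(self, ctxt, publisher_id, event_type, payload, metadata):
>        # keystone's basic notifications carry the project id in the
>        # payload's 'resource_info' field
>        if event_type == 'identity.project.deleted':
>            cleanup_project_resources(payload['resource_info'])
>
>
>transport = oslo_messaging.get_notification_transport(cfg.CONF)
>listener = oslo_messaging.get_notification_listener(
>    transport,
>    [oslo_messaging.Target(topic='notifications')],
>    [ProjectDeleteEndpoint()])
>listener.start()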
>
>
>I disagree slightly. I don't think projects should directly listen to the
>Keystone notifications; I would rather have the API be something from a
>keystone-owned library, say keystonemiddleware. So something like this:
>
>from keystonemiddleware import janitor
>
>keystone_janitor = janitor.Janitor()
>keystone_janitor.register_callback(nova.tenant_cleanup)
>keystone_janitor.spawn_greenthread()
>
>That way each project doesn't have to include a lot of boilerplate code,
>and keystone can easily modify/improve/upgrade the notification mechanism.
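>
>To flesh that out a little: a nova-side callback registered with that
>hypothetical janitor might look like the following. Every name here (the
>janitor module itself, the list_instances_by_tenant() helper) is made up
>for illustration:
>
>from keystonemiddleware import janitor  # hypothetical module, as above
>
>
>def tenant_cleanup(tenant_id):
>    # hypothetical helper returning all instances owned by the tenant
>    for instance in list_instances_by_tenant(tenant_id):
>        # whether to delete, shut down, or archive is exactly the
>        # policy question discussed elsewhere in this thread
>        instance.delete()
>
>
>keystone_janitor = janitor.Janitor()
>keystone_janitor.register_callback(tenant_cleanup)
>keystone_janitor.spawn_greenthread()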
>
>Sure. I’d call that an implementation detail of where it actually lives.
>I’d be fine with it being part of the Keystone Middleware package
>(probably something separate from auth_token).
>
>
>—Morgan
>
I think my only concern is what other projects should do, and how much we
want to allow operators to configure this. I can imagine it being
preferable to ship safe (non-data-destroying) policies as the default and
to allow operators to configure more destructive policies as part of
deploying certain services.
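
Something like the following oslo.config option is the kind of knob I have
in mind. The option and group names are made up; the point is that the
default is the non-destructive choice and operators have to opt in to
anything that destroys data:

from oslo_config import cfg

cleanup_opts = [
    cfg.StrOpt('project_cleanup_action',
               default='log',
               choices=['log', 'shutdown', 'archive', 'delete'],
               help='What to do with resources whose project has been '
                    'deleted from keystone. The default only logs a '
                    'warning; destructive actions are strictly opt-in.'),
]

cfg.CONF.register_opts(cleanup_opts, group='keystone_janitor')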
>
>--Morgan
>
>Sent via mobile
>
>> On Feb 2, 2015, at 10:16, Matthew Treinish <mtreinish at kortar.org> wrote:
>>
>>> On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
>>> This came up in the operators mailing list back in June [1] but given
>>> the subject probably didn't get much attention.
>>>
>>> Basically there is a really old bug [2] from Grizzly that is still a
>>> problem and affects multiple projects. A tenant can be deleted in
>>> Keystone even though resources in other projects still belong to it,
>>> and those resources aren't cleaned up.
>>
>> I agree this probably can be a major pain point for users. We've had to
>> work around it in tempest by creating things like:
>>
>> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
>> and
>> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
>>
>> to ensure we aren't leaving dangling resources after a run. But this
>> doesn't work in all cases either (like with tenant isolation enabled).
>>
>> I also know there is a stackforge project that is attempting something
>> similar here:
>>
>> http://git.openstack.org/cgit/stackforge/ospurge/
>>
>> It would be much nicer if the burden of doing this was taken off users
>> and it was just handled cleanly under the covers.
>>
>>>
>>> Keystone implemented event notifications back in Havana [3] but the
>>> other projects aren't listening to them to know when a project has
>>> been deleted so they can act accordingly.
>>>
>>> The bug has several people saying "we should talk about this at the
>>> summit" across several summits, but I can't find any discussion or
>>> summit sessions tied back to the bug.
>>>
>>> Given this is an operations and cross-project issue, I'd like to
>>> bring it up again for the Vancouver summit if there is still interest
>>> (which I'm assuming there is from operators).
>>
>> I'd definitely support having a cross-project session on this.
>>
>>>
>>> There is a blueprint specifically for the tenant deletion case but
>>> it's targeted only at Horizon [4].
>>>
>>> Is anyone still working on this? Is there sufficient interest in a
>>> cross-project session at the L summit?
>>>
>>> Thinking out loud, even if nova doesn't listen to events from
>>> keystone, we could at least have a periodic task that looks for
>>> instances whose tenant no longer exists in keystone and then takes
>>> some action (log a warning, shut down/archive/reap, etc.).
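>>>
>>> As a rough sketch of that idea (all names here are assumed, not real
>>> nova code, and ks_client is a keystoneclient v3 client):
>>>
>>> from keystoneclient import exceptions as ks_exc
>>>
>>> def find_orphaned_instances(context, ks_client):
>>>     # walk all instances and yield those whose project is gone
>>>     for instance in list_all_instances(context):  # hypothetical helper
>>>         try:
>>>             ks_client.projects.get(instance.project_id)
>>>         except ks_exc.NotFound:
>>>             # orphaned: log a warning, shut down, archive, or reap
>>>             # according to whatever policy the operator configured
>>>             yield instance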
>>>
>>> There is also a spec for L to transfer instance ownership [5] which
>>> could maybe come into play, but I wouldn't depend on it.
>>>
>>> [1] http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
>>> [2] https://bugs.launchpad.net/nova/+bug/967832
>>> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
>>> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
>>> [5] https://review.openstack.org/#/c/105367/
>>
>> -Matt Treinish