[openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

Zane Bitter zbitter at redhat.com
Wed Feb 25 23:42:42 UTC 2015


On 25/02/15 15:37, Joe Gordon wrote:
>
>
> On Sat, Feb 21, 2015 at 5:03 AM, Tim Bell <Tim.Bell at cern.ch> wrote:
>
>
>     A few inline comments and a general point
>
>     How do we handle scenarios like volumes when we have a per-component
>     janitor rather than a single co-ordinator?
>
>     To be clean:
>
>     1. nova should shut down the instance
>     2. nova should then ask for the volume to be detached
>     3. cinder could then perform the 'project deletion' action as
>     configured by the operator (such as shelve or backup)
>     4. nova could then perform the 'project deletion' action as
>     configured by the operator (such as VM delete or shelve)
>
>     If we have both cinder and nova responding to a single message,
>     cinder would do 3 immediately while nova is still doing the shutdown,
>     which is likely to lead to a volume which could not be shelved cleanly.
>
>     The problem I see with messages is that co-ordination of the actions
>     may require ordering between the components. The disable/enable
>     cases would show this in an even worse scenario.
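To make the ordering concern concrete, here is a rough sketch of what a
single co-ordinator enforcing those four steps might look like. Every name
below is hypothetical (these are not existing nova or cinder APIs); it only
illustrates that steps 3 and 4 have to wait for 1 and 2:

# Hypothetical co-ordinator enforcing the ordering above; the nova/cinder
# helpers stand in for whatever RPC or REST calls a real implementation
# would make, and the *_action arguments are the operator-configured
# 'project deletion' policies (delete, shelve, backup, ...).
def cleanup_project(project_id, nova, cinder, nova_action, cinder_action):
    servers = nova.list_servers(project_id)

    # 1. nova shuts down the instances
    for server in servers:
        nova.shutdown(server)

    # 2. nova asks for each attached volume to be detached
    for server in servers:
        for volume in nova.attached_volumes(server):
            nova.detach_volume(server, volume)

    # 3. only now can cinder safely apply its configured action
    for volume in cinder.list_volumes(project_id):
        cinder_action(volume)

    # 4. finally nova applies its configured action to the instances
    for server in servers:
        nova_action(server)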
>
>
> You raise two good points.
>
> * How to clean something up may be different for different clouds
> * Some cleanup operations have to happen in a specific order
>
> Not sure what the best way to address those two points is. Perhaps the
> best way forward is an openstack-specs spec to hash out these details.

For completeness, if nothing else, it should be noted that another 
option is for Keystone to refuse to delete the project until all 
resources within it have been removed by a user.
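In code terms that would amount to something like the following check in the
project-delete path. This is purely a sketch to show the shape of it; none of
these names exist in keystone:

# Sketch only: refuse to delete a project while any registered service
# still reports resources for it. 'services', has_resources() and
# backend_delete are made-up names, not real keystone or client APIs.
class ProjectNotEmpty(Exception):
    pass

def delete_project(project_id, services, backend_delete):
    busy = [svc.name for svc in services if svc.has_resources(project_id)]
    if busy:
        raise ProjectNotEmpty("project %s still owns resources in: %s"
                              % (project_id, ", ".join(busy)))
    # safe to remove the project only once every service reports empty
    backend_delete(project_id)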

It's hard to know at this point which would be more painful. Both sound 
horrific in their own way :D

cheers,
Zane.

>
>     Tim
>
>      > -----Original Message-----
>      > From: Ian Cordasco [mailto:ian.cordasco at RACKSPACE.COM]
>      > Sent: 19 February 2015 17:49
>      > To: OpenStack Development Mailing List (not for usage questions);
>     Joe Gordon
>      > Cc: openstack-operators at lists.openstack.org
>      > Subject: Re: [Openstack-operators] [openstack-dev] Resources
>     owned by a
>      > project/tenant are not cleaned up after that project is deleted
>     from keystone
>      >
>      >
>      >
>      > On 2/2/15, 15:41, "Morgan Fainberg" <morgan.fainberg at gmail.com> wrote:
>      >
>      > >
>      > >On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gordon0 at gmail.com)
>      > >wrote:
>      > >
>      > >
>      > >
>      > >On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg
>      > ><morgan.fainberg at gmail.com> wrote:
>      > >
>      > >I think the simple answer is "yes". We (keystone) should emit
>      > >notifications. And yes, other projects should listen.
>      > >
>      > >The only thing really in discussion should be:
>      > >
>      > >1: soft delete or hard delete? Does the service mark it as orphaned,
>      > >or just delete? (leave this to nova, cinder, etc. to discuss)
>      > >
>      > >2: how to clean up when an event is missed (e.g. the rabbit bus goes
>      > >out to lunch).
>      > >
>      > >I disagree slightly. I don't think projects should directly listen to
>      > >the Keystone notifications; I would rather have the API be something
>      > >from a keystone-owned library, say keystonemiddleware. So something
>      > >like this:
>      > >
>      > >
>      > >from keystonemiddleware import janitor
>      > >
>      > >
>      > >keystone_janitor = janitor.Janitor()
>      > >keystone_janitor.register_callback(nova.tenant_cleanup)
>      > >
>      > >
>      > >keystone_janitor.spawn_greenthread()
>      > >
>      > >
>      > >That way each project doesn't have to include a lot of boilerplate
>      > >code, and keystone can easily modify/improve/upgrade the notification
>      > >mechanism.
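For what it's worth, a rough sketch of what such a janitor helper could look
like on the inside. keystonemiddleware has no such module today, so everything
below is an assumption, apart from 'identity.project.deleted', which matches
the event type keystone's notifications use for project deletion:

import eventlet

class Janitor(object):
    """Hypothetical helper that fans keystone project-deletion
    notifications out to per-service cleanup callbacks."""

    def __init__(self, notification_source):
        # notification_source is assumed to yield (event_type, project_id)
        # pairs from the message bus; the actual transport is out of scope.
        self._source = notification_source
        self._callbacks = []

    def register_callback(self, callback):
        # e.g. nova.tenant_cleanup, cinder.tenant_cleanup, ...
        self._callbacks.append(callback)

    def spawn_greenthread(self):
        return eventlet.spawn(self._run)

    def _run(self):
        for event_type, project_id in self._source:
            if event_type != 'identity.project.deleted':
                continue
            for callback in self._callbacks:
                callback(project_id)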
>      > >
>      > >
>
>
>     I assume janitor functions can be used for
>
>     - enable/disable project
>     - enable/disable user
>
>     > >
>     > >Sure. I'd call that an implementation detail of where it actually
>     > >lives. I'd be fine with that being part of the Keystone Middleware
>     > >package (probably something separate from auth_token).
>     > >
>     > >
>     > >—Morgan
>     > >
>     >
>     > I think my only concern is what other projects should do and how much we
>     > want to allow operators to configure this. I can imagine it being preferable
>     > to have safe (without losing much data) policies for this as a default and to
>     > allow operators to configure more destructive policies as part of deploying
>     > certain services.
>     >
>
>     Depending on the cloud, an operator could want different semantics
>     for the impact of deleting a project: delete, 'shelve'-style, or
>     maybe just disable.
>
>      > >
>      > >--Morgan
>      > >
>      > >Sent via mobile
>      > >
>      > >> On Feb 2, 2015, at 10:16, Matthew Treinish <mtreinish at kortar.org> wrote:
>      > >>
>      > >>> On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
>      > >>> This came up in the operators mailing list back in June [1] but
>      > >>> given the subject probably didn't get much attention.
>      > >>>
>      > >>> Basically there is a really old bug [2] from Grizzly that is still a
>      > >>> problem and affects multiple projects. A tenant can be deleted in
>      > >>> Keystone even though other resources in other projects are under
>      > >>> that project, and those resources aren't cleaned up.
>      > >>
>      > >> I agree this probably can be a major pain point for users. We've had
>      > >> to work around it in tempest by creating things like:
>      > >>
>      > >>
>      > >> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
>      > >> and
>      > >> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
>      > >>
>      > >> to ensure we aren't leaving dangling resources after a run. But this
>      > >> doesn't work in all cases either (like with tenant isolation enabled).
>      > >>
>      > >> I also know there is a stackforge project that is attempting
>      > >>something similar
>      > >> here:
>      > >>
>      > >> http://git.openstack.org/cgit/stackforge/ospurge/
>      > >>
>      > >> It would be much nicer if the burden for doing this was taken off
>      > >> users and this was just handled cleanly under the covers.
>      > >>
>      > >>>
>      > >>> Keystone implemented event notifications back in Havana [3] but the
>      > >>> other projects aren't listening on them to know when a project has
>      > >>> been deleted and act accordingly.
>      > >>>
>      > >>> The bug has several people saying "we should talk about this at the
>      > >>> summit" for several summits, but I can't find any discussion or summit
>      > >>> sessions related back to the bug.
>      > >>>
>      > >>> Given this is an operations and cross-project issue, I'd like to
>      > >>> bring it up again for the Vancouver summit if there is still
>      > >>> interest (which I'm assuming there is from operators).
>      > >>
>      > >> I'd definitely support having a cross-project session on this.
>      > >>
>      > >>>
>      > >>> There is a blueprint specifically for the tenant deletion case, but
>      > >>> it's targeted only at Horizon [4].
>      > >>>
>      > >>> Is anyone still working on this? Is there sufficient interest in a
>      > >>> cross-project session at the L summit?
>      > >>>
>      > >>> Thinking out loud, even if nova doesn't listen to events from
>      > >>> keystone, we could at least have a periodic task that looks for
>      > >>> instances where the tenant no longer exists in keystone and then
>      > >>> takes some action (log a warning, shutdown/archive, reap, etc.).
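As a sketch of that idea only: nothing below is an existing nova periodic
task, and the helpers are placeholders for the real DB query, keystone lookup
and operator-configured policy.

import time

def reap_orphaned_instances(list_instances, project_exists, action,
                            interval=3600):
    # Periodically scan all instances and apply the configured action to
    # any instance whose project has disappeared from keystone.
    while True:
        for instance in list_instances():
            if not project_exists(instance.project_id):
                # policy could be: log a warning, shelve, archive or reap
                action(instance)
        time.sleep(interval)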
>      > >>>
>      > >>> There is also a spec for L to transfer instance ownership [5]
>     which
>      > >>>could  maybe come into play, but I wouldn't depend on it.
>      > >>>
>      > >>> [1] http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
>      > >>> [2] https://bugs.launchpad.net/nova/+bug/967832
>      > >>> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
>      > >>> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
>      > >>> [5] https://review.openstack.org/#/c/105367/
>      > >>
>      > >> -Matt Treinish
>      > >
>      > >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



