<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 25, 2015 at 3:02 PM, Matt Joyce <span dir="ltr"><<a href="mailto:matt@nycresistor.com" target="_blank">matt@nycresistor.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Wondering if heat should be performing this orchestration.<br></blockquote><div><br></div><div>I wouldn't expect heat to have access to everything that needs to be cleaned up.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
It would provide a more pluggable front end to the action set.<br>
<span class="HOEnZb"><font color="#888888"><br>
-matt<br>
</font></span><div><div class="h5"><br>
On Feb 25, 2015 2:37 PM, Joe Gordon <<a href="mailto:joe.gordon0@gmail.com">joe.gordon0@gmail.com</a>> wrote:<br>
><br>
><br>
><br>
> On Sat, Feb 21, 2015 at 5:03 AM, Tim Bell <<a href="mailto:Tim.Bell@cern.ch">Tim.Bell@cern.ch</a>> wrote:<br>
>><br>
>><br>
>> A few inline comments and a general point<br>
>><br>
>> How do we handle scenarios like volumes when we have a per-component janitor rather than a single co-ordinator?<br>
>><br>
>> To be clean:<br>
>><br>
>> 1. nova should shut down the instance<br>
>> 2. nova should then ask for the volume to be detached<br>
>> 3. cinder could then perform the 'project deletion' action as configured by the operator (such as shelve or backup)<br>
>> 4. nova could then perform the 'project deletion' action as configured by the operator (such as VM delete or shelve)<br>
>><br>
>> If we have both cinder and nova responding to a single message, cinder would do 3 immediately while nova was still doing the shutdown, which is likely to lead to a volume that could not be shelved cleanly.<br>
>><br>
>> The problem I see with messages is that co-ordinating the actions may require ordering between the components. The disable/enable cases would show this in an even worse way.<br>
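>><br>
>> To make the ordering point concrete, a co-ordinated flow would look roughly like the sketch below (purely illustrative Python: the nova/cinder client objects and every method name are hypothetical placeholders, not existing APIs).<br>
>><br>
>> def cleanup_project(project_id, nova, cinder):<br>
>>     # 1. stop the project's instances first so their volumes are quiesced<br>
>>     for server in nova.servers_owned_by(project_id):<br>
>>         nova.stop(server)<br>
>>         # 2. detach volumes before cinder acts on them<br>
>>         for volume in nova.attached_volumes(server):<br>
>>             nova.detach_volume(server, volume)<br>
>>     # 3. cinder applies the operator-configured action (backup, shelve, ...)<br>
>>     cinder.run_project_deletion_action(project_id)<br>
>>     # 4. finally nova applies its action (delete or shelve)<br>
>>     nova.run_project_deletion_action(project_id)<br>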
><br>
><br>
> You raise two good points. <br>
><br>
> * How to clean something up may be different for different clouds<br>
> * Some cleanup operations have to happen in a specific order<br>
><br>
> Not sure what the best way to address those two points is. Perhaps the best way forward is an openstack-specs spec to hash out these details.<br>
><br>
> <br>
>><br>
>> Tim<br>
>><br>
>> > -----Original Message-----<br>
>> > From: Ian Cordasco [mailto:<a href="mailto:ian.cordasco@RACKSPACE.COM">ian.cordasco@RACKSPACE.COM</a>]<br>
>> > Sent: 19 February 2015 17:49<br>
>> > To: OpenStack Development Mailing List (not for usage questions); Joe Gordon<br>
>> > Cc: <a href="mailto:openstack-operators@lists.openstack.org">openstack-operators@lists.openstack.org</a><br>
>> > Subject: Re: [Openstack-operators] [openstack-dev] Resources owned by a<br>
>> > project/tenant are not cleaned up after that project is deleted from keystone<br>
>> ><br>
>> ><br>
>> ><br>
>> > On 2/2/15, 15:41, "Morgan Fainberg" <<a href="mailto:morgan.fainberg@gmail.com">morgan.fainberg@gmail.com</a>> wrote:<br>
>> ><br>
>> > ><br>
>> > >On February 2, 2015 at 1:31:14 PM, Joe Gordon (<a href="mailto:joe.gordon0@gmail.com">joe.gordon0@gmail.com</a>)<br>
>> > >wrote:<br>
>> > ><br>
>> > ><br>
>> > ><br>
>> > >On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg<br>
>> > ><<a href="mailto:morgan.fainberg@gmail.com">morgan.fainberg@gmail.com</a>> wrote:<br>
>> > ><br>
>> > >I think the simple answer is "yes". We (keystone) should emit<br>
>> > >notifications. And yes, other projects should listen.<br>
>> > ><br>
>> > >The only thing really in discussion should be:<br>
>> > ><br>
>> > >1: soft delete or hard delete? Does the service mark it as orphaned, or<br>
>> > >just delete it? (leave this to nova, cinder, etc. to discuss)<br>
>> > ><br>
>> > >2: how to clean up when an event is missed (e.g. the rabbit bus goes out<br>
>> > >to lunch).<br>
>> > ><br>
>> > >I disagree slightly. I don't think projects should directly listen to<br>
>> > >the Keystone notifications; I would rather have the API be something<br>
>> > >from a keystone-owned library, say keystonemiddleware. So something<br>
>> > >like this:<br>
>> > ><br>
>> > ><br>
>> > >from keystonemiddleware import janitor<br>
>> > ><br>
>> > ># each service registers the cleanup callbacks it cares about<br>
>> > >keystone_janitor = janitor.Janitor()<br>
>> > >keystone_janitor.register_callback(nova.tenant_cleanup)<br>
>> > ><br>
>> > ># the janitor consumes keystone's notifications in the background<br>
>> > >keystone_janitor.spawn_greenthread()<br>
>> > ><br>
>> > ><br>
>> > >That way each project doesn't have to include a lot of boilerplate<br>
>> > >code, and keystone can easily modify/improve/upgrade the notification<br>
>> > >mechanism.<br>
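>> > ><br>
>> > >For completeness, the callback a service registers could be as small as the sketch below (illustrative only; tenant_cleanup and what it logs stand in for whatever policy the service actually applies):<br>
>> > ><br>
>> > >import logging<br>
>> > ><br>
>> > >LOG = logging.getLogger(__name__)<br>
>> > ><br>
>> > >def tenant_cleanup(project_id):<br>
>> > >    # invoked by the janitor greenthread once keystone reports that<br>
>> > >    # project_id was deleted; a real callback would apply the<br>
>> > >    # operator-configured policy (delete, shelve, archive, ...)<br>
>> > >    LOG.warning("project %s was deleted in keystone, cleaning up its "<br>
>> > >                "resources", project_id)<br>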
>> > ><br>
>> > ><br>
>><br>
>><br>
>> I assume the janitor functions can be used for:<br>
>><br>
>> - enable/disable project<br>
>> - enable/disable user<br>
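>><br>
>> If so, one could imagine the registration growing per-event hooks, something like the sketch below (the API shape, event names and nova-side handlers are all hypothetical, just to illustrate the idea):<br>
>><br>
>> keystone_janitor.register_callback('identity.project.deleted', nova.tenant_cleanup)<br>
>> keystone_janitor.register_callback('identity.project.disabled', nova.suspend_tenant_resources)<br>
>> keystone_janitor.register_callback('identity.user.disabled', nova.handle_user_disabled)<br>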
>><br>
>> > ><br>
>> > >Sure. I’d treat where that actually lives as an implementation<br>
>> > >detail. I’d be fine with it being part of the Keystone<br>
>> > >Middleware package (probably something separate from auth_token).<br>
>> > ><br>
>> > ><br>
>> > >—Morgan<br>
>> > ><br>
>> ><br>
>> > I think my only concern is what other projects should do and how much we<br>
>> > want to allow operators to configure this. I can imagine it being preferable to<br>
>> > have safe (without losing much data) policies for this as a default and to allow<br>
>> > operators to configure more destructive policies as part of deploying certain<br>
>> > services.<br>
>> ><br>
>><br>
>> Depending on the cloud, an operator could want different semantics for what deleting a project does to its resources: delete them, 'shelve' them, or maybe just disable them.<br>
>><br>
>> ><br>
>> > ><br>
>> > >--Morgan<br>
>> > ><br>
>> > >Sent via mobile<br>
>> > ><br>
>> > >> On Feb 2, 2015, at 10:16, Matthew Treinish <<a href="mailto:mtreinish@kortar.org">mtreinish@kortar.org</a>> wrote:<br>
>> > >><br>
>> > >>> On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:<br>
>> > >>> This came up in the operators mailing list back in June [1] but<br>
>> > >>>given the subject probably didn't get much attention.<br>
>> > >>><br>
>> > >>> Basically there is a really old bug [2] from Grizzly that is still a<br>
>> > >>>problem and affects multiple projects. A tenant can be deleted in<br>
>> > >>>Keystone even though other resources in other projects are under<br>
>> > >>>that project, and those resources aren't cleaned up.<br>
>> > >><br>
>> > >> I agree this probably can be a major pain point for users. We've had<br>
>> > >>to work around it in tempest by creating things like:<br>
>> > >><br>
>> > >><br>
>> > >><a href="http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py" target="_blank">http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py</a><br>
>> > >> and<br>
>> > >><a href="http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py" target="_blank">http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py</a><br>
>> > >><br>
>> > >> to ensure we aren't leaving dangling resources after a run. But this doesn't<br>
>> > >>work in all cases either (like with tenant isolation enabled).<br>
>> > >><br>
>> > >> I also know there is a stackforge project that is attempting<br>
>> > >>something similar<br>
>> > >> here:<br>
>> > >><br>
>> > >> <a href="http://git.openstack.org/cgit/stackforge/ospurge/" target="_blank">http://git.openstack.org/cgit/stackforge/ospurge/</a><br>
>> > >><br>
>> > >> It would be much nicer if the burden for doing this was taken off<br>
>> > >>users and this was just handled cleanly under the covers.<br>
>> > >><br>
>> > >>><br>
>> > >>> Keystone implemented event notifications back in Havana [3] but the<br>
>> > >>>other projects aren't listening on them to know when a project has<br>
>> > >>>been deleted and act accordingly.<br>
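>> > >>><br>
>> > >>>For reference, consuming those notifications directly with oslo.messaging looks roughly like the sketch below (the 'notifications' topic, the identity.project.deleted event type and the resource_info payload key follow keystone's notification docs for the default format, but treat the details as an approximation rather than a tested recipe):<br>
>> > >>><br>
>> > >>>from oslo_config import cfg<br>
>> > >>>import oslo_messaging<br>
>> > >>><br>
>> > >>>class ProjectDeletedEndpoint(object):<br>
>> > >>>    def info(self, ctxt, publisher_id, event_type, payload, metadata):<br>
>> > >>>        if event_type == 'identity.project.deleted':<br>
>> > >>>            project_id = payload.get('resource_info')<br>
>> > >>>            # kick off this service's own cleanup for project_id here<br>
>> > >>><br>
>> > >>>transport = oslo_messaging.get_transport(cfg.CONF)<br>
>> > >>>targets = [oslo_messaging.Target(topic='notifications')]<br>
>> > >>>listener = oslo_messaging.get_notification_listener(<br>
>> > >>>    transport, targets, [ProjectDeletedEndpoint()])<br>
>> > >>>listener.start()<br>
>> > >>>listener.wait()<br>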
>> > >>><br>
>> > >>> The bug has several people saying "we should talk about this at the<br>
>> > >>>summit"<br>
>> > >>> for several summits, but I can't find any discussion or summit<br>
>> > >>>sessions related back to the bug.<br>
>> > >>><br>
>> > >>> Given this is an operations and cross-project issue, I'd like to<br>
>> > >>>bring it up again for the Vancouver summit if there is still<br>
>> > >>>interest (which I'm assuming there is from operators).<br>
>> > >><br>
>> > >> I'd definitely support having a cross-project session on this.<br>
>> > >><br>
>> > >>><br>
>> > >>> There is a blueprint specifically for the tenant deletion case but<br>
>> > >>> it's targeted only at Horizon [4].<br>
>> > >>><br>
>> > >>> Is anyone still working on this? Is there sufficient interest in a<br>
>> > >>> cross-project session at the L summit?<br>
>> > >>><br>
>> > >>> Thinking out loud, even if nova doesn't listen to events from<br>
>> > >>>keystone, we could at least have a periodic task that looks for<br>
>> > >>>instances whose tenant no longer exists in keystone and then<br>
>> > >>>takes some action (log a warning, shutdown/archive, reap, etc.).<br>
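>> > >>><br>
>> > >>>A rough sketch of what that check could look like, assuming a configured keystoneclient v3 client and a hypothetical helper that lists every instance (neither is shown here):<br>
>> > >>><br>
>> > >>>def find_orphaned_instances(keystone, list_all_instances):<br>
>> > >>>    # compare each instance's project against the projects keystone<br>
>> > >>>    # still knows about; anything left over belongs to a deleted tenant<br>
>> > >>>    valid_projects = set(p.id for p in keystone.projects.list())<br>
>> > >>>    orphans = [inst for inst in list_all_instances()<br>
>> > >>>               if inst.project_id not in valid_projects]<br>
>> > >>>    for inst in orphans:<br>
>> > >>>        # a real periodic task would log a warning and/or apply the<br>
>> > >>>        # operator-configured action here (shutdown, archive, reap, ...)<br>
>> > >>>        print("instance %s belongs to missing project %s"<br>
>> > >>>              % (inst.uuid, inst.project_id))<br>
>> > >>>    return orphans<br>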
>> > >>><br>
>> > >>> There is also a spec for L to transfer instance ownership [5] which<br>
>> > >>>could maybe come into play, but I wouldn't depend on it.<br>
>> > >>><br>
>> > >>> [1] <a href="http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html" target="_blank">http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html</a><br>
>> > >>> [2] <a href="https://bugs.launchpad.net/nova/+bug/967832" target="_blank">https://bugs.launchpad.net/nova/+bug/967832</a><br>
>> > >>> [3] <a href="https://blueprints.launchpad.net/keystone/+spec/notifications" target="_blank">https://blueprints.launchpad.net/keystone/+spec/notifications</a><br>
>> > >>> [4] <a href="https://blueprints.launchpad.net/horizon/+spec/tenant-deletion" target="_blank">https://blueprints.launchpad.net/horizon/+spec/tenant-deletion</a><br>
>> > >>> [5] <a href="https://review.openstack.org/#/c/105367/" target="_blank">https://review.openstack.org/#/c/105367/</a><br>
>> > >><br>
>> > >> -Matt Treinish<br>
>> > ><br>
>> > ><br>
>> ><br>
><br>
><br>
</div></div>__________________________________________________________________________<br>
<span class="im HOEnZb">OpenStack Development Mailing List (not for usage questions)<br>
</span><div class="HOEnZb"><div class="h5">Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div></div>