[openstack-dev] [Nova][Heat] Where does "Shelving" belong

Joe Gordon joe.gordon0 at gmail.com
Tue Jun 25 17:54:10 UTC 2013


On Tue, Jun 25, 2013 at 10:38 AM, Andrew Laski
<andrew.laski at rackspace.com> wrote:

> On 06/25/13 at 09:42am, Joe Gordon wrote:
>
>> On Tue, Jun 25, 2013 at 7:22 AM, Andrew Laski
>> <andrew.laski at rackspace.com> wrote:
>>
>>> I have a couple of reviews up to introduce the concept of shelving an
>>> instance into Nova.  The question has been raised as to whether or not
>>> this belongs in Nova, or more rightly belongs in Heat.  The blueprint for
>>> this feature can be found at
>>> https://blueprints.launchpad.net/nova/+spec/shelve-instance, but to make
>>> things easy I'll outline some of the goals here.
>>>
>>>
>>> The main use case that's being targeted is a user who wishes to stop an
>>> instance at the end of a workday and then restart it again at the start
>>> of their next workday, either the next day or after a weekend.  From a
>>> service provider standpoint, the difference between shelving and stopping
>>> an instance is that the contract allows removing that instance from the
>>> hypervisor at any point, so unshelving may move it to another host.
>>>
>>>
>>
>> The part that caught my eye as something that *may* be in Heat's domain
>> and is at least worth a discussion is the snapshotting and periodic task
>> part.
>>
>> From what I can tell, the use case for this is: I want to 'shutdown' my VM
>> overnight and save money since I am not using it, but I want to keep
>> everything looking the same.
>>
>> But in this use case I would want to automatically 'shelve' my instance
>> off the compute-server every night (not leave it on the server), and every
>> morning I would want it to autostart before I get to work (and re-attach
>> my volume and re-associate my floating-ip).  All of this sounds much
>> closer to using Heat and snapshotting than using 'shelving.'
>>
>
> The periodic task for removing a shelved instance from the hypervisor is a
> first-pass attempt at a mechanism for reclaiming resources; it is under
> discussion and will probably evolve over time.  But the motivation for
> reclaiming resources will be driven by deployment capacity or the desire to
> reshuffle instances or maybe something else that's important to a deployer.
> Not the user.  Since I see Heat as an advocate for user requests, not
> deployer concerns, I still think this falls outside of its scope.
>


If this weren't important to users, they would never do it.  And since it
promises cost savings for users, it does affect them.


That being said, I think a decent argument has been made for why this should
be in Nova.
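
To make the user-side flow concrete, here is a rough sketch of the overnight
use case, assuming python-novaclient grows shelve/unshelve bindings once the
reviews land (names here are illustrative, not the final API):

    # Sketch only: the end-of-day / start-of-day flow is driven from
    # outside Nova (cron, Heat, anything), since there is no autostart in
    # the proposal.  Assumes Server objects gain shelve()/unshelve().
    from novaclient.v1_1 import client

    nova = client.Client('myuser', 'mypassword', 'mytenant',
                         'http://keystone:5000/v2.0')
    server = nova.servers.find(name='devbox')

    # End of the workday: give up the hypervisor slot; uuid, networking,
    # volumes, and metadata all stay attached to the server record.
    server.shelve()

    # Next morning: bring it back, possibly on a different host.
    server.unshelve()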



>
> There's no concept of autostart included in shelving.  I agree that that
> gets beyond what should be performed in Nova.
>
>
>
>> Additionally, storing the shelved instance locally on the compute-node
>> until a simple periodic task migrates 'shelved' instances off into deep
>> storage seems like it has undesired side-effects.  For example, as long as
>> the shelved instance is on a compute-node you have to reserve CPU
>> resources for it; otherwise the instance may not be able to resume on the
>> same compute-node, invalidating the benefits (as far as I can tell) of
>> keeping the instance locally snapshotted.
>>
>
> You're correct that there's not a large benefit to a deployer unless
> resources are reclaimed.  Perhaps some small power savings, and the freedom
> to migrate the instance transparently if desired.  I would prefer to remove
> the instance when it's shelved rather than waiting for something, like a
> periodic task or admin API call, to trigger it.  But booting disk-based
> images can take a fairly long time, so I've optimized for the case of an
> instance being shelved for a day or a weekend.  That way users get
> acceptable unshelve times for the expected case, and deployers benefit when
> an instance is shelved longer term.  I don't think this needs to be set in
> stone, and the internal workings can be modified as we find ways to improve
> it.
>
>
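
For concreteness, a minimal sketch of the reclamation decision Andrew
describes: keep recently shelved instances local so unshelve stays fast, and
offload only the long-term ones.  The 48-hour window, the 'shelved' vm_state
string, and the field names are all assumptions for illustration:

    from datetime import datetime, timedelta

    # Assumed deployer-tunable knob: how long a shelved instance may stay
    # cached on its compute-node before being snapshotted off.
    SHELVED_OFFLOAD_TIME = timedelta(hours=48)

    def instances_to_offload(instances, now=None):
        """Pick shelved instances that have outstayed the local window."""
        now = now or datetime.utcnow()
        return [inst for inst in instances
                if inst['vm_state'] == 'shelved'
                and now - inst['shelved_at'] > SHELVED_OFFLOAD_TIME]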
>>
>>
>>
>>> From a user standpoint, what they're looking for is:
>>>
>>> The ability to retain the endpoint for API calls on that instance.  So
>>> v2/<tenant_id>/servers/<server_id> continues to work after the instance
>>> is unshelved.
>>>
>>> All networking, attached volumes, admin pass, metadata, and other user
>>> configurable properties remain unchanged when shelved/unshelved.  Other
>>> properties like task/vm/power state, host, and *_at may change.
>>>
>>> The ability to see that instance in their list of servers when shelved.
>>>
>>>
>> This sounds like a good reason to keep this in Nova.
>>
>>
>>
>>>
>>>
>>> Again, the objection that has been raised is that it seems like
>>> orchestration and therefore would belong in Heat.  While this is somewhat
>>> similar to a snapshot/destroy/rebuild workflow, there are certain
>>> properties of shelving in Nova that I can't see how to reproduce by
>>> handling this externally.  At least not without exposing Nova internals
>>> beyond a comfortable level.
>>>
>>>
>> What properties are those, and more importantly, why do I need them?
>>
>
> Mainly uuid, but also the server listing.  If Heat snapshots and removes
> an instance, it has no way to recreate it with the same uuid.  As much as I
> wish it weren't the case, this is important to users.
>

*sigh*
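
The uuid/endpoint point is easy to state concretely: the same URL keeps
answering before, during, and after a shelve, with only volatile fields
changing.  A sketch with placeholder endpoint, ids, and status strings:

    import requests

    # Placeholders for illustration; the SHELVED status name is assumed.
    url = 'http://nova-api:8774/v2/mytenant/servers/myserver-uuid'
    headers = {'X-Auth-Token': 'mytoken'}

    server = requests.get(url, headers=headers).json()['server']
    # Before: status ACTIVE.  While shelved: status SHELVED, host may be
    # gone.  After unshelve: ACTIVE again, same uuid, same URL -- which a
    # snapshot/destroy/rebuild done outside Nova could not preserve.
    print(server['id'], server['status'])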


>
>
>>
>>
>>> So I'd like to understand what the thinking is around why this belongs in
>>> Heat, and how that could be accomplished.
>>>