[openstack-dev] [Openstack] Optionally force instances to "stay put" on resize

John Garbutt john at johngarbutt.com
Tue Feb 19 12:20:51 UTC 2013


+1 to summit session

I think this is the other blueprint:
https://blueprints.launchpad.net/nova/+spec/host-maintenance

We also need to think about "block migration" in the --live case.
Not sure the user should be forced to choose though.

Added this blueprint and etherpad to help capture ideas so far:
https://etherpad.openstack.org/HavanaUnifyMigrateAndLiveMigrate

John

On 18 February 2013 16:31, Alex Glikson <GLIKSON at il.ibm.com> wrote:
>> If so, I can write up a blueprint and discussion for the design summit.
>
> +1
> There are a few related operations which have been discussed recently, namely
> 'migrate' without specifying the target host (to be used in the host
> maintenance scenario), and 'evacuate' (to be used in the HA scenario).
> Orchestration of such higher-level scenarios is probably another good
> candidate for a design summit topic.
>
> Regards,
> Alex Glikson
> IBM Research
>
> Jay Pipes <jaypipes at gmail.com> wrote on 18/02/2013 05:58:16 PM:
>
>> From: Jay Pipes <jaypipes at gmail.com>
>> To: openstack-dev at lists.openstack.org,
>> Date: 18/02/2013 06:01 PM
>> Subject: Re: [openstack-dev] [Openstack] Optionally force instances
>> to "stay put" on resize
>
>>
>> A while ago, I remember a discussion about the semantics around
>> migration, and I think I recommended moving towards a model where we just
>> have a single migrate API call instead of the existing live and non-live
>> migration calls -- which tend to confuse users.
>>
>> Is there still interest in consolidating the calls so that the eventual
>> novaclient CLI call would just be:
>>
>> nova migrate [--live] [--hints=...] [--disk-over-commit]
>>
>> If so, I can write up a blueprint and discussion for the design summit.
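>>
>> Roughly, the single call could dispatch on the --live flag internally.
>> A minimal, purely illustrative Python sketch (the function names below
>> are hypothetical placeholders, not the real Nova or novaclient code):
>>
>>     # hypothetical stand-ins for the two existing code paths
>>     def _cold_migrate(context, instance, scheduler_hints=None):
>>         print("cold-migrating %s with hints %s" % (instance, scheduler_hints))
>>
>>     def _live_migrate(context, instance, scheduler_hints=None,
>>                       disk_over_commit=False):
>>         print("live-migrating %s (disk_over_commit=%s)"
>>               % (instance, disk_over_commit))
>>
>>     def migrate(context, instance, live=False, scheduler_hints=None,
>>                 disk_over_commit=False):
>>         """One migrate entry point; the live flag picks the code path."""
>>         if live:
>>             return _live_migrate(context, instance, scheduler_hints,
>>                                  disk_over_commit)
>>         return _cold_migrate(context, instance, scheduler_hints)
>>
>> The --hints and --disk-over-commit flags would then just become optional
>> arguments to that one call rather than reasons to pick a different command.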
>>
>> Best,
>> -jay
>>
>> On 02/18/2013 07:44 AM, John Garbutt wrote:
>> > This reminds me again of the differences between Migrate and
>> > Live-Migrate API calls.
>> > I think having the ability, in both cases, to pass scheduler hints makes
>> > a lot of sense.
>> >
>> > I am thinking about admins and maintenance rather than end-users.
>> >
>> > So +1 to most of Alex's points.
>> >
>> > John
>> >
>> > On 16 February 2013 03:46, Michael Basnight <mbasnight at gmail.com> wrote:
>> >>
>> >> On Feb 15, 2013, at 9:35 PM, Michael J Fork wrote:
>> >>
>> >>> Adding general and operators for additional feedback.
>> >>>
>> >>> Michael J Fork/Rochester/IBM wrote on 02/15/2013 10:59:46 AM:
>> >>>
>> >>>> From: Michael J Fork/Rochester/IBM
>> >>>> To: openstack-dev at lists.openstack.org,
>> >>>> Date: 02/15/2013 10:59 AM
>> >>>> Subject: Optionally force instances to "stay put" on resize
>> >>>>
>> >>>> The patch for the configurable-resize-placement blueprint
>> >>>> (https://blueprints.launchpad.net/nova/+spec/configurable-resize-placement)
>> >>>> has generated discussion in the review and needs to be brought to
>> >>>> the mailing list for broader feedback.
>> >>>>
>> >>>> tl;dr would others find it useful to add a new config option
>> >>>> "resize_to_same_host" with values "allow", "require", "forbid" that
>> >>>> deprecates "allow_resize_to_same_host" (whose true and false values
>> >>>> are functionally equivalent to "allow" and "forbid")?  Existing use
>> >>>> cases and default behaviors are retained unchanged.  The new use case,
>> >>>> "resize_to_same_host = require", retains the exact same external API
>> >>>> semantics but ensures that no user action can cause a VM migration
>> >>>> (and the network traffic that comes with it).  An administrator can
>> >>>> still perform a manual migration that would allow a subsequent resize
>> >>>> to succeed.  This patch would be most useful in environments with
>> >>>> 1GbE networking or with large ephemeral disks.
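>> >>>>
>> >>>> For illustration only, the new option could be declared roughly like
>> >>>> this (a sketch, not the actual patch; the cfg import path differs
>> >>>> between Nova trees):
>> >>>>
>> >>>>     from oslo.config import cfg
>> >>>>
>> >>>>     CONF = cfg.CONF
>> >>>>     CONF.register_opt(cfg.StrOpt(
>> >>>>         'resize_to_same_host',
>> >>>>         default='allow',
>> >>>>         help='Resize placement policy: allow, forbid or require '
>> >>>>              'keeping the instance on its current host'))
>> >>>>
>> >>>>     def resize_placement_policy():
>> >>>>         # reject anything outside the three documented values
>> >>>>         value = CONF.resize_to_same_host
>> >>>>         if value not in ('allow', 'forbid', 'require'):
>> >>>>             raise ValueError('resize_to_same_host must be allow, '
>> >>>>                              'forbid or require, got %r' % value)
>> >>>>         return value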
>> >>>>
>> >>>> Blueprint Description
>> >>>>
>> >>>>> Currently OpenStack has a boolean "allow_resize_to_same_host"
>> >>>>> config option that constrains
>> >>>>> placement during resize. When this value is false, the
>> >>>>> ignore_hosts option is passed to the scheduler.
>> >>>>> When this value is true, no options are passed to the scheduler
>> >>>>> and the current host can be
>> >>>>> considered. In some use cases - e.g. PowerVM - a third option of
>> >>>>> "require same host' is desirable.
>> >>>>>
>> >>>>> This blueprint will deprecate the "allow_resize_to_same_host"
>> >>>>> config option and replace it with
>> >>>>> "resize_to_same_host" that supports 3 values - allow, forbid,
>> >>>>> require. Allow is equivalent to true in the
>> >>>>> current use case (i.e. no scheduler hint, the current host is
>> >>>>> considered), forbid to false in the current use case
>> >>>>> (i.e. the ignore_hosts scheduler hint is set), and require forces
>> >>>>> the same host through the use of the
>> >>>>> force_hosts scheduler hint.
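>> >>>>
>> >>>> In other words, the three values would map onto the scheduler filter
>> >>>> properties roughly as below (an illustrative sketch of the mapping,
>> >>>> not the exact code in the patch):
>> >>>>
>> >>>>     def resize_filter_properties(policy, current_host):
>> >>>>         filter_properties = {}
>> >>>>         if policy == 'forbid':
>> >>>>             # same effect as allow_resize_to_same_host=False today
>> >>>>             filter_properties['ignore_hosts'] = [current_host]
>> >>>>         elif policy == 'require':
>> >>>>             # pin the resized instance to the host it is already on
>> >>>>             filter_properties['force_hosts'] = [current_host]
>> >>>>         # 'allow': no constraint, the current host stays a candidate
>> >>>>         return filter_properties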
>> >>>>
>> >>>> To avoid incorrectly paraphrasing others, the review comments
>> >>>> against the change are reproduced below in their entirety, followed
>> >>>> by my responses to those concerns.  The question we are looking to
>> >>>> answer: would others find this functionality useful and/or believe
>> >>>> that OpenStack should have this option?
>> >>>>
>> >>>> Comments from https://review.openstack.org/#/c/21139/:
>> >>>>
>> >>>>> I still think this is a bad idea. The only reason the flag was
>> >>>>> there in the first place was so we could
>> >>>>> run tempest on devstack in the gate and test resize. Semantically
>> >>>>> this changes the meaning of resize
>> >>>>> in a way that I don't think should be done.
>> >>>>
>> >>>>> I understand what the patch does, and I even think it appears to
>> >>>>> be functionally correct based on
>> >>>>> what the intention appears to be. However, I'm not convinced that
>> >>>>> the option is a useful addition.
>> >>>>>
>> >>>>> First, it really just doesn't seem in the spirit of OpenStack or
>> >>>>> "cloud" to care this much about where
>> >>>>> the instance goes. The existing option was only a hack
>> >>>>> for testing, not something expected
>> >>>>> for admins to care about.
>> >>>>>
>> >>>>> If this really *is* something admins need to care about, I'd like
>> >>>>> to better understand why. Further, if
>> >>>>> that's the case, I'm not sure a global config option is the right
>> >>>>> way to go about it. I think it may make
>> >>>>> more sense to have this be API driven. I'd like to see some
>> >>>>> thoughts from others on this point.
>> >>>>
>> >>>>> "I completely agree with the "spirit of cloud" argument. I further
>> >>>>> think that exposing anything via the
>> >>>>> API that would support this (i.e. giving the users control or even
>> >>>>> indication of where their instance lands)
>> >>>>> is a dangerous precedent to set.
>> >>>>>
>> >>>>> I tend to think that this use case is so small and specialized
>> >>>>> that it belongs in some other sort of policy
>> >>>>> implementation, and definitely not as yet-another-config-option to
>> >>>>> be exposed to the admins. That, or in
>> >>>>> some other project entirely :)"
>> >>>>
>> >>>> and my response to those concerns:
>> >>>>
>> >>>>> I agree this is not an 80% use case, or probably even that popular
>> >>>>> in the other 20%, but resize today
>> >>>>> is the only user-facing API that can trigger the migration of a VM
>> >>>>> to a new machine. In some environments,
>> >>>>> this network traffic is undesirable - especially with 1GbE - and an
>> >>>>> administrator may want to control it explicitly.
>> >>>>> In this implementation, an admin can still invoke a
>> >>>>> migration manually to allow the resize to
>> >>>>> succeed. I would point to the Island work by Sina as an example:
>> >>>>> they wrote an entire Cinder driver
>> >>>>> designed to minimize network traffic.
>> >>>>>
>> >>>>> I agree with the point above that exposing this on an end-user API
>> >>>>> is not correct; users should not know
>> >>>>> or care where their instance goes. However, a cloud operator should
>> >>>>> be able to have that level of control,
>> >>>>> and this option puts it in their hands.
>> >>>>>
>> >>>>> Obviously this option would need to be documented so that
>> >>>>> administrators can decide if they need to change it,
>> >>>>> but it certainly wouldn't be the default. The expectation is that it
>> >>>>> would be of use in smaller installations or enterprise
>> >>>>> use cases more often than in service provider environments.
>> >>>>>
>> >>>>> Additionally, it continues to honor the existing resize API
>> >>>>> contract.
>> >>>>
>> >>>> An additional use case - beyond 1GbE - is an environment that uses
>> >>>> large ephemeral disks.
>> >>>
>> >>>> Would others find this functionality useful and/or believe that
>> >>>> OpenStack should have this option?  Again, the API contract is
>> >>>> unchanged, and it gives a cloud operator an additional level of
>> >>>> control over the movement of instances.  It would not be the default
>> >>>> behavior, but rather enabled by an administrator depending on their
>> >>>> specific use cases, requirements, and environment.
>> >>>>
>> >>>> Thanks.
>> >>>>
>> >>>> Michael
>> >>>>
>> >>>> -------------------------------------------------
>> >>>> Michael Fork
>> >>>> OpenStack Architect, Cloud Solutions and OpenStack Development
>> >>>> IBM Systems & Technology Group
>> >>>
>> >> Not that it's in trunk right now, but openvz allows for online
>> >> memory resizing, so resizing to the same host is optimal. Personally
>> >> I'm not sure the three-way switch, so to speak, is needed, but I would
>> >> like to see allow_resize_to_same_host persist for container-based
>> >> technologies. I can't say if lxc allows this, but maybe someone else
>> >> can speak to that.
>> >
>> >
>> >
>>
>


