[Openstack] Pike NOVA Disable and Live Migrate all instances.

Steven D. Searles SSearles at zimcom.net
Wed Sep 20 14:59:33 UTC 2017


Done, thanks for the assistance Chris and everyone. 

https://bugs.launchpad.net/nova/+bug/1718455



Steven Searles





On 9/20/17, 10:44 AM, "Chris Friesen" <chris.friesen at windriver.com> wrote:

>
>I think that points to a problem in nova.  Could you open a bug at 
>"bugs.launchpad.net/nova/+filebug" and report the bug number in this thread?
>
>Thanks,
>Chris
>
>On 09/19/2017 10:42 PM, Steven D. Searles wrote:
>> Chris, you are definitely on to something here.  When I create the instances individually, this condition does NOT occur.  I confirmed this by creating 20 instances with individual openstack server create calls, all on the same host.  I then set the host to disabled in Horizon and used the Migrate Host button.  This time the scheduler worked as expected: it migrated one instance at a time, as specified by max_concurrent_live_migrations=1, and queued the rest until they had all completed and the host was empty.
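>>
>> In shell terms the repro was roughly the following (a sketch: the flavor and
>> image names and the host name compute01 are placeholders, pinning via the
>> availability zone is just one way to land everything on one host, and the
>> final step was done through the Horizon Migrate Host action rather than the
>> CLI):
>>
>>     # create 20 instances one at a time, all pinned to the same compute host
>>     for i in $(seq 1 20); do
>>         openstack server create --flavor m1.small --image cirros \
>>             --availability-zone nova:compute01 test-$i
>>     done
>>
>>     # then disable the compute service on that host before migrating
>>     openstack compute service set --disable compute01 nova-compute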
>>
>> Steven Searles
>>
>>
>>
>> -----Original Message-----
>> From: Steven D. Searles [mailto:SSearles at zimcom.net]
>> Sent: Wednesday, September 20, 2017 12:16 AM
>> To: Chris Friesen <chris.friesen at windriver.com>; openstack at lists.openstack.org
>> Subject: Re: [Openstack] Pike NOVA Disable and Live Migrate all instances.
>>
>> I did.  I will spawn a few singles and see if it does the same thing.
>>
>>
>> Steven Searles
>>
>>
>>
>>
>> -----Original Message-----
>> From: Chris Friesen [mailto:chris.friesen at windriver.com]
>> Sent: Tuesday, September 19, 2017 11:17 PM
>> To: openstack at lists.openstack.org
>> Subject: Re: [Openstack] Pike NOVA Disable and Live Migrate all instances.
>>
>> On 09/19/2017 05:21 PM, Steven D. Searles wrote:
>>> Hello everyone, and thanks in advance.  I have OpenStack Pike
>>> (KVM, FC-SAN/Cinder) installed in our lab for testing before upgrade
>>> and am seeing a possible issue with disabling a host and live
>>> migrating the instances off via the Horizon interface.  I can migrate
>>> the instances individually via the OpenStack client without issue, so
>>> it looks like I might be missing something relating to concurrent jobs
>>> in my nova config?  Interestingly enough, when a host migration is
>>> attempted via Horizon, all of the instance migrations fail.  Migrating
>>> a single instance through the Horizon interface does work.  Below is
>>> what I am seeing in the scheduler log on the controller when trying to
>>> live migrate all instances from a disabled host.  I believe the last
>>> line is the obvious issue, but I cannot find a nova option that seems
>>> to relate to it.  Can anyone point me in the right direction?
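>>>
>>> (For what it's worth, a single live migration from the CLI looks roughly
>>> like this; the target host compute02 and the server name are placeholders:)
>>>
>>>     openstack server migrate --live compute02 test-instance
>>>     # or, with the legacy nova client:
>>>     nova live-migration test-instance compute02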
>>
>> There's a default limit of 1 outgoing live migration per compute node.  I don't think that's the whole issue here though.
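>>
>> (That limit is the max_concurrent_live_migrations option, which lives in the
>> [DEFAULT] section of nova.conf on the compute nodes; a minimal sketch, where
>> 1 is also the built-in default:)
>>
>>     [DEFAULT]
>>     # maximum number of outgoing live migrations to run in parallel on this
>>     # compute node; 0 means unlimited
>>     max_concurrent_live_migrations = 1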
>>
>>> 2017-09-19 19:02:30.588 19741 DEBUG nova.scheduler.filter_scheduler
>>> [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b
>>> 723aa12337a44f818b6d1e1a59f16e49 - default default] There are 1 hosts
>>> available but 10 instances requested to build. select_destinations
>>> /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:101
>>
>> It's unclear to me why it's trying to schedule 10 instances all at once.  Did you originally create all the instances as part of a single boot request?
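>>
>> (By a "single boot request" I mean a multi-create call, e.g. something like
>> the sketch below with placeholder flavor/image names; in Horizon this would
>> be a Launch Instance with Count set to 10.  The request spec saved for such
>> a boot records the total instance count, which is my suspicion for why the
>> scheduler later reports 10 instances requested:)
>>
>>     # one boot request that creates 10 instances at once
>>     openstack server create --flavor m1.small --image cirros \
>>         --min 10 --max 10 batch-test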
>>
>> Chris
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
