[openstack-dev] [nova] Problem with Quota and servers spawned in groups

Chris Friesen chris.friesen at windriver.com
Thu Nov 17 06:55:14 UTC 2016

On 11/17/2016 12:27 AM, Chris Friesen wrote:
> On 11/16/2016 03:55 PM, Sławek Kapłoński wrote:
>> As I said before, I was testing it and I didn't have instances in Error
>> state. Can you maybe check it once again on the current master branch?
> I don't have a master devstack handy...will try and set one up.  I just tried on
> a stable/mitaka devstack--I bumped up the quotas and ran:
> nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec --min-count 1
> --max-count 100 blah
> All the instances went to the "scheduling" state, the first 21 instances
> scheduled successfully then one failed the RamFilter.  I ended up with 100
> instances all in the "error" state.

I located a running devstack based on master; the nova repo was at commit 
633c817d from Nov 12.

It behaved the same...I jacked up the quotas to give it space, then ran:

nova boot --flavor m1.xlarge --image cirros-0.3.4-x86_64-uec --min-count 1 
--max-count 20 blah

The first nine instances scheduled successfully, the next one failed the 
RamFilter, and all the instances went to the "error" state.

This is what we'd expect: in ComputeTaskManager.build_instances(), if the call 
to self._schedule_instances() raises an exception, we hit the "except" clause 
and loop over all the instances, setting each one to the error state.  And down 
in FilterScheduler.select_destinations() we raise an exception if we couldn't 
schedule all the requested instances:

if len(selected_hosts) < num_instances:
    reason = _('There are not enough hosts available.')
    raise exception.NoValidHost(reason=reason)
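The all-or-nothing behavior described above can be sketched as a simplified, 
self-contained simulation.  This is not actual nova code; NoValidHost, 
schedule_instances(), build_instances(), and the single-host capacity model 
are illustrative stand-ins for the real scheduler machinery:

```python
class NoValidHost(Exception):
    """Stand-in for nova's exception.NoValidHost."""


def schedule_instances(num_instances, host_capacity):
    """Pick a host slot per instance; mirrors the check quoted above:
    if fewer hosts were selected than instances requested, raise."""
    selected_hosts = []
    remaining = host_capacity
    for _ in range(num_instances):
        if remaining <= 0:
            break
        selected_hosts.append('host-1')
        remaining -= 1
    if len(selected_hosts) < num_instances:
        raise NoValidHost('There are not enough hosts available.')
    return selected_hosts


def build_instances(instances, host_capacity):
    """Mirrors the described ComputeTaskManager.build_instances() flow:
    if scheduling raises, the except clause marks *every* instance in
    the request as "error", even ones that would have fit."""
    states = {}
    try:
        schedule_instances(len(instances), host_capacity)
        for inst in instances:
            states[inst] = 'active'
    except NoValidHost:
        for inst in instances:
            states[inst] = 'error'
    return states


# 20 instances requested, but capacity for only 9 (like the m1.xlarge
# test above, where nine scheduled and the tenth failed the RamFilter):
states = build_instances(['vm-%d' % i for i in range(20)], host_capacity=9)
# All 20 end up in the "error" state.
```

Note that with min-count/max-count semantics one might instead expect the 
scheduler to accept the nine instances that fit, but the exception path makes 
the whole request fail as a unit.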
