[Openstack-operators] How to remove a Zone from openstack

Alex Leonhardt aleonhardt.py at gmail.com
Fri Jan 9 15:44:05 UTC 2015


Hi,

So I tried this, but it doesn't seem to work: I still get the "unable to
find host" error for the instance :| when using a flavour that is pinned
to a specific zone.

So I create a new aggregate and zone:

nova aggregate-create C3 C3
+----+------+-------------------+-------+-------------------------+
| Id | Name | Availability Zone | Hosts | Metadata                |
+----+------+-------------------+-------+-------------------------+
| 5  | C3   | C3                |       | 'availability_zone=C3'  |
+----+------+-------------------+-------+-------------------------+

then I add a host to that aggregate:

nova aggregate-add-host C3 stack24.internal
+----+------+-------------------+--------------------+-------------------------+
| Id | Name | Availability Zone | Hosts              | Metadata                |
+----+------+-------------------+--------------------+-------------------------+
| 5  | C3   | C3                | 'stack24.internal' | 'availability_zone=C3'  |
+----+------+-------------------+--------------------+-------------------------+

then I set the metadata for the aggregate:

nova aggregate-set-metadata C3 not-controller=true
+----+------+-------------------+--------------------+-----------------------------------------------+
| Id | Name | Availability Zone | Hosts              | Metadata                                      |
+----+------+-------------------+--------------------+-----------------------------------------------+
| 5  | C3   | C3                | 'stack24.internal' | 'availability_zone=C3', 'not-controller=true' |
+----+------+-------------------+--------------------+-----------------------------------------------+

then I set the flavour to be locked to that zone/aggregate:

nova flavor-key 558213d9-7641-4f9a-a9c4-2685e75f268d set not-controller=true
nova flavor-show 558213d9-7641-4f9a-a9c4-2685e75f268d
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 10                                   |
| extra_specs                | {"not-controller": "true"}           |
| id                         | 558213d9-7641-4f9a-a9c4-2685e75f268d |
| name                       | t1.micro                             |
| os-flavor-access:is_public | True                                 |
| ram                        | 768                                  |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+
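
For reference, the matching that AggregateInstanceExtraSpecsFilter does boils
down to the following (a simplified Python sketch of the idea, not nova's
actual implementation):

```python
# Simplified sketch of AggregateInstanceExtraSpecsFilter's decision: a host
# passes only if every extra_spec on the flavor is matched by the metadata of
# at least one aggregate the host belongs to. Not the real nova code.

def host_passes(host_aggregates_metadata, flavor_extra_specs):
    """host_aggregates_metadata: list of metadata dicts, one per aggregate
    the host is a member of; flavor_extra_specs: the flavor's extra_specs."""
    for key, wanted in flavor_extra_specs.items():
        # Values this host's aggregates advertise for the key, if any.
        values = [md[key] for md in host_aggregates_metadata if key in md]
        if wanted not in values:
            return False  # a required spec is missing or mismatched
    return True

# stack24.internal is in aggregate C3 with the metadata set above;
# other_host stands for a hypervisor left in the default "nova" zone.
stack24 = [{"availability_zone": "C3", "not-controller": "true"}]
other_host = [{"availability_zone": "nova"}]

flavor = {"not-controller": "true"}  # t1.micro's extra_specs
print(host_passes(stack24, flavor))      # True
print(host_passes(other_host, flavor))   # False
```

The catch: this matching only happens at all if AggregateInstanceExtraSpecsFilter
is listed in scheduler_default_filters; without it the flavor's extra_specs are
never consulted.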

And when I try to create an instance, all I get in the logs (debug is
disabled) is:

2015-01-09 15:26:39.149 28901 WARNING nova.scheduler.driver
[req-8d50f7ab-9f0c-4ac9-b218-c95450d46895 Alex Leonhardt
5cec227f266b4e3d85b106daae5deed8] [instance:
3b300f3a-2b04-4063-9bbb-b8295c8729b8] Setting instance to ERROR state.

The HV I expected this to be scheduled for (stack24.internal) does not have
any of the UUIDs in its logs.
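
That pattern, instance straight to ERROR with nothing on any compute node, is
what you see when the filter chain eliminates every host and the scheduler
raises NoValidHost. Schematically (a hypothetical sketch, not nova's code):

```python
# Hypothetical sketch of a filter-style scheduler: each filter prunes the
# candidate host list; an empty result is what nova surfaces as NoValidHost,
# and the instance is set to ERROR without any hypervisor being contacted.

def run_filters(hosts, filters):
    for f in filters:
        hosts = [h for h in hosts if f(h)]
    return hosts

hosts = ["stack24.internal", "stack25.internal"]
# e.g. an availability-zone filter that passes everything, followed by an
# extra-specs filter that rejects every host because its key matches no
# aggregate metadata
survivors = run_filters(hosts, [lambda h: True, lambda h: False])
print(survivors)  # [] -> NoValidHost, instance goes to ERROR
```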

FWIW, I am running Icehouse on CentOS 6.5:

openstack-neutron-2014.1.1-3.el6.noarch
openstack-nova-conductor-2014.1.1-2.el6.noarch
openstack-dashboard-2014.1.1-1.el6.noarch
openstack-nova-api-2014.1.1-2.el6.noarch
openstack-utils-2014.1-3.el6.noarch
openstack-neutron-openvswitch-2014.1.1-3.el6.noarch
openstack-nova-compute-2014.1.1-2.el6.noarch
openstack-nova-console-2014.1.1-2.el6.noarch
python-django-openstack-auth-1.1.5-1.el6.noarch
openstack-keystone-2014.1.1-1.el6.noarch
openstack-nova-cert-2014.1.1-2.el6.noarch
openstack-glance-2014.1.1-1.el6.noarch
openstack-nova-common-2014.1.1-2.el6.noarch
openstack-nova-novncproxy-2014.1.1-2.el6.noarch
openstack-nova-scheduler-2014.1.1-2.el6.noarch

Any ideas?

Thanks!
Alex





On Fri Jan 09 2015 at 11:06:26 Alex Leonhardt <aleonhardt.py at gmail.com>
wrote:

> Ah, the link helped - I was trying to get the extra spec filter to work
> the other day but messed it up - I think I forgot a nova config option:
>
> scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
>
> So I'll try that now that should help me prevent things going onto that
> zone / host / aggregate.
>
> Thanks!
>
> Alex
>
>
> On Fri Jan 09 2015 at 11:02:45 Alex Leonhardt <aleonhardt.py at gmail.com>
> wrote:
>
>> Hi Belmiro,
>>
>> Thanks that helps - where is that config option to be set?
>>
>> We do have 2 new aggregates which are zoneA and zoneB, both with 4-6
>> hypervisors at the moment. But somehow the default "nova" zone still exists
>> with a host that I want to keep "private" (or unused for now). Any ideas ?
>>
>> Thanks!
>>
>> Alex
>>
>>
>> On Fri Jan 09 2015 at 10:38:42 Belmiro Moreira <
>> moreira.belmiro.email.lists at gmail.com> wrote:
>>
>>> Hi Alex,
>>> you need to create "host aggregates" to define other availability zones.
>>> For more info see:
>>> http://docs.openstack.org/havana/config-reference/content/host-aggregates.html
>>>
>>> The default availability zone can be changed with the configuration
>>> option:
>>> default_availability_zone
>>>
>>> Belmiro
>>>
>>> On Fri, Jan 9, 2015 at 10:38 AM, Alex Leonhardt
>>> <aleonhardt.py at gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> we seem to get a default "nova" zone which is available to all users to
>>>> create VMs on - that by itself is fine - but I want the users limited to
>>>> use other zones only, and not the default nova zone. Is it possible to do
>>>> that ? If so, where do I do that and how ?
>>>>
>>>> Any links / docs would be much appreciated!
>>>>
>>>> Thanks!
>>>> Alex
>>>>
>>>> _______________________________________________
>>>> OpenStack-operators mailing list
>>>> OpenStack-operators at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>>
>>>>
>>>