[Openstack] Problem when Scheduling across zones

Pedro Navarro Pérez pednape at gmail.com
Wed Oct 5 13:40:58 UTC 2011


Any suggestions?

2011/10/4 Pedro Navarro Pérez <pednape at gmail.com>:
> Here is the nova zone-list output:
>
> nova zone-list
> +----+------+-----------+----------------------------------+---------------+--------------+
> | ID | Name | Is Active |             API URL              | Weight Offset | Weight Scale |
> +----+------+-----------+----------------------------------+---------------+--------------+
> | 1  | h1   | True      | http://192.168.124.53:8774/v1.1/ |               |              |
> +----+------+-----------+----------------------------------+---------------+--------------+
>
> Thanks for your help!
>
> On Mon, Oct 3, 2011 at 8:44 PM, Sandy Walsh <sandy.walsh at rackspace.com> wrote:
>> You seem to be doing things correctly.
>>
>> Can you paste the output from 'nova zone-list' in the parent zone please?
>>
>> -Sandy
>> ________________________________________
>> From: openstack-bounces+sandy.walsh=rackspace.com at lists.launchpad.net [openstack-bounces+sandy.walsh=rackspace.com at lists.launchpad.net] on behalf of Pedro Navarro Pérez [pednape at gmail.com]
>> Sent: Monday, October 03, 2011 8:30 AM
>> To: openstack at lists.launchpad.net
>> Subject: [Openstack] Problem when Scheduling across zones
>>
>> Hi all,
>>
>> I'm about to test the scheduling-across-zones functionality in Diablo,
>> but the run instance command is not propagated correctly to the
>> child zones.
>>
>> My environment:
>>
>> 3 VMs with Diablo installed.
>>
>> PARENT ZONE: Europe1 [192.168.124.47]
>>                               |
>>                               |
>>       CHILD ZONE: Huddle1 [192.168.124.53]
>>                               |
>>                               |
>>               HOST: Machine1 [192.168.124.44]
>>
>> Configuration and commands in Machine1:
>>
>> --dhcpbridge_flagfile=/etc/nova/nova.conf
>> --dhcpbridge=/usr/bin/nova-dhcpbridge
>> --logdir=/var/log/nova
>> --state_path=/var/lib/nova
>> --lock_path=/var/lock/nova
>> --flagfile=/etc/nova/nova-compute.conf
>> --force_dhcp_release=True
>> --use_deprecated_auth
>> --verbose
>> --sql_connection=mysql://novadbuser:novaDBsekret@192.168.124.53/nova
>> --network_manager=nova.network.manager.FlatDHCPManager
>> --flat_network_bridge=br100
>> --flat_injected=False
>> --flat_interface=eth3
>> --public_interface=eth3
>> --vncproxy_url=http://192.168.124.53:6080
>> --daemonize=1
>> --rabbit_host=192.168.124.53
>> --osapi_host=192.168.124.53
>> --ec2_host=192.168.124.53
>> --image_service=nova.image.glance.GlanceImageService
>> --glance_api_servers=192.168.124.53:9292
>> --use_syslog
>> --libvirt_type=qemu
>>
>> Configuration and commands in Huddle1:
>>
>> --dhcpbridge_flagfile=/etc/nova/nova.conf
>> --dhcpbridge=/usr/bin/nova-dhcpbridge
>> --logdir=/var/log/nova
>> --state_path=/var/lib/nova
>> --lock_path=/var/lock/nova
>> --flagfile=/etc/nova/nova-compute.conf
>> --force_dhcp_release=True
>> --use_deprecated_auth
>> --verbose
>> --sql_connection=mysql://novadbuser:novaDBsekret@192.168.124.53/nova
>> --network_manager=nova.network.manager.FlatDHCPManager
>> --flat_network_bridge=br100
>> --flat_injected=False
>> --flat_interface=eth3
>> --public_interface=eth3
>> --vncproxy_url=http://192.168.124.53:6080
>> --daemonize=1
>> --rabbit_host=192.168.124.53
>> --osapi_host=192.168.124.53
>> --ec2_host=192.168.124.53
>> --image_service=nova.image.glance.GlanceImageService
>> --glance_api_servers=192.168.124.53:9292
>> --use_syslog
>> --libvirt_type=qemu
>> --allow_admin_api=true
>> --enable_zone_routing=true
>> --zone_name=h1
>> --build_plan_encryption_key=c286696d887c9aa0611bbb3e2025a478
>> --scheduler_driver=nova.scheduler.base_scheduler.BaseScheduler
>> --default_host_filter=nova.scheduler.filters.AllHostsFilter
>>
>>>> sudo nova-manage service disable h1.ostack.ds nova-compute
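>>
>> (Aside: as I understand it, the AllHostsFilter set above is just a
>> pass-through over whatever the zone manager reports. Roughly the idea,
>> in an untested sketch where the class and method names are my own and
>> not copied from the Diablo source:)
>>
>> # Untested sketch of a pass-through host filter in the spirit of
>> # nova.scheduler.filters.AllHostsFilter. Names are illustrative only.
>> class PassThroughHostFilter(object):
>>     """Let every compute host the zone manager knows about through."""
>>
>>     def filter_hosts(self, service_states, query=None):
>>         # service_states: {hostname: capabilities} as published by
>>         # nova-compute; no host is rejected.
>>         return list(service_states.iteritems())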
>>
>> Configuration and commands in Europe1:
>>
>> --dhcpbridge_flagfile=/etc/nova/nova.conf
>> --dhcpbridge=/usr/bin/nova-dhcpbridge
>> --logdir=/var/log/nova
>> --state_path=/var/lib/nova
>> --lock_path=/var/lock/nova
>> --flagfile=/etc/nova/nova-compute.conf
>> --force_dhcp_release=True
>> --use_deprecated_auth
>> --verbose
>> --sql_connection=mysql://novadbuser:novaDBsekret@192.168.124.47/nova
>> --network_manager=nova.network.manager.FlatDHCPManager
>> --flat_network_bridge=br100
>> --flat_injected=False
>> --flat_interface=eth2
>> --public_interface=eth2
>> --vncproxy_url=http://192.168.124.47:6080
>> --daemonize=1
>> --rabbit_host=192.168.124.47
>> --osapi_host=192.168.124.47
>> --ec2_host=192.168.124.47
>> --image_service=nova.image.glance.GlanceImageService
>> --glance_api_servers=192.168.124.47:9292
>> --use_syslog
>> --libvirt_type=qemu
>> --allow_admin_api=true
>> --enable_zone_routing=true
>> --zone_name=Europe1
>> --build_plan_encryption_key=on3u4jvvbtnpkvi075vmcu88wzgpgnyp
>> --scheduler_driver=nova.scheduler.base_scheduler.BaseScheduler
>>
>>>> nova zone-add --zone_username cloudroot --password bf22b691-2581-4b2c-80e3-808fdd5dad4c http://192.168.124.53:8774/v1.1/
>>
>>>> nova zone-boot --image 3 --flavor 1 test
>>
>> The nova-scheduler.log shows that:
>>
>> 1. The zone has been successfully detected:
>>
>> 2011-10-03 13:16:02,009 DEBUG nova [-] Polling zone:
>> http://192.168.124.53:8774/v1.1/ from (pid=1118) _poll_zone
>> /usr/lib/python2.7/dist-packages/nova/scheduler/zone_manager.py:100
>> 2011-10-03 13:16:02,047 DEBUG novaclient.client [-] REQ: curl -i
>> http://192.168.124.53:8774/v1.1/ -X GET -H "X-Auth-Key:
>> bf22b691-2581-4b2c-80e3-808fdd5dad4c" -H "X-Auth-User: cloudroot" -H
>> "User-Agent: python-novaclient"
>>  from (pid=1118) http_log
>> /usr/lib/python2.7/dist-packages/novaclient/client.py:71
>> 2011-10-03 13:16:02,047 DEBUG novaclient.client [-] RESP:{'status':
>> '204', 'content-length': '0', 'x-auth-token':
>> '40c6cb586ae04e2bf408da0e1f0a79a94ceed53b', 'x-cdn-management-url':
>> '', 'x-server-management-url':
>> 'http://192.168.124.53:8774/v1.1/cloudproject', 'date': 'Mon, 03 Oct
>> 2011 11:16:00 GMT', 'x-storage-url': '', 'content-type': 'text/plain;
>> charset=UTF-8'}
>>  from (pid=1118) http_log
>> /usr/lib/python2.7/dist-packages/novaclient/client.py:74
>> 2011-10-03 13:16:02,209 DEBUG novaclient.client [-] REQ: curl -i
>> http://192.168.124.53:8774/v1.1/cloudproject/zones/info?fresh=1317640562.01
>> -X GET -H "User-Agent: python-novaclient" -H "X-Auth-Token:
>> 40c6cb586ae04e2bf408da0e1f0a79a94ceed53b"
>>  from (pid=1118) http_log
>> /usr/lib/python2.7/dist-packages/novaclient/client.py:71
>> 2011-10-03 13:16:02,209 DEBUG novaclient.client [-] RESP:{'date':
>> 'Mon, 03 Oct 2011 11:16:01 GMT', 'status': '200', 'content-length':
>> '78', 'content-type': 'application/json', 'content-location':
>> 'http://192.168.124.53:8774/v1.1/cloudproject/zones/info?fresh=1317640562.01'}
>> {"zone": {"hypervisor": "xenserver;kvm", "os": "linux;windows",
>> "name": "h1"}}
>>
>>
>> 2. But the run_instance call is not forwarded to the child zone correctly:
>>
>> 2011-10-03 13:16:43,266 DEBUG nova.scheduler.abstract_scheduler [-]
>> Attempting to build 1 instance(s) from (pid=1118)
>> schedule_run_instance
>> /usr/lib/python2.7/dist-packages/nova/scheduler/abstract_scheduler.py:226
>>
>>
>> 3. It seems that after executing the zone-boot command the scheduler
>> state is not correct:
>>
>>>> sudo nova-manage service list
>> Binary           Host                                 Zone             Status     State Updated_At
>> nova-compute     europe1.ostack.ds                    nova             disabled   :-)   2011-10-03 11:28:19
>> nova-scheduler   europe1.ostack.ds                    nova             enabled    XXX   2011-10-03 11:16:35
>> nova-network     europe1.ostack.ds                    nova             enabled    :-)   2011-10-03 11:28:19
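>>
>> (The XXX next to nova-scheduler looks like a stale heartbeat: its
>> Updated_At stopped moving right around the zone-boot, while the other
>> services kept reporting. Untested Python 2 sketch to check that straight
>> in the DB; the services table column names are from memory and may be
>> off, the connection details are the ones from --sql_connection above:)
>>
>> # Untested: how stale is the nova-scheduler heartbeat in the nova DB?
>> import MySQLdb
>>
>> db = MySQLdb.connect(host='192.168.124.47', user='novadbuser',
>>                      passwd='novaDBsekret', db='nova')
>> cur = db.cursor()
>> cur.execute("SELECT host, `binary`, updated_at, disabled "
>>             "FROM services WHERE `binary` = 'nova-scheduler'")
>> for host, binary, updated_at, disabled in cur.fetchall():
>>     print host, binary, updated_at, ('disabled' if disabled else 'enabled')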
>>
>> Can anyone please help me? Any suggestions?
>>
>> Thanks in advance,
>>
>> Pedro Navarro Pérez
>>
>>
>>
>



