[Openstack-operators] Fail scheduling network
Samuel Winchenbach
swinchen at gmail.com
Tue Jun 25 13:26:29 UTC 2013
Hi everyone,
I am getting this warning (which seems more like an error) when trying to
launch instances:
/var/log/quantum/server.log:2013-06-24 09:34:03 WARNING
[quantum.db.agentschedulers_db] Fail scheduling network {'status':
u'ACTIVE', 'subnets': [u'19f681a4-99b0-4e45-85eb-6d08aa77cedb'], 'name':
u'demo-net', 'provider:physical_network': None, 'admin_state_up': True,
'tenant_id': u'c9ec52b4506b4d789180af84b78da5b1', 'provider:network_type':
u'gre', 'router:external': False, 'shared': False, 'id':
u'7a753d4b-b4dc-423a-98c2-79a1cbeb3d15', 'provider:segmentation_id': 2L}
The instance launches (on a different node) and seems to work correctly. I
have gone over my quantum configuration on both nodes and all the IPs and
hostnames seem to be set correctly (I can post the configs if that would
help). I really have no idea how to debug this... the warning doesn't give
me a lot to go on.
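In case it helps, the next thing I was planning to try is checking which
DHCP agents the network actually got scheduled to (the warning comes from
the agent scheduler, so I assume that is the right place to look), roughly
along these lines:

root@test1:~# quantum dhcp-agent-list-hosting-net demo-net
root@test1:~# quantum net-list-on-dhcp-agent 0dfc68a8-7321-4d6c-a266-a4ffef5f9a33

(the agent ID there is just one of my DHCP agents from the list further down).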
Here is my setup: I have three "all-in-one" nodes (test1, test2, test3) that
each run all of the services (except the l3-agent, which only runs on one
node at a time). The APIs, novnc-proxy, and mysql (actually a Percona/Galera
cluster) are load balanced with haproxy. All services have their APIs bound
to the internal IP, and haproxy is bound to a VIP that pacemaker can move
between nodes.
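To give a rough idea of the load balancing (this is a trimmed sketch with
placeholder IPs, not a verbatim copy of my config), the quantum API stanza
in haproxy is along these lines:

listen quantum_api
    bind 192.168.0.100:9696        # the pacemaker-managed VIP
    balance roundrobin
    option tcpka
    server test1 192.168.0.11:9696 check
    server test2 192.168.0.12:9696 check
    server test3 192.168.0.13:9696 check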
In the example above, a request comes into haproxy on test2. quantum-server
on test2 spits out the warning above; test1 launches the instance and it
seems to work fine (no warnings or errors). Oddly, test3 never even seems to
try to launch any instances. Here are my quantum agents and nova services:
root@test1:~# quantum agent-list
+--------------------------------------+--------------------+-------+-------+----------------+
| id                                   | agent_type         | host  | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 0dfc68a8-7321-4d6c-a266-a4ffef5f9a33 | DHCP agent         | test3 | :-)   | True           |
| 2ca7d965-c292-48a3-805c-11afebf18e20 | DHCP agent         | test1 | :-)   | True           |
| 2f2c3259-4f63-4e7e-9552-fbc2ed69281e | L3 agent           | test2 | :-)   | True           |
| 5a76fbee-47d0-4dc1-af50-24dfb6113400 | Open vSwitch agent | test1 | :-)   | True           |
| 7c6c4058-c9c2-4774-8924-ab6ba54266b3 | DHCP agent         | test2 | :-)   | True           |
| 7d01c7b2-1102-4249-85a0-7afcd9421884 | Open vSwitch agent | test2 | :-)   | True           |
| bde82424-b5ff-41b7-9d7e-35bf805cfae8 | Open vSwitch agent | test3 | :-)   | True           |
| dcb122ab-8c17-4f60-9b3a-41abfdf036c3 | L3 agent           | test1 | xxx   | True           |
+--------------------------------------+--------------------+-------+-------+----------------+
root@test1:~# nova-manage service list
Binary             Host    Zone       Status    State  Updated_At
nova-cert          test1   internal   enabled   :-)    2013-06-24 14:26:11
nova-conductor     test1   internal   enabled   :-)    2013-06-24 14:26:12
nova-consoleauth   test1   internal   enabled   :-)    2013-06-24 14:26:07
nova-scheduler     test1   internal   enabled   :-)    2013-06-24 14:26:11
nova-compute       test1   nova       enabled   :-)    2013-06-24 14:26:08
nova-cert          test2   internal   enabled   :-)    2013-06-24 14:26:06
nova-conductor     test2   internal   enabled   :-)    2013-06-24 14:26:11
nova-consoleauth   test2   internal   enabled   :-)    2013-06-24 14:26:12
nova-scheduler     test2   internal   enabled   :-)    2013-06-24 14:26:12
nova-compute       test2   nova       enabled   :-)    2013-06-24 14:26:07
nova-cert          test3   internal   enabled   :-)    2013-06-24 14:26:11
nova-consoleauth   test3   internal   enabled   :-)    2013-06-24 14:26:14
nova-scheduler     test3   internal   enabled   :-)    2013-06-24 14:26:11
nova-conductor     test3   internal   enabled   :-)    2013-06-24 14:26:12
nova-compute       test3   nova       enabled   :-)    2013-06-24 14:26:08
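One thing I have not ruled out yet is clock skew or stale heartbeats making
agents look dead to the scheduler (note the L3 agent on test1 showing "xxx"
above), so I was going to compare the clocks and the agent heartbeat
timestamps in the quantum database, roughly like this (the database name may
differ depending on the deployment):

root@test1:~# for h in test1 test2 test3; do ssh $h date; done
root@test1:~# mysql quantum -e "select host, agent_type, heartbeat_timestamp from agents;"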
Any ideas what might be happening? Again, I can post additional logs,
configs, or whatever else might help me get past this problem.
Thanks,
Sam