[Openstack] Quantum/Grizzly - Instance doesn't get IP

Nicolae Paladi n.paladi at gmail.com
Fri Sep 6 10:20:38 UTC 2013


Hi Marcelo,

I have the same issue (I'm on CentOS 6.4 though); have you found a solution?

There was a similar thread earlier:
http://openstack.redhat.com/forum/discussion/230/warning-quantum-db-agentschedulers_db-fail-scheduling-network/p1
Make sure that all agents show as up in 'quantum agent-list' (a sketch of
what healthy output might look like is below).
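
For reference, a healthy agent list should look roughly like the sketch
below (the IDs and hostnames here are placeholders, and the exact columns
depend on the client version); the important part is that the DHCP agent is
listed and its 'alive' column shows :-) rather than xxx:

root at cloud:~# quantum agent-list
+--------------------------------------+--------------------+---------+-------+----------------+
| id                                   | agent_type         | host    | alive | admin_state_up |
+--------------------------------------+--------------------+---------+-------+----------------+
| 1b4f2c3d-xxxx-xxxx-xxxx-xxxxxxxxxxxx | DHCP agent         | network | :-)   | True           |
| 2c5d3e4f-xxxx-xxxx-xxxx-xxxxxxxxxxxx | L3 agent           | network | :-)   | True           |
| 3d6e4f5a-xxxx-xxxx-xxxx-xxxxxxxxxxxx | Open vSwitch agent | network | :-)   | True           |
| 4e7f5a6b-xxxx-xxxx-xxxx-xxxxxxxxxxxx | Open vSwitch agent | c01     | :-)   | True           |
+--------------------------------------+--------------------+---------+-------+----------------+

If the DHCP agent is missing or not alive, that matches the 'No active DHCP
agents' warning further down in this thread.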

Also, in your quantum/server.log, do you get something like:
WARNING [quantum.api.extensions] Extension routed-service-insertion not
supported by any of loaded plugins

I am trying to understand whether this is a related problem or something that
can be ignored for now.

I can say that after some fiddling with the quantum DHCP agents (sketched
below) my instances were getting an IP address and I could reach them
yesterday, but apparently that wasn't very stable, and today I'm back to the
same issue.
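
For what it's worth, the "fiddling" was roughly along these lines (a sketch
only; service names differ between distros, and <network-id> /
<dhcp-agent-id> are placeholders):

# on the network node: restart the DHCP agent
service quantum-dhcp-agent restart

# on the controller: check the agents and which agent hosts the network
quantum agent-list
quantum dhcp-agent-list-hosting-net <network-id>

# if no agent is hosting the network, bind it to the DHCP agent manually
quantum dhcp-agent-network-add <dhcp-agent-id> <network-id>

That got the instances their addresses for a while, but as I said it didn't
stay stable.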

cheers,
/Nicolae


On 5 September 2013 03:42, happy idea <guolongcang.work at gmail.com> wrote:

> Are you sure you followed this page's guide?
> http://docs.openstack.org/grizzly/basic-install/apt/content/basic-install_network.html
>
>
> 2013/9/5 Marcelo Dieder <marcelodieder at gmail.com>
>
>>  Hi, yes, I have dnsmasq installed on the Network Node.
>>
>> root at network:~# apt-get install dnsmasq
>> Reading package lists... Done
>> Building dependency tree
>> Reading state information... Done
>> dnsmasq is already the newest version.
>> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
>>
>>
>> root at network:~# ps aux | grep -i dnsmasq
>>
>> dnsmasq   3807  0.0  0.1  28820   980 ?        S    15:29   0:00
>> /usr/sbin/dnsmasq -x /var/run/dnsmasq/dnsmasq.pid -u dnsmasq -r
>> /var/run/dnsmasq/resolv.conf -7
>> /etc/dnsmasq.d,.dpkg-dist,.dpkg-old,.dpkg-new
>>
>> nobody   26040  0.0  0.2  28820  1004 ?        S    15:45   0:00 dnsmasq
>> --no-hosts --no-resolv --strict-order --bind-interfaces
>> --interface=tap91e05e25-7f --except-interface=lo
>> --pid-file=/var/lib/quantum/dhcp/a8f7c937-e8d0-4952-bff6-7d364335df22/pid
>> --dhcp-hostsfile=/var/lib/quantum/dhcp/a8f7c937-e8d0-4952-bff6-7d364335df22/host
>> --dhcp-optsfile=/var/lib/quantum/dhcp/a8f7c937-e8d0-4952-bff6-7d364335df22/opts
>> --dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update
>> --leasefile-ro --dhcp-range=set:tag0,10.5.5.0,static,120s --conf-file=
>> --domain=openstacklocal
>>
>> root     26041  0.0  0.0  28792   244 ?        S    15:45   0:00 dnsmasq
>> --no-hosts --no-resolv --strict-order --bind-interfaces
>> --interface=tap91e05e25-7f --except-interface=lo
>> --pid-file=/var/lib/quantum/dhcp/a8f7c937-e8d0-4952-bff6-7d364335df22/pid
>> --dhcp-hostsfile=/var/lib/quantum/dhcp/a8f7c937-e8d0-4952-bff6-7d364335df22/host
>> --dhcp-optsfile=/var/lib/quantum/dhcp/a8f7c937-e8d0-4952-bff6-7d364335df22/opts
>> --dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update
>> --leasefile-ro --dhcp-range=set:tag0,10.5.5.0,static,120s --conf-file=
>> --domain=openstacklocal
>>
>> I restarted the dnsmasq service, but the same problem occurred when I
>> started a new instance.
>>
>> 2013-09-04 15:39:44  WARNING [quantum.db.agentschedulers_db] Fail
>> scheduling network {'status': u'ACTIVE', 'subnets':
>> [u'80b21701-4b05-4585-985a-60905ff42531'], 'name': u'public',
>> 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id':
>> u'27d2b93f11ac4e91a3edb26edb28fb6b', 'provider:network_type': u'gre',
>> 'router:external': True, 'shared': False, 'id':
>> u'b3e465b7-b5a2-45d5-8b24-aa8bea0ab0a0', 'provider:segmentation_id': 2L}
>>
>> 2013-09-04 15:47:00  WARNING [quantum.db.agentschedulers_db] Fail
>> scheduling network {'status': u'ACTIVE', 'subnets':
>> [u'80b21701-4b05-4585-985a-60905ff42531'], 'name': u'public',
>> 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id':
>> u'27d2b93f11ac4e91a3edb26edb28fb6b', 'provider:network_type': u'gre',
>> 'router:external': True, 'shared': False, 'id':
>> u'b3e465b7-b5a2-45d5-8b24-aa8bea0ab0a0', 'provider:segmentation_id': 2L}
>>
>> Thanks.
>> Marcelo Dieder
>>
>>
>> On 09/04/2013 11:15 AM, Hathaway.Jon wrote:
>>
>> Do you have dnsmasq installed? I found that it isn't installed as a
>> dependency. Without it, I never received DHCP either.
>>
>> Sent from my iPhone
>>
>> On Sep 3, 2013, at 10:31 PM, "happy idea" <guolongcang.work at gmail.com>
>> wrote:
>>
>>   You didn't install the DHCP agent; please refer to this guide:
>> https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
>>
>>
>> 2013/9/4 Marcelo Dieder <marcelodieder at gmail.com>
>>
>>>  Hi All!
>>>
>>> I have an environment with 3 hosts (Network, Controller, and Node1 (QEMU)).
>>> I created the environment based on this tutorial (
>>> http://docs.openstack.org/grizzly/basic-install/apt/content/basic-install_controller.html).
>>> My problem is when I create an instance: the instance doesn't get an IP
>>> address.
>>>
>>> checking http://169.254.169.254/2009-04-04/instance-id
>>> failed 1/20: up 187.68. request failed
>>> failed 2/20: up 190.06. request failed
>>> failed 3/20: up 192.24. request failed
>>> failed 4/20: up 194.43. request failed
>>> failed 5/20: up 196.61. request failed
>>> failed 6/20: up 198.82. request failed
>>> failed 7/20: up 201.03. request failed
>>> failed 8/20: up 203.22. request failed
>>> failed 9/20: up 205.42. request failed
>>> failed 10/20: up 207.64. request failed
>>> failed 11/20: up 209.87. request failed
>>> failed 12/20: up 212.08. request failed
>>> failed 13/20: up 214.29. request failed
>>> failed 14/20: up 216.49. request failed
>>> failed 15/20: up 218.70. request failed
>>> failed 16/20: up 220.91. request failed
>>> failed 17/20: up 223.13. request failed
>>> failed 18/20: up 225.38. request failed
>>> failed 19/20: up 227.62. request failed
>>> failed 20/20: up 229.87. request failed
>>> failed to read iid from metadata. tried 20
>>> no results found for mode=net. up 232.10. searched: nocloud configdrive ec2
>>> failed to get instanceid of datasource
>>> Starting dropbear sshd: generating rsa key... generating dsa key... OK
>>> === network info ===
>>> ifinfo: lo,up,127.0.0.1,8,::1
>>> ifinfo: eth0,up,,8,fe80::f816:3eff:fef3:2a6d
>>> === datasource: None None ===
>>>
>>>
>>> On the controller I received this warning:
>>>
>>>
>>> 2013-09-04 00:40:44  WARNING [quantum.scheduler.dhcp_agent_scheduler] No
>>> active DHCP agents
>>> 2013-09-04 00:40:44  WARNING [quantum.db.agentschedulers_db] Fail
>>> scheduling network {'status': u'ACTIVE', 'subnets':
>>> [u'80b21701-4b05-4585-985a-60905ff42531'], 'name': u'public',
>>> 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id':
>>> u'27d2b93f11ac4e91a3edb26edb28fb6b', 'provider:network_type': u'gre',
>>> 'router:external': True, 'shared': False, 'id':
>>> u'b3e465b7-b5a2-45d5-8b24-aa8bea0ab0a0', 'provider:segmentation_id': 2L}
>>>
>>> And when I executed:
>>>
>>>
>>> root at cloud:~# quantum agent-list
>>> Unknown command ['agent-list']
>>>
>>> Other commands:
>>>
>>> root at cloud:~# nova-manage service list
>>> Binary           Host   Zone      Status   State  Updated_At
>>> nova-cert        cloud  internal  enabled  :-)    2013-09-04 03:59:12
>>> nova-consoleauth cloud  internal  enabled  :-)    2013-09-04 03:59:12
>>> nova-scheduler   cloud  internal  enabled  :-)    2013-09-04 03:59:12
>>> nova-conductor   cloud  internal  enabled  :-)    2013-09-04 03:59:12
>>> nova-compute     c01    nova      enabled  :-)    2013-09-04 03:59:04
>>>
>>> root at c01:~# nova list
>>>
>>> +--------------------------------------+--------+--------+------------------------+
>>> | ID                                   | Name   | Status | Networks               |
>>> +--------------------------------------+--------+--------+------------------------+
>>> | 2c704622-1b5f-4651-9553-51aabee9090c | test29 | ACTIVE | public=xxx.xxx.xxx.xxx |
>>> +--------------------------------------+--------+--------+------------------------+
>>>
>>> I searched but couldn't find any resolution. Has anybody else had this problem?
>>>
>>> Cheers.
>>>
>>> Marcelo Dieder
>>>
>>>
>>>