[Openstack] Quantum/Grizzly - Instance doesn't get IP

happy idea guolongcang.work at gmail.com
Thu Sep 5 01:54:40 UTC 2013


Well, my suggestion is to follow this guide:
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
and reinstall your OpenStack completely.

If you are using Ubuntu 13.04: after the upgrade/dist-upgrade you have to
reboot the host. The system now has the Linux target framework integrated,
so you shouldn't install the iscsitarget and iscsitarget-dkms packages on
the controller node, and the cinder conf file should look like this:

 /etc/cinder/cinder.conf :

[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@10.10.10.51/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
iscsi_ip_address=10.10.10.51

rabbit_host = 10.10.10.51
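Note that in the broken version of this file, iscsi_helper=tgtadm had been fused onto the end of the api_paste_config line, so cinder silently lost both settings. A quick sketch of how you could sanity-check the file with Python's stdlib parser (the inline config here is illustrative, not your actual file):

```python
# Sketch: confirm cinder.conf carries iscsi_helper as its own option,
# rather than fused onto the end of api_paste_config.
import configparser

# Illustrative stand-in for /etc/cinder/cinder.conf.
conf_text = """
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_group = cinder-volumes
"""

cp = configparser.ConfigParser()
cp.read_string(conf_text)
defaults = cp["DEFAULT"]

# If the two options were fused, api_paste_config would end in
# "...api-paste.iniiscsi_helper=tgtadm" and iscsi_helper would be absent.
assert defaults["api_paste_config"] == "/etc/cinder/api-paste.ini"
assert defaults["iscsi_helper"] == "tgtadm"
print("cinder.conf options look sane")
```

On a real node you would point the parser at /etc/cinder/cinder.conf instead of the inline string, then restart the cinder services for the change to take effect.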



2013/9/4 Marcelo Dieder <marcelodieder at gmail.com>

>  Hi Happy, thanks for your reply.
>
> I checked this guide, but I already have dhcp-agent installed.
>
> root at network:~# apt-get -y install quantum-plugin-openvswitch-agent
> quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> quantum-dhcp-agent is already the newest version.
> quantum-l3-agent is already the newest version.
> quantum-metadata-agent is already the newest version.
> quantum-plugin-openvswitch-agent is already the newest version.
> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
>
> I also checked the settings in that guide and added some options I had not
> configured. The "No active DHCP agents" error no longer appears, but the
> error below (on the controller, quantum-server) still appears:
>
> 2013-09-04 08:25:48  WARNING [quantum.db.agentschedulers_db] Fail
> scheduling network {'status': u'ACTIVE', 'subnets':
> [u'80b21701-4b05-4585-985a-60905ff42531'], 'name': u'public',
> 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id':
> u'27d2b93f11ac4e91a3edb26edb28fb6b', 'provider:network_type': u'gre',
> 'router:external': True, 'shared': False, 'id':
> u'b3e465b7-b5a2-45d5-8b24-aa8bea0ab0a0', 'provider:segmentation_id': 2L}
>
> And on all hosts (compute, network and controller), I get this error:
>
> # quantum agent-list
> Unknown command ['agent-list']
>
> At my network node:
>
> tail -n 10 /var/log/openvswitch/ovs-vswitchd.log
> Sep 04 09:27:23|09614|netdev_linux|WARN|ioctl(SIOCGIFINDEX) on
> tap91e05e25-7f device failed: No such device
> Sep 04 09:27:32|09615|netdev|WARN|Dropped 253 log messages in last 12
> seconds (most recently, 1 seconds ago) due to excessive rate
> Sep 04 09:27:32|09616|netdev|WARN|failed to get flags for network device
> tap91e05e25-7f: No such device
> Sep 04 09:27:33|09617|netdev_linux|WARN|Dropped 7 log messages in last 10
> seconds (most recently, 5 seconds ago) due to excessive rate
> Sep 04 09:27:33|09618|netdev_linux|WARN|ioctl(SIOCGIFINDEX) on
> tap91e05e25-7f device failed: No such device
> Sep 04 09:27:44|09619|netdev|WARN|Dropped 231 log messages in last 12
> seconds (most recently, 1 seconds ago) due to excessive rate
> Sep 04 09:27:44|09620|netdev|WARN|failed to get flags for network device
> tap91e05e25-7f: No such device
> Sep 04 09:27:48|09621|netdev_linux|WARN|Dropped 11 log messages in last 15
> seconds (most recently, 5 seconds ago) due to excessive rate
> Sep 04 09:27:48|09622|netdev_linux|WARN|ioctl(SIOCGIFINDEX) on
> tap91e05e25-7f device failed: No such device
>
> (quantum) net-list
>
> +--------------------------------------+----------+-------------------------------------------------------+
> | id                                   | name     |
> subnets                                               |
>
> +--------------------------------------+----------+-------------------------------------------------------+
> | a8f7c937-e8d0-4952-bff6-7d364335df22 | demo-net |
> fd7f324c-25ec-4134-ab97-e827fcf12824 10.5.5.0/24      |
> | b3e465b7-b5a2-45d5-8b24-aa8bea0ab0a0 | public   |
> 80b21701-4b05-4585-985a-60905ff42531 xxx.xxx.xxx.0/24 |
>
> +--------------------------------------+----------+-------------------------------------------------------+
> (quantum) port-list
>
> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
> | id                                   | name | mac_address       |
> fixed_ips
> |
>
> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
> | 65c5b6cf-5eef-483e-8b32-8cd07bb83e6d |      | fa:16:3e:58:29:31 |
> {"subnet_id": "80b21701-4b05-4585-985a-60905ff42531", "ip_address":
> "xxx.xxx.xxx.166"} |
> | 806b3d3e-35fe-4356-a833-a8a3d44ec9ca |      | fa:16:3e:7c:00:24 |
> {"subnet_id": "fd7f324c-25ec-4134-ab97-e827fcf12824", "ip_address":
> "10.5.5.1"}        |
> | 91e05e25-7f7b-4399-9da2-42fef05afe31 |      | fa:16:3e:fe:63:6c |
> {"subnet_id": "fd7f324c-25ec-4134-ab97-e827fcf12824", "ip_address":
> "10.5.5.2"}        |
> | a9043279-dd30-40b7-a1e3-8d340c5408c3 |      | fa:16:3e:71:60:57 |
> {"subnet_id": "80b21701-4b05-4585-985a-60905ff42531", "ip_address":
> "xxx.xxx.xxx.165"} |
>
> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
> (quantum) router-list
>
> +--------------------------------------+-------------+--------------------------------------------------------+
> | id                                   | name        |
> external_gateway_info                                  |
>
> +--------------------------------------+-------------+--------------------------------------------------------+
> | 3a703f92-e1d5-4f06-8970-8ae899d40a99 | demo-router | {"network_id":
> "b3e465b7-b5a2-45d5-8b24-aa8bea0ab0a0"} |
>
> +--------------------------------------+-------------+--------------------------------------------------------+
> (quantum) subnet-list
>
> +--------------------------------------+-----------------+------------------+--------------------------------------------------------+
> | id                                   | name            |
> cidr             | allocation_pools                                       |
>
> +--------------------------------------+-----------------+------------------+--------------------------------------------------------+
> | 80b21701-4b05-4585-985a-60905ff42531 | public-subnet   |
> xxx.xxx.xxx.0/24 | {"start": "xxx.xxx.xxx.165", "end": "xxx.xxx.xxx.170"} |
> | fd7f324c-25ec-4134-ab97-e827fcf12824 | demo-net-subnet | 10.5.5.0/24
> | {"start": "10.5.5.2", "end": "10.5.5.254"}             |
>
> +--------------------------------------+-----------------+------------------+--------------------------------------------------------+
> (quantum)
>
> Any more suggestions?
>
> Thanks!
> Marcelo Dieder
>
>
> On 09/04/2013 02:13 AM, happy idea wrote:
>
> You didn't install the DHCP agent; please refer to this guide:
> https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
>
>
> 2013/9/4 Marcelo Dieder <marcelodieder at gmail.com>
>
>>  Hi All!
>>
>> I have a ambient with 3 hosts (Network, Controller and Node1 (Qemu)). I
>> created an ambient based this tutorial (
>> http://docs.openstack.org/grizzly/basic-install/apt/content/basic-install_controller.html).
>> My problem is when I create a instance. The instance Instance doesn't get
>> IP address.
>>
>> checking http://169.254.169.254/2009-04-04/instance-id
>> failed 1/20: up 187.68. request failed
>> failed 2/20: up 190.06. request failed
>> failed 3/20: up 192.24. request failed
>> failed 4/20: up 194.43. request failed
>> failed 5/20: up 196.61. request failed
>> failed 6/20: up 198.82. request failed
>> failed 7/20: up 201.03. request failed
>> failed 8/20: up 203.22. request failed
>> failed 9/20: up 205.42. request failed
>> failed 10/20: up 207.64. request failed
>> failed 11/20: up 209.87. request failed
>> failed 12/20: up 212.08. request failed
>> failed 13/20: up 214.29. request failed
>> failed 14/20: up 216.49. request failed
>> failed 15/20: up 218.70. request failed
>> failed 16/20: up 220.91. request failed
>> failed 17/20: up 223.13. request failed
>> failed 18/20: up 225.38. request failed
>> failed 19/20: up 227.62. request failed
>> failed 20/20: up 229.87. request failed
>> failed to read iid from metadata. tried 20
>> no results found for mode=net. up 232.10. searched: nocloud configdrive ec2
>> failed to get instanceid of datasource
>> Starting dropbear sshd: generating rsa key... generating dsa key... OK
>> === network info ===
>> ifinfo: lo,up,127.0.0.1,8,::1
>> ifinfo: eth0,up,,8,fe80::f816:3eff:fef3:2a6d
>> === datasource: None None ===
>>
>>
>> At the controller I received the warning:
>>
>>
>> 2013-09-04 00:40:44  WARNING [quantum.scheduler.dhcp_agent_scheduler] No
>> active DHCP agents
>> 2013-09-04 00:40:44  WARNING [quantum.db.agentschedulers_db] Fail
>> scheduling network {'status': u'ACTIVE', 'subnets':
>> [u'80b21701-4b05-4585-985a-60905ff42531'], 'name': u'public',
>> 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id':
>> u'27d2b93f11ac4e91a3edb26edb28fb6b', 'provider:network_type': u'gre',
>> 'router:external': True, 'shared': False, 'id':
>> u'b3e465b7-b5a2-45d5-8b24-aa8bea0ab0a0', 'provider:segmentation_id': 2L}
>>
>> And when I executed:
>>
>>
>> root at cloud:~# quantum agent-list
>> Unknown command ['agent-list']
>>
>> Other commands:
>>
>> root at cloud:~# nova-manage service list
>> Binary           Host                                 Zone
>> Status     State Updated_At
>> nova-cert        cloud                                internal
>> enabled    :-)   2013-09-04 03:59:12
>> nova-consoleauth cloud                                internal
>> enabled    :-)   2013-09-04 03:59:12
>> nova-scheduler   cloud                                internal
>> enabled    :-)   2013-09-04 03:59:12
>> nova-conductor   cloud                                internal
>> enabled    :-)   2013-09-04 03:59:12
>> nova-compute     c01                                  nova
>> enabled    :-)   2013-09-04 03:59:04
>>
>> root at c01:~# nova list
>>
>> +--------------------------------------+---------+--------+------------------------+
>> | ID                                   | Name    | Status |
>> Networks               |
>>
>> +--------------------------------------+---------+--------+------------------------+
>> | 2c704622-1b5f-4651-9553-51aabee9090c | test29 | ACTIVE |
>> public=xxx.xxx.xxx.xxx |
>>
>> I searched but I couldn't find any resolution. Anybody has this problem?
>>
>> Cheers.
>>
>> Marcelo Dieder
>>
>>
>>
>> _______________________________________________
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>
>

