[openstack-dev] [Fuel][Neutron ML2][VMWare] NetworkNotFoundForBridge: Network could not be found for bridge br-int

Foss Geek thefossgeek at gmail.com
Fri Jan 9 21:08:44 UTC 2015


Dear All,

I am trying to integrate OpenStack + vCenter + Neutron + the VMware dvSwitch
ML2 mechanism driver.

I deployed a two-node OpenStack environment (controller + compute with KVM)
with Neutron VLAN using Fuel 5.1. I then installed nova-compute via yum on
the controller node and configured it to point to vCenter. I am also using
Neutron VLAN with the VMware dvSwitch ML2 mechanism driver. My vCenter is
configured as suggested by this doc:
https://www.mirantis.com/blog/managing-vmware-vcenter-resources-mirantis-openstack-5-0-part-1-create-vsphere-cluster/
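
For reference, the vCenter-facing nova-compute on the controller is
configured roughly along these lines (a sketch of the relevant nova.conf
options; the host, password, and cluster values are placeholders):

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver
[vmware]
host_ip=<vcenter_ip>
host_username=root
host_password=<password>
cluster_name=<vsphere_cluster>
wsdl_location=file:///opt/vmware/vimService.wsdl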

I am able to create a network from Horizon, and I can see the same network
created in vCenter. When I try to create a VM, I get the error below in
Horizon:

Error: Failed to launch instance "test-01": Please try again later [Error:
No valid host was found. ].
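
For reference, the CLI equivalent of the network I create from Horizon would
be something like this (the network name and VLAN ID are placeholders):

# neutron net-create test-net --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 3000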

Here is the error message from the Instance Overview tab:

Instance Overview
Name: test-01
ID: 309a1f47-83b6-4ab4-9d71-642a2000c8a1
Status: Error
Availability Zone: nova
Created: Jan. 9, 2015, 8:16 p.m.
Uptime: 0 minutes
Fault:
    Message: No valid host was found.
    Code: 500
    Details: File "/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py", line 108, in schedule_run_instance raise exception.NoValidHost(reason="")
    Created: Jan. 9, 2015, 8:16 p.m.

I am getting the errors below in nova-all.log:


<183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.135 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Authenticating user token __call__ /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:676
<183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.136 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Removing headers from request environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role _remove_auth_headers /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:733
<183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.137 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Returning cached token _cache_get /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1545
<183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.138 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Storing token in cache store /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1460
<183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.139 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Received request from user: 4564fea80fa14e1daed160afa074d389 with project_id : dd32714d9009495bb51276e284380d6a and roles: admin,_member_ _build_user_headers /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:996
<183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.141 31870 DEBUG routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Matched GET /dd32714d9009495bb51276e284380d6a/servers/309a1f47-83b6-4ab4-9d71-642a2000c8a1 __call__ /usr/lib/python2.6/site-packages/routes/middleware.py:100
<183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.142 31870 DEBUG routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Route path: '/{project_id}/servers/:(id)', defaults: {'action': u'show', 'controller': <nova.api.openstack.wsgi.Resource object at 0x43e2550>} __call__ /usr/lib/python2.6/site-packages/routes/middleware.py:102
<183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.142 31870 DEBUG routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Match dict: {'action': u'show', 'controller': <nova.api.openstack.wsgi.Resource object at 0x43e2550>, 'project_id': u'dd32714d9009495bb51276e284380d6a', 'id': u'309a1f47-83b6-4ab4-9d71-642a2000c8a1'} __call__ /usr/lib/python2.6/site-packages/routes/middleware.py:103
<183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.143 31870 DEBUG nova.api.openstack.wsgi [req-05089e83-e4c1-4d90-b7c5-065226e55d91 None] Calling method '<bound method Controller.show of <nova.api.openstack.compute.servers.Controller object at 0x4204290>>' (Content-type='None', Accept='application/json') _process_stack /usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:945
<183>Jan  9 20:16:23 node-18 nova-compute 2015-01-09 20:16:23.170 29111 DEBUG nova.virt.vmwareapi.network_util [req-27cf4cd7-9184-4d7e-b57a-19ef3caeef26 None] Network br-int not found on host! get_network_with_the_name /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/network_util.py:80
<179>Jan  9 20:16:23 node-18 nova-compute 2015-01-09 20:16:23.171 29111 ERROR nova.compute.manager [req-27cf4cd7-9184-4d7e-b57a-19ef3caeef26 None] [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1] Instance failed to spawn
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1] Traceback (most recent call last):
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1714, in _spawn
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     block_device_info)
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 626, in spawn
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     admin_password, network_info, block_device_info)
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 285, in spawn
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     vif_infos = _get_vif_infos()
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 276, in _get_vif_infos
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     self._is_neutron)
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vif.py", line 146, in get_network_ref
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     network_ref = get_neutron_network(session, network_name, cluster, vif)
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vif.py", line 138, in get_neutron_network
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     raise exception.NetworkNotFoundForBridge(bridge=bridge)
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1] NetworkNotFoundForBridge: Network could not be found for bridge br-int
2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]


# cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep -v ^# | grep -v ^$
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,dvs
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = physnet1:3000:3999,physnet2
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[agent]
l2_population=False
polling_interval=2
arp_responder=False
[ovs]
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=physnet1:br-ex
[ml2_vmware]
host_ip=<vcenter_ip>
host_username=root
host_password=<password>
wsdl_location=file:///opt/vmware/vimService.wsdl
task_poll_interval=5.0
api_retry_count=10
network_maps = physnet1:dvSwitch
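
In case it matters: network_vlan_ranges maps tenant VLANs 3000:3999 to
physnet1, and network_maps points physnet1 at the dvSwitch. The provider
attributes that a given Neutron network actually received can be checked
with something like this (the network name is a placeholder):

# neutron net-show <network_name> | grep provider: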


# cat /etc/neutron/plugins/ml2/ml2_conf_vmware_dvs.ini | grep -v ^# | grep -v ^$
[ml2_vmware]
host_ip=<vcenter_ip>
host_username=root
host_password=<password>
wsdl_location=file:///opt/vmware/vimService.wsdl
task_poll_interval=5.0
api_retry_count=10
network_maps = physnet1:dvSwitch


# ovs-vsctl show
80248645-469e-4b64-9408-7d26efce777f
    Bridge "br-eth3"
        Port "br-eth3"
            Interface "br-eth3"
                type: internal
        Port "eth3"
            Interface "eth3"
    Bridge br-int
        fail_mode: secure
        Port "tape9c03794-63"
            tag: 2
            Interface "tape9c03794-63"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
        Port int-br-prv
            Interface int-br-prv
    Bridge br-ex
        Port "br-ex--br-eth2"
            trunks: [0]
            Interface "br-ex--br-eth2"
                type: patch
                options: {peer="br-eth2--br-ex"}
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
    Bridge br-storage
        Port "br-storage--br-eth0"
            Interface "br-storage--br-eth0"
                type: patch
                options: {peer="br-eth0--br-storage"}
        Port br-storage
            Interface br-storage
                type: internal
    Bridge br-mgmt
        Port br-mgmt
            Interface br-mgmt
                type: internal
        Port "br-mgmt--br-eth0"
            Interface "br-mgmt--br-eth0"
                type: patch
                options: {peer="br-eth0--br-mgmt"}
    Bridge "br-eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
        Port "br-eth0--br-storage"
            tag: 102
            Interface "br-eth0--br-storage"
                type: patch
                options: {peer="br-storage--br-eth0"}
        Port "br-eth0--br-mgmt"
            tag: 101
            Interface "br-eth0--br-mgmt"
                type: patch
                options: {peer="br-mgmt--br-eth0"}
        Port "br-eth0--br-prv"
            Interface "br-eth0--br-prv"
                type: patch
                options: {peer="br-prv--br-eth0"}
        Port "br-eth0--br-fw-admin"
            trunks: [0]
            Interface "br-eth0--br-fw-admin"
                type: patch
                options: {peer="br-fw-admin--br-eth0"}
        Port "eth0"
            Interface "eth0"
    Bridge "br-eth2"
        Port "eth2"
            Interface "eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "br-eth2--br-ex"
            trunks: [0]
            Interface "br-eth2--br-ex"
                type: patch
                options: {peer="br-ex--br-eth2"}
    Bridge "br-eth1"
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-prv
        Port "br-prv--br-eth0"
            Interface "br-prv--br-eth0"
                type: patch
                options: {peer="br-eth0--br-prv"}
        Port "qg-de0a02f9-d2"
            Interface "qg-de0a02f9-d2"
                type: internal
        Port br-prv
            Interface br-prv
                type: internal
        Port phy-br-prv
            Interface phy-br-prv
    Bridge br-fw-admin
        Port br-fw-admin
            Interface br-fw-admin
                type: internal
        Port "br-fw-admin--br-eth0"
            trunks: [0]
            Interface "br-fw-admin--br-eth0"
                type: patch
                options: {peer="br-eth0--br-fw-admin"}
    ovs_version: "1.10.2"


# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 14:fe:b5:0f:b6:79 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 14:fe:b5:0f:b6:7b brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 14:fe:b5:0f:b6:7d brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 14:fe:b5:0f:b6:7f brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 6a:26:28:63:48:52 brd ff:ff:ff:ff:ff:ff
7: br-eth3: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 14:fe:b5:0f:b6:7f brd ff:ff:ff:ff:ff:ff
8: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether a6:3d:66:56:16:40 brd ff:ff:ff:ff:ff:ff
9: br-eth1: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 14:fe:b5:0f:b6:7b brd ff:ff:ff:ff:ff:ff
10: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 8e:6a:fb:1f:18:47 brd ff:ff:ff:ff:ff:ff
14: br-fw-admin: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether e6:a1:ea:f3:0f:45 brd ff:ff:ff:ff:ff:ff
15: br-storage: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 42:a0:c7:5e:45:4d brd ff:ff:ff:ff:ff:ff
16: br-eth2: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 14:fe:b5:0f:b6:7d brd ff:ff:ff:ff:ff:ff
17: br-prv: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 16:23:fe:ec:eb:4f brd ff:ff:ff:ff:ff:ff
19: br-eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 14:fe:b5:0f:b6:79 brd ff:ff:ff:ff:ff:ff
20: br-mgmt: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether b6:9c:f9:60:a3:40 brd ff:ff:ff:ff:ff:ff
22: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 92:c8:0e:96:13:db brd ff:ff:ff:ff:ff:ff
33: phy-br-prv: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether b2:29:ee:f4:86:16 brd ff:ff:ff:ff:ff:ff
34: int-br-prv: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 6e:c4:d3:3e:c2:11 brd ff:ff:ff:ff:ff:ff
57: phy-br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 96:8b:87:06:4b:e3 brd ff:ff:ff:ff:ff:ff
58: int-br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fe:08:e6:ba:bf:d3 brd ff:ff:ff:ff:ff:ff


# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes

I guess I am missing something.

It looks like my issue is similar to this one:
https://ask.openstack.org/en/question/43594/vmware-neutron-bridging-problem/

I have configured br100 with VLAN ID 103 in vCenter, but I don't have br100
on my controller node, and I am not sure how to create it there.
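
If I am reading the traceback right, get_neutron_network() in
nova/virt/vmwareapi/vif.py looks up a vCenter port group named after nova's
integration bridge and raises NetworkNotFoundForBridge when no such port
group exists. So I am wondering whether I should create a br-int port group
on the dvSwitch instead, or point nova at the port group I already have by
overriding the name in nova.conf, roughly like this (my guess, untested):

[vmware]
# Name of the vCenter port group the vmwareapi VIF driver looks up
# (defaults to br-int).
integration_bridge=br100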

*NOTE:* I have another OpenStack environment, deployed as a vCenter
environment using Fuel 5.1, in which I manually installed and configured
Neutron + the VMware dvSwitch ML2 mechanism driver. It works fine with the
same vCenter.

Any help?

I am happy to provide more info if required.

-- 
Thanks & Regards
E-Mail: thefossgeek at gmail.com
IRC: neophy
Blog: http://lmohanphy.livejournal.com/