<div dir="ltr"><div>Dear All,</div><div><br></div><div>I am trying to integrate OpenStack + vCenter + Neutron + the VMware dvSwitch ML2 mechanism driver.<br></div><div><br></div><div>I deployed a two-node OpenStack environment (controller + compute with KVM) with Neutron VLAN using Fuel 5.1. I then installed nova-compute via yum on the controller node and configured it to point to vCenter. I am also using Neutron VLAN with the VMware dvSwitch ML2 mechanism driver. My vCenter is properly configured as suggested by this doc: <a href="https://www.mirantis.com/blog/managing-vmware-vcenter-resources-mirantis-openstack-5-0-part-1-create-vsphere-cluster/">https://www.mirantis.com/blog/managing-vmware-vcenter-resources-mirantis-openstack-5-0-part-1-create-vsphere-cluster/</a></div><div><br></div><div>I am able to create a network from Horizon, and I can see the same network created in vCenter. When I try to create a VM, I get the error below in Horizon.</div><div><br></div><div>Error: Failed to launch instance "test-01": Please try again later [Error: No valid host was found. ].</div><div><br></div><div>Here is the error message from the Instance Overview tab:</div><div><br></div><div>Instance Overview</div><div>Info</div><div>Name</div><div>test-01</div><div>ID</div><div>309a1f47-83b6-4ab4-9d71-642a2000c8a1</div><div>Status</div><div>Error</div><div>Availability Zone</div><div>nova</div><div>Created</div><div>Jan. 9, 2015, 8:16 p.m.</div><div>Uptime</div><div>0 minutes</div><div>Fault</div><div>Message</div><div>No valid host was found.</div><div>Code</div><div>500</div><div>Details</div><div>File "/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py", line 108, in schedule_run_instance raise exception.NoValidHost(reason="")</div><div>Created</div><div>Jan. 
9, 2015, 8:16 p.m</div><div><br></div><div>Getting the below error in nova-all.log:</div><div><br></div><div><br></div><div><183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.135 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Authenticating user token __call__ /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:676</div><div><183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.136 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Removing headers from request environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role _remove_auth_headers /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:733</div><div><183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.137 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Returning cached token _cache_get /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1545</div><div><183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.138 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Storing token in cache store /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1460</div><div><183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.139 31870 DEBUG keystoneclient.middleware.auth_token [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Received request from user: 4564fea80fa14e1daed160afa074d389 with project_id : dd32714d9009495bb51276e284380d6a and roles: admin,_member_  _build_user_headers /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:996</div><div><183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.141 31870 DEBUG routes.middleware 
[req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Matched GET /dd32714d9009495bb51276e284380d6a/servers/309a1f47-83b6-4ab4-9d71-642a2000c8a1 __call__ /usr/lib/python2.6/site-packages/routes/middleware.py:100</div><div><183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.142 31870 DEBUG routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Route path: '/{project_id}/servers/:(id)', defaults: {'action': u'show', 'controller': <nova.api.openstack.wsgi.Resource object at 0x43e2550>} __call__ /usr/lib/python2.6/site-packages/routes/middleware.py:102</div><div><183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.142 31870 DEBUG routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Match dict: {'action': u'show', 'controller': <nova.api.openstack.wsgi.Resource object at 0x43e2550>, 'project_id': u'dd32714d9009495bb51276e284380d6a', 'id': u'309a1f47-83b6-4ab4-9d71-642a2000c8a1'} __call__ /usr/lib/python2.6/site-packages/routes/middleware.py:103</div><div><183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.143 31870 DEBUG nova.api.openstack.wsgi [req-05089e83-e4c1-4d90-b7c5-065226e55d91 None] Calling method '<bound method Controller.show of <nova.api.openstack.compute.servers.Controller object at 0x4204290>>' (Content-type='None', Accept='application/json') _process_stack /usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:945</div><div><183>Jan  9 20:16:23 node-18 nova-compute 2015-01-09 20:16:23.170 29111 DEBUG nova.virt.vmwareapi.network_util [req-27cf4cd7-9184-4d7e-b57a-19ef3caeef26 None] Network br-int not found on host! 
get_network_with_the_name /usr/lib/python2.6/site-packages/nova/virt/vmwareapi/network_util.py:80</div><div><179>Jan  9 20:16:23 node-18 nova-compute 2015-01-09 20:16:23.171 29111 ERROR nova.compute.manager [req-27cf4cd7-9184-4d7e-b57a-19ef3caeef26 None] [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1] Instance failed to spawn</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1] Traceback (most recent call last):</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1714, in _spawn</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     block_device_info)</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 626, in spawn</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     admin_password, network_info, block_device_info)</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 285, in spawn</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     vif_infos = _get_vif_infos()</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 276, in _get_vif_infos</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     self._is_neutron)</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 
309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vif.py", line 146, in get_network_ref</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     network_ref = get_neutron_network(session, network_name, cluster, vif)</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vif.py", line 138, in get_neutron_network</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]     raise exception.NetworkNotFoundForBridge(bridge=bridge)</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1] NetworkNotFoundForBridge: Network could not be found for bridge br-int</div><div>2015-01-09 20:16:23.171 29111 TRACE nova.compute.manager [instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1]</div><div><br></div><div><br></div><div># cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep -v ^# | grep -v ^$<br></div><div>[ml2]</div><div>type_drivers = vlan</div><div>tenant_network_types = vlan</div><div>mechanism_drivers = openvswitch,dvs</div><div>[ml2_type_flat]</div><div>[ml2_type_vlan]</div><div>network_vlan_ranges = physnet1:3000:3999,physnet2</div><div>[ml2_type_gre]</div><div>[ml2_type_vxlan]</div><div>[securitygroup]</div><div>firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</div><div>enable_security_group = 
True</div><div>[agent]</div><div>l2_population=False</div><div>polling_interval=2</div><div>arp_responder=False</div><div>[ovs]</div><div>enable_tunneling=False</div><div>integration_bridge=br-int</div><div>bridge_mappings=physnet1:br-ex</div><div>[ml2_vmware]</div><div>host_ip=<vcenter_ip></div><div>host_username=root</div><div>host_password=<password></div><div>wsdl_location=file:///opt/vmware/vimService.wsdl</div><div>task_poll_interval=5.0</div><div>api_retry_count=10</div><div>network_maps = physnet1:dvSwitch</div><div><br></div><div><br></div><div># cat /etc/neutron/plugins/ml2/ml2_conf_vmware_dvs.ini | grep -v ^# | grep -v ^$</div><div>[ml2_vmware]</div><div>host_ip=<vcenter_ip></div><div>host_username=root</div><div>host_password=<password></div><div>wsdl_location=file:///opt/vmware/vimService.wsdl</div><div>task_poll_interval=5.0</div><div>api_retry_count=10</div><div>network_maps = physnet1:dvSwitch</div><div><br></div><div><br></div><div># ovs-vsctl show</div><div>80248645-469e-4b64-9408-7d26efce777f</div><div>    Bridge "br-eth3"</div><div>        Port "br-eth3"</div><div>            Interface "br-eth3"</div><div>                type: internal</div><div>        Port "eth3"</div><div>            Interface "eth3"</div><div>    Bridge br-int</div><div>        fail_mode: secure</div><div>        Port "tape9c03794-63"</div><div>            tag: 2</div><div>            Interface "tape9c03794-63"</div><div>                type: internal</div><div>        Port br-int</div><div>            Interface br-int</div><div>                type: internal</div><div>        Port int-br-ex</div><div>            Interface int-br-ex</div><div>        Port int-br-prv</div><div>            Interface int-br-prv</div><div>    Bridge br-ex</div><div>        Port "br-ex--br-eth2"</div><div>            trunks: [0]</div><div>            Interface "br-ex--br-eth2"</div><div>                type: patch</div><div>                options: {peer="br-eth2--br-ex"}</div><div>        Port 
br-ex</div><div>            Interface br-ex</div><div>                type: internal</div><div>        Port phy-br-ex</div><div>            Interface phy-br-ex</div><div>    Bridge br-storage</div><div>        Port "br-storage--br-eth0"</div><div>            Interface "br-storage--br-eth0"</div><div>                type: patch</div><div>                options: {peer="br-eth0--br-storage"}</div><div>        Port br-storage</div><div>            Interface br-storage</div><div>                type: internal</div><div>    Bridge br-mgmt</div><div>        Port br-mgmt</div><div>            Interface br-mgmt</div><div>                type: internal</div><div>        Port "br-mgmt--br-eth0"</div><div>            Interface "br-mgmt--br-eth0"</div><div>                type: patch</div><div>                options: {peer="br-eth0--br-mgmt"}</div><div>    Bridge "br-eth0"</div><div>        Port "br-eth0"</div><div>            Interface "br-eth0"</div><div>                type: internal</div><div>        Port "br-eth0--br-storage"</div><div>            tag: 102</div><div>            Interface "br-eth0--br-storage"</div><div>                type: patch</div><div>                options: {peer="br-storage--br-eth0"}</div><div>        Port "br-eth0--br-mgmt"</div><div>            tag: 101</div><div>            Interface "br-eth0--br-mgmt"</div><div>                type: patch</div><div>                options: {peer="br-mgmt--br-eth0"}</div><div>        Port "br-eth0--br-prv"</div><div>            Interface "br-eth0--br-prv"</div><div>                type: patch</div><div>                options: {peer="br-prv--br-eth0"}</div><div>        Port "br-eth0--br-fw-admin"</div><div>            trunks: [0]</div><div>            Interface "br-eth0--br-fw-admin"</div><div>                type: patch</div><div>                options: {peer="br-fw-admin--br-eth0"}</div><div>        Port "eth0"</div><div>            Interface "eth0"</div><div>    Bridge "br-eth2"</div><div>        Port 
"eth2"</div><div>            Interface "eth2"</div><div>        Port "br-eth2"</div><div>            Interface "br-eth2"</div><div>                type: internal</div><div>        Port "br-eth2--br-ex"</div><div>            trunks: [0]</div><div>            Interface "br-eth2--br-ex"</div><div>                type: patch</div><div>                options: {peer="br-ex--br-eth2"}</div><div>    Bridge "br-eth1"</div><div>        Port "eth1"</div><div>            Interface "eth1"</div><div>        Port "br-eth1"</div><div>            Interface "br-eth1"</div><div>                type: internal</div><div>    Bridge br-prv</div><div>        Port "br-prv--br-eth0"</div><div>            Interface "br-prv--br-eth0"</div><div>                type: patch</div><div>                options: {peer="br-eth0--br-prv"}</div><div>        Port "qg-de0a02f9-d2"</div><div>            Interface "qg-de0a02f9-d2"</div><div>                type: internal</div><div>        Port br-prv</div><div>            Interface br-prv</div><div>                type: internal</div><div>        Port phy-br-prv</div><div>            Interface phy-br-prv</div><div>    Bridge br-fw-admin</div><div>        Port br-fw-admin</div><div>            Interface br-fw-admin</div><div>                type: internal</div><div>        Port "br-fw-admin--br-eth0"</div><div>            trunks: [0]</div><div>            Interface "br-fw-admin--br-eth0"</div><div>                type: patch</div><div>                options: {peer="br-eth0--br-fw-admin"}</div><div>    ovs_version: "1.10.2"</div><div><br></div><div><br></div><div># ip link</div><div>1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN</div><div>    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00</div><div>2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000</div><div>    link/ether 14:fe:b5:0f:b6:79 brd ff:ff:ff:ff:ff:ff</div><div>3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 
1000</div><div>    link/ether 14:fe:b5:0f:b6:7b brd ff:ff:ff:ff:ff:ff</div><div>4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000</div><div>    link/ether 14:fe:b5:0f:b6:7d brd ff:ff:ff:ff:ff:ff</div><div>5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000</div><div>    link/ether 14:fe:b5:0f:b6:7f brd ff:ff:ff:ff:ff:ff</div><div>6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN</div><div>    link/ether 6a:26:28:63:48:52 brd ff:ff:ff:ff:ff:ff</div><div>7: br-eth3: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether 14:fe:b5:0f:b6:7f brd ff:ff:ff:ff:ff:ff</div><div>8: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether a6:3d:66:56:16:40 brd ff:ff:ff:ff:ff:ff</div><div>9: br-eth1: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether 14:fe:b5:0f:b6:7b brd ff:ff:ff:ff:ff:ff</div><div>10: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether 8e:6a:fb:1f:18:47 brd ff:ff:ff:ff:ff:ff</div><div>14: br-fw-admin: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether e6:a1:ea:f3:0f:45 brd ff:ff:ff:ff:ff:ff</div><div>15: br-storage: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether 42:a0:c7:5e:45:4d brd ff:ff:ff:ff:ff:ff</div><div>16: br-eth2: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether 14:fe:b5:0f:b6:7d brd ff:ff:ff:ff:ff:ff</div><div>17: br-prv: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether 16:23:fe:ec:eb:4f brd ff:ff:ff:ff:ff:ff</div><div>19: br-eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether 14:fe:b5:0f:b6:79 brd ff:ff:ff:ff:ff:ff</div><div>20: br-mgmt: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether b6:9c:f9:60:a3:40 brd 
ff:ff:ff:ff:ff:ff</div><div>22: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN</div><div>    link/ether 92:c8:0e:96:13:db brd ff:ff:ff:ff:ff:ff</div><div>33: phy-br-prv: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000</div><div>    link/ether b2:29:ee:f4:86:16 brd ff:ff:ff:ff:ff:ff</div><div>34: int-br-prv: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000</div><div>    link/ether 6e:c4:d3:3e:c2:11 brd ff:ff:ff:ff:ff:ff</div><div>57: phy-br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000</div><div>    link/ether 96:8b:87:06:4b:e3 brd ff:ff:ff:ff:ff:ff</div><div>58: int-br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000</div><div>    link/ether fe:08:e6:ba:bf:d3 brd ff:ff:ff:ff:ff:ff</div><div><br></div><div><br></div><div># brctl show</div><div>bridge name     bridge id               STP enabled     interfaces</div><div>virbr0          8000.000000000000       yes</div><div><br></div><div>I guess I am missing something.</div><div><br></div><div>It looks like my issue is similar to this one: <a href="https://ask.openstack.org/en/question/43594/vmware-neutron-bridging-problem/">https://ask.openstack.org/en/question/43594/vmware-neutron-bridging-problem/</a></div><div><br></div><div>I have configured br100 with VLAN ID 103 in vCenter, but I don't have br100 on my controller node, and I am not sure how to create it there. </div><div><br></div><div><b>NOTE:</b> I have another OpenStack environment that I deployed as a vCenter environment using Fuel 5.1, where I manually installed and configured Neutron + the VMware dvSwitch ML2 mechanism driver. 
It works fine with the same vCenter.</div><div><br></div><div>Any help?</div><div><br></div><div>I am happy to provide more info if required.</div><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr"><span style="color:rgb(57,51,24);line-height:20px;background-color:rgb(252,250,243)"><font size="4" face="georgia, serif">Thanks & Regards</font></span><div><div><span style="color:rgb(57,51,24);line-height:20px;background-color:rgb(252,250,243)"><font size="4" face="georgia, serif">E-Mail: <a href="mailto:thefossgeek@gmail.com" target="_blank">thefossgeek@gmail.com</a></font></span></div><div><span style="color:rgb(57,51,24);line-height:20px;background-color:rgb(252,250,243)"><font size="4" face="georgia, serif">IRC: neophy</font></span></div><div><span style="background-color:rgb(252,250,243)"><font color="#393318" size="4" face="georgia, serif"><span style="line-height:20px">Blog : <a href="http://lmohanphy.livejournal.com/" target="_blank">http://lmohanphy.livejournal.com/</a></span></font><br></span></div><div><br></div></div></div></div>
</div>