[Openstack] vm unable to get ip neutron with vmware nsx plugin

Scott Lowe scott.lowe at scottlowe.org
Thu Jul 28 17:48:00 UTC 2016


Please see my responses inline, prefixed by [SL].


> On Jul 28, 2016, at 11:25 AM, Vaidyanath Manogaran <vaidyanath.m at gmail.com> wrote:
> 
> It's just simple DVS.
> 
> core_plugin = vmware_nsx.plugin.NsxDvsPlugin


[SL] Ah. Gary can confirm, but if I'm not mistaken the DVS plugin only supports the vSphere Distributed Switch. It won't work with OVS.

Further, the GitHub repo for NSX only contains the NSX Neutron plugins, not NSX itself.


> On Thu, Jul 28, 2016 at 10:54 PM, Gary Kotton <gkotton at vmware.com> wrote:
> Hi,
> 
> Which backend NSX version are you using? Is this NSX|V, NSX|MH or simple DVS?
> 
> Thanks
> 
> Gary
> 
>  
> 
> From: Vaidyanath Manogaran <vaidyanath.m at gmail.com>
> Date: Thursday, July 28, 2016 at 8:04 PM
> To: Scott Lowe <scott.lowe at scottlowe.org>
> Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>, "community at lists.openstack.org" <community at lists.openstack.org>
> Subject: Re: [Openstack] vm unable to get ip neutron with vmware nsx plugin
> 
>  
> 
> Hi Scott,
> 
> Thank you for the reply. My replies are inline, prefixed by [MV].
> 
>  
> 
> On Thu, Jul 28, 2016 at 8:29 PM, Scott Lowe <scott.lowe at scottlowe.org> wrote:
> 
> Please see my responses inline, prefixed by [SL].
> 
> 
> On Jul 28, 2016, at 2:43 AM, Vaidyanath Manogaran <vaidyanath.m at gmail.com> wrote:
> >
> > 1- Controller node
> >    Services - keystone, glance, neutron, nova
> >    neutron plugins used - vmware-nsx - https://github.com/openstack/vmware-nsx/
> >    neutron agents - openvswitch agent
> > 2- Compute node
> >    Services - nova-compute
> 
> 
> [SL] May I ask what version of NSX you're running?
> [MV] I have installed it from source, picked up from the GitHub stable/mitaka branch - https://github.com/openstack/vmware-nsx/tree/stable/mitaka
> 
> > I have all the services up and running, but when I provision a VM, the VM is not assigned the IP address that is offered by the DHCP server.
> 
> 
> [SL] NSX doesn't currently handle DHCP on its own, so you'll need the Neutron DHCP agent running somewhere. Wherever it's running will need to have OVS installed and be registered into NSX as a "hypervisor" so that the DHCP agent can be plumbed into the overlay networks.
> 
> One common arrangement is to build a Neutron "network node" that is running the DHCP agent and metadata agent, and register that into NSX. 
> 
> [MV] I have set up only the controller, with the neutron metadata and neutron DHCP agents:
> 
>  
> 
> root at controller:~# neutron agent-list
> 
> +--------------------------------------+----------------+------------+-------+----------------+------------------------+
> 
> | id                                   | agent_type     | host       | alive | admin_state_up | binary                 |
> 
> +--------------------------------------+----------------+------------+-------+----------------+------------------------+
> 
> | 5555dbd8-14d0-4a47-83bd-890737bcfe08 | DHCP agent     | controller | :-)   | True           | neutron-dhcp-agent     |
> 
> | f183a3b6-b065-4b90-b5b7-b3d819c30f5b | Metadata agent | controller | :-)   | True           | neutron-metadata-agent |
> 
> +--------------------------------------+----------------+------------+-------+----------------+------------------------+
> 
> root at controller:~#
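[SL] Since the DHCP agent on the controller has to plug its tap devices into OVS, it's also worth double-checking the agent configuration. Here's a minimal sketch of /etc/neutron/dhcp_agent.ini, assuming the stock Mitaka option names (illustrative values, not your actual file):

```ini
[DEFAULT]
# The DHCP agent's tap devices must be wired into Open vSwitch:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Often useful on provider/VLAN networks that have no Neutron router:
enable_isolated_metadata = True
```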
> 
>  
> 
>  
> 
> 
> > here are the config details:-
> >
> > root at controller:~# neutron net-show test
> > +---------------------------+--------------------------------------+
> > | Field                     | Value                                |
> > +---------------------------+--------------------------------------+
> > | admin_state_up            | True                                 |
> > | created_at                | 2016-07-28T13:35:22                  |
> > | description               |                                      |
> > | id                        | be2178a3-a268-47f4-809e-8e0024c6f054 |
> > | name                      | test                                 |
> > | port_security_enabled     | True                                 |
> > | provider:network_type     | vlan                                 |
> > | provider:physical_network | dvs                                  |
> > | provider:segmentation_id  | 110                                  |
> > | router:external           | False                                |
> > | shared                    | True                                 |
> > | status                    | ACTIVE                               |
> > | subnets                   | 5009ec57-4ca7-4e2b-962e-549e6bbee408 |
> > | tags                      |                                      |
> > | tenant_id                 | ce581005def94bb1947eac9ac15f15ea     |
> > | updated_at                | 2016-07-28T13:35:22                  |
> > +---------------------------+--------------------------------------+
> >
> > root at controller:~# neutron subnet-show testsubnet
> > +-------------------+------------------------------------------------------+
> > | Field             | Value                                                |
> > +-------------------+------------------------------------------------------+
> > | allocation_pools  | {"start": "192.168.18.246", "end": "192.168.18.248"} |
> > | cidr              | 192.168.18.0/24                                      |
> > | created_at        | 2016-07-28T14:56:54                                  |
> > | description       |                                                      |
> > | dns_nameservers   | 192.168.13.12                                        |
> > | enable_dhcp       | True                                                 |
> > | gateway_ip        | 192.168.18.1                                         |
> > | host_routes       |                                                      |
> > | id                | 5009ec57-4ca7-4e2b-962e-549e6bbee408                 |
> > | ip_version        | 4                                                    |
> > | ipv6_address_mode |                                                      |
> > | ipv6_ra_mode      |                                                      |
> > | name              | testsubnet                                           |
> > | network_id        | be2178a3-a268-47f4-809e-8e0024c6f054                 |
> > | subnetpool_id     |                                                      |
> > | tenant_id         | ce581005def94bb1947eac9ac15f15ea                     |
> > | updated_at        | 2016-07-28T14:56:54                                  |
> > +-------------------+------------------------------------------------------+
> >
> > root at controller:~# ovs-vsctl show
> > d516b5b1-db3f-4acd-856c-10d530c58c23
> >     Bridge br-dvs
> >         Port br-dvs
> >             Interface br-dvs
> >                 type: internal
> >         Port "eth1"
> >             Interface "eth1"
> >     Bridge br-int
> >         Port br-int
> >             Interface br-int
> >                 type: internal
> >         Port "tap91d8accd-6d"
> >             Interface "tap91d8accd-6d"
> >                 type: internal
> >     ovs_version: "2.5.0"
> >
> > root at controller:~# ip netns
> > qdhcp-be2178a3-a268-47f4-809e-8e0024c6f054
> >
> > root at controller:~# ip netns exec qdhcp-be2178a3-a268-47f4-809e-8e0024c6f054 ifconfig
> > lo        Link encap:Local Loopback
> >           inet addr:127.0.0.1  Mask:255.0.0.0
> >           inet6 addr: ::1/128 Scope:Host
> >           UP LOOPBACK RUNNING  MTU:65536  Metric:1
> >           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
> >
> > tap91d8accd-6d Link encap:Ethernet  HWaddr fa:16:3e:7f:5e:03
> >           inet addr:192.168.18.246  Bcast:192.168.18.255  Mask:255.255.255.0
> >           inet6 addr: fe80::f816:3eff:fe7f:5e03/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)
> >
> > root at controller:~# ping 192.168.18.246
> > PING 192.168.18.246 (192.168.18.246) 56(84) bytes of data.
> > ^C
> > --- 192.168.18.246 ping statistics ---
> > 20 packets transmitted, 0 received, 100% packet loss, time 18999ms
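[SL] One way to narrow this down: confirm whether DHCP requests from the VM ever reach the agent's tap interface. A sketch, using the namespace and tap names from your output (the capture itself must be run as root on the controller while the VM boots, so it's only echoed here):

```shell
# Names taken from the 'ip netns' / ifconfig output in this thread.
ns="qdhcp-be2178a3-a268-47f4-809e-8e0024c6f054"
tap="tap91d8accd-6d"

# Build the capture command; run it while the VM is booting. If no
# DHCPDISCOVER packets appear, the request is being dropped before it
# reaches the agent (e.g. VLAN 110 not trunked through to br-dvs/eth1).
capture="ip netns exec $ns tcpdump -ne -i $tap port 67 or port 68"
echo "$capture"
```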
> >
> > I don't have any agents running, because vmware_nsx should be taking care of the communication with Open vSwitch.
> >
> > Commandline: apt install openvswitch-switch
> > Install: openvswitch-switch:amd64 (2.5.0-0ubuntu1~cloud0), openvswitch-common:amd64 (2.5.0-0ubuntu1~cloud0, automatic)
> >
> 
> [SL] You need to ensure you are using the version of OVS that matches your version of NSX. At this time, I don't believe that's OVS 2.5.0 (the version shown in your command-line installation of OVS).
> 
> [MV] How do I ensure the supported version is installed? Is there a support matrix? If so, could you please share it?
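[SL] I'm not aware of a single published matrix for the DVS plugin, so you'd want to check VMware's interoperability documentation for your NSX release. As a quick sanity check, you can compare the installed OVS version against whatever minimum VMware documents. A rough sketch, where the 2.4.0 minimum is a placeholder and not an actual requirement:

```shell
# Hypothetical helper: compare dotted versions using GNU sort -V.
version_ge() {
    # succeeds when $1 >= $2
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# On the controller, the live version could be read with:
#   installed=$(ovs-vsctl --version | awk 'NR==1 {print $NF}')
installed="2.5.0"   # from the apt output earlier in this thread
required="2.4.0"    # placeholder minimum; check VMware's docs for the real value

if version_ge "$installed" "$required"; then
    echo "OVS $installed satisfies the assumed minimum $required"
else
    echo "OVS $installed is older than the assumed minimum $required"
fi
```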
> 
> --
> Scott
> 
> 
> 
> 
>  
> 
> --
> 
> Regards,
> 
> Vaidyanath
> +91-9483465528(M)
> 
> 
> 
> 
> -- 
> Regards,
> 
> Vaidyanath
> +91-9483465528(M)




