[Openstack] vm unable to get ip neutron with vmware nsx plugin

Gary Kotton gkotton at vmware.com
Fri Jul 29 07:39:11 UTC 2016


Hi,
Sorry, the patch below was missing the following:

In neutron/agent/linux/interface.py, add:

    cfg.StrOpt('dvs_integration_bridge',
               default='br-dvs',
               help=_('Name of Open vSwitch bridge to use for DVS networks')),

The dvs_integration_bridge option was not defined.

This is the name of the OVS bridge that is connected to the vNIC, which in turn is connected to the DVS.
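
For context, here is a minimal sketch (not part of Gary's patch) of how such an option is registered with oslo.config so that cfg.CONF.dvs_integration_bridge resolves in the DHCP agent and the NoSuchOptError in the traceback below goes away; in interface.py the StrOpt is simply appended to the module's existing OPTS list, and the matching dhcp_agent.ini entry would then be dvs_integration_bridge = br-dvs:

    # Sketch only: registering a StrOpt with oslo.config. In Neutron the
    # option would be appended to the OPTS list in
    # neutron/agent/linux/interface.py, which is already registered there.
    from oslo_config import cfg

    OPTS = [
        cfg.StrOpt('dvs_integration_bridge',
                   default='br-dvs',
                   help='Name of Open vSwitch bridge to use for DVS networks'),
    ]

    cfg.CONF.register_opts(OPTS)

    # After registration the DHCP agent can read the value:
    # bridge_name = cfg.CONF.dvs_integration_bridge  # -> 'br-dvs'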

We need to figure out how to upstream this stuff.

Please note that with the simple DVS plugin you will not have security group or layer 3 support.

Thanks
Gary

From: Vaidyanath Manogaran <vaidyanath.m at gmail.com>
Date: Friday, July 29, 2016 at 10:35 AM
To: Gary Kotton <gkotton at vmware.com>
Cc: Scott Lowe <scott.lowe at scottlowe.org>, "openstack at lists.openstack.org" <openstack at lists.openstack.org>, "community at lists.openstack.org" <community at lists.openstack.org>
Subject: Re: [Openstack] vm unable to get ip neutron with vmware nsx plugin

Hi Gary,
After I patched in your code, I don't see the DHCP server starting now.
Am I missing something here?

2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent [-] Unable to disable dhcp for f9652dac-7f9e-4910-8d8f-38e84a9fa7c7.
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/dhcp/agent.py", line 112, in call_driver
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent     getattr(driver, action)(**action_kwargs)
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 226, in disable
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent     self._destroy_namespace_and_port()
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 231, in _destroy_namespace_and_port
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent     self.device_manager.destroy(self.network, self.interface_name)
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 1311, in destroy
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent     device_name, bridge=self.conf.dvs_integration_bridge,
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2183, in __getattr__
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent     raise NoSuchOptError(name)
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent NoSuchOptError: no such option in group DEFAULT: dvs_integration_bridge
2016-07-29 19:19:46.101 12719 ERROR neutron.agent.dhcp.agent
2016-07-29 19:19:46.102 12719 INFO neutron.agent.dhcp.agent [-] Starting network e6ec81cb-fc16-47a6-8bf0-d29a1a3bfa04 dhcp configuration
2016-07-29 19:19:46.103 12719 DEBUG neutron.agent.dhcp.agent [-] Calling driver for network: e6ec81cb-fc16-47a6-8bf0-d29a1a3bfa04 action: enable call_driver /usr/lib/python2.7/dist-packages/neutron/agent/dhcp/agent.py:103
2016-07-29 19:19:46.104 12719 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/dhcp/e6ec81cb-fc16-47a6-8bf0-d29a1a3bfa04/pid get_value_from_file /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:225
2016-07-29 19:19:46.104 12719 DEBUG neutron.agent.linux.dhcp [-] DHCP port dhcpd3377d3c-a0d1-5d71-9947-f17125c357bb-e6ec81cb-fc16-47a6-8bf0-d29a1a3bfa04 on network e6ec81cb-fc16-47a6-8bf0-d29a1a3bfa04 does not yet exist. Checking for a reserved port. _setup_reserved_dhcp_port /usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py:1123
2016-07-29 19:19:46.105 12719 DEBUG neutron.agent.linux.dhcp [-] DHCP port dhcpd3377d3c-a0d1-5d71-9947-f17125c357bb-e6ec81cb-fc16-47a6-8bf0-d29a1a3bfa04 on network e6ec81cb-fc16-47a6-8bf0-d29a1a3bfa04 does not yet exist. Creating new one. _setup_new_dhcp_port /usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py:1144
2016-07-29 19:19:46.106 12719 DEBUG oslo_messaging._drivers.amqpdriver [-] CALL msg_id: 7d026a3916934e58a4a346dcc8d30491 exchange 'neutron' topic 'q-plugin' _send /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:454
2016-07-29 19:19:46.359 12719 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 7d026a3916934e58a4a346dcc8d30491 __call__ /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:302
2016-07-29 19:19:46.361 12719 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qdhcp-e6ec81cb-fc16-47a6-8bf0-d29a1a3bfa04', 'ip', 'link', 'set', 'tapf8af8441-cc', 'up'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:84


Regards,
Vaidyanath

On Thu, Jul 28, 2016 at 11:49 PM, Vaidyanath Manogaran <vaidyanath.m at gmail.com> wrote:
Thanks Gary for the clarification.

But I still don't understand one thing: when you say the DHCP agent is configured with the OVS agent,
do you mean that if we use this code we don't need the OVS agent?

I have set up the DHCP server with Linux dnsmasq, which gets triggered by the DHCP agent.
Here is the content of my dhcp_agent.ini:

ovs_integration_bridge = br-dvs
enable_metadata_network = True
enable_isolated_metadata = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
debug = True
use_namespaces=True
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
verbose = True
ovs_use_veth = False
dhcp_override_mac = 00:50:56:b4:41:e1



On Thu, Jul 28, 2016 at 11:24 PM, Gary Kotton <gkotton at vmware.com> wrote:
Ok,
I know the issue – the problem is that the ports in OVS are not being configured with the VLAN tag.
The reason for this is that the plugin does not have an agent that configures them. You can patch the DHCP agent with the following code:

In neutron/agent/linux/dhcp.py:

    # NOTE: this patch uses ovs_lib; if dhcp.py does not already import it,
    # add "from neutron.agent.common import ovs_lib" at the top of the module.
    def setup(self, network):
        """Create and initialize a device for network's DHCP on this host."""
        port = self.setup_dhcp_port(network)
        self._update_dhcp_port(network, port)
        interface_name = self.get_interface_name(network, port)

        if ip_lib.ensure_device_is_ready(interface_name,
                                         namespace=network.namespace):
            LOG.debug('Reusing existing device: %s.', interface_name)
        else:
            try:
                if (cfg.CONF.core_plugin and
                    cfg.CONF.core_plugin.endswith('NsxDvsPlugin')):
                    # Plug the DHCP port into the DVS integration bridge
                    # instead of the default integration bridge.
                    self.driver.plug(network.id,
                                     port.id,
                                     interface_name,
                                     port.mac_address,
                                     namespace=network.namespace,
                                     mtu=network.get('mtu'),
                                     bridge=self.conf.dvs_integration_bridge)
                    vlan_tag = getattr(network, 'provider:segmentation_id',
                                       None)
                    # Tag the port so DHCP traffic is placed on the
                    # network's VLAN (skip networks with no segmentation id).
                    if vlan_tag:
                        br_dvs = ovs_lib.OVSBridge(
                            self.conf.dvs_integration_bridge)
                        # When ovs_use_veth is set to True, the DEV_NAME_PREFIX
                        # will be changed from 'tap' to 'ns-' in
                        # OVSInterfaceDriver
                        dvs_port_name = interface_name.replace('ns-', 'tap')
                        br_dvs.set_db_attribute(
                            "Port", dvs_port_name, "tag", vlan_tag)
                else:
                    self.driver.plug(network.id,
                                     port.id,
                                     interface_name,
                                     port.mac_address,
                                     namespace=network.namespace,
                                     mtu=network.get('mtu'))
            # The original exception handling for the plug() call and the
            # rest of setup() continue unchanged below.

    def destroy(self, network, device_name):
        """Destroy the device used for the network's DHCP on this host."""
        if device_name:
            if (cfg.CONF.core_plugin and
                cfg.CONF.core_plugin.endswith('NsxDvsPlugin')):
                # Unplug from the DVS integration bridge that the port was
                # plugged into in setup() above.
                self.driver.unplug(
                    device_name, bridge=self.conf.dvs_integration_bridge,
                    namespace=network.namespace)
            else:
                self.driver.unplug(device_name, namespace=network.namespace)
        else:
            LOG.debug('No interface exists for network %s', network.id)

        self.plugin.release_dhcp_port(network.id,
                                      self.get_device_id(network))
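
After applying the patch, a quick way to confirm that the tag actually landed on the DHCP port is to read it back from OVSDB. A minimal sketch, assuming the bridge and tap names from the ovs-vsctl output further down in this thread (substitute your own DHCP tap device):

    # Sketch: read back the VLAN tag on the DHCP tap port on br-dvs.
    # Equivalent CLI check: ovs-vsctl get Port tap707eb11b-4b tag
    from neutron.agent.common import ovs_lib

    br_dvs = ovs_lib.OVSBridge('br-dvs')
    tag = br_dvs.db_get_val('Port', 'tap707eb11b-4b', 'tag')
    print('VLAN tag on DHCP port: %s' % tag)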

We still need to figure out how to upstream this code. The issue is that the DHCP agent is configured by the OVS agent and that is not needed….
Thanks
Gary

From: Vaidyanath Manogaran <vaidyanath.m at gmail.com>
Date: Thursday, July 28, 2016 at 8:33 PM

To: Gary Kotton <gkotton at vmware.com>
Cc: Scott Lowe <scott.lowe at scottlowe.org>, "openstack at lists.openstack.org" <openstack at lists.openstack.org>, "community at lists.openstack.org" <community at lists.openstack.org>
Subject: Re: [Openstack] vm unable to get ip neutron with vmware nsx plugin

The DHCP agent is part of the controller node.
The agent is connected to the DVS; what I mean is, when I create a network in Neutron, the port group is created successfully.
I just need to make sure how my MAC is getting assigned.

Also, I see that the VLAN tag ID is not getting mapped to the tap device in OVS.

root at controller:~# neutron agent-list
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
| id                                   | agent_type     | host       | alive | admin_state_up | binary                 |
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
| 5555dbd8-14d0-4a47-83bd-890737bcfe08 | DHCP agent     | controller | :-)   | True           | neutron-dhcp-agent     |
| f183a3b6-b065-4b90-b5b7-b3d819c30f5b | Metadata agent | controller | :-)   | True           | neutron-metadata-agent |
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
root at controller:~# vi /etc/neutron/neutron.conf
root at controller:~# ovs-vsctl show
d516b5b1-db3f-4acd-856c-10d530c58c23
    Bridge br-dvs
        Port "eth1"
            Interface "eth1"
        Port br-dvs
            Interface br-dvs
                type: internal
        Port "tap707eb11b-4b"
            Interface "tap707eb11b-4b"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.5.0"
root at controller:~#






On Thu, Jul 28, 2016 at 10:57 PM, Gary Kotton <gkotton at vmware.com> wrote:
Ok, thanks.
Where is the DHCP agent running?
You need to make sure that the agent is connected to the DVS that you are using in Nova. In addition to this, you need to make sure that it can use MACs that are allocated by OpenStack.


From: Vaidyanath Manogaran <vaidyanath.m at gmail.com>
Date: Thursday, July 28, 2016 at 8:25 PM
To: Gary Kotton <gkotton at vmware.com>
Cc: Scott Lowe <scott.lowe at scottlowe.org>, "openstack at lists.openstack.org" <openstack at lists.openstack.org>, "community at lists.openstack.org" <community at lists.openstack.org>

Subject: Re: [Openstack] vm unable to get ip neutron with vmware nsx plugin

It's just the simple DVS plugin.

core_plugin = vmware_nsx.plugin.NsxDvsPlugin


On Thu, Jul 28, 2016 at 10:54 PM, Gary Kotton <gkotton at vmware.com> wrote:
Hi,
Which backend NSX version are you using? Is this NSX|V, NSX|MH or simple DVS?
Thanks
Gary

From: Vaidyanath Manogaran <vaidyanath.m at gmail.com>
Date: Thursday, July 28, 2016 at 8:04 PM
To: Scott Lowe <scott.lowe at scottlowe.org>
Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>, "community at lists.openstack.org" <community at lists.openstack.org>
Subject: Re: [Openstack] vm unable to get ip neutron with vmware nsx plugin

Hi Scott,
Thank you for the reply. My replies are inline, prefixed by [MV].

On Thu, Jul 28, 2016 at 8:29 PM, Scott Lowe <scott.lowe at scottlowe.org> wrote:
Please see my responses inline, prefixed by [SL].


On Jul 28, 2016, at 2:43 AM, Vaidyanath Manogaran <vaidyanath.m at gmail.com> wrote:
>
> 1- Controller node
>    Services - keystone, glance, neutron, nova
>    neutron plugins used - vmware-nsx - https://github.com/openstack/vmware-nsx/
>    neutron agents - openvswitch agent
> 2- compute node
>    Services - nova-compute


[SL] May I ask what version of NSX you're running?
[MV] I have installed it from source, picked up from the GitHub stable/mitaka branch - https://github.com/openstack/vmware-nsx/tree/stable/mitaka

> I have all the services up and running, but when I provision a VM it does not pick up the IP address offered by the DHCP server.


[SL] NSX doesn't currently handle DHCP on its own, so you'll need the Neutron DHCP agent running somewhere. Wherever it's running will need to have OVS installed and be registered into NSX as a "hypervisor" so that the DHCP agent can be plumbed into the overlay networks.

One common arrangement is to build a Neutron "network node" that is running the DHCP agent and metadata agent, and register that into NSX.
[MV] I have set up only the controller, with the Neutron metadata agent and the Neutron DHCP agent:

root at controller:~# neutron agent-list
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
| id                                   | agent_type     | host       | alive | admin_state_up | binary                 |
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
| 5555dbd8-14d0-4a47-83bd-890737bcfe08 | DHCP agent     | controller | :-)   | True           | neutron-dhcp-agent     |
| f183a3b6-b065-4b90-b5b7-b3d819c30f5b | Metadata agent | controller | :-)   | True           | neutron-metadata-agent |
+--------------------------------------+----------------+------------+-------+----------------+------------------------+
root at controller:~#



> here are the config details:-
>
> root at controller:~# neutron net-show test
> +---------------------------+--------------------------------------+
> | Field                     | Value                                |
> +---------------------------+--------------------------------------+
> | admin_state_up            | True                                 |
> | created_at                | 2016-07-28T13:35:22                  |
> | description               |                                      |
> | id                        | be2178a3-a268-47f4-809e-8e0024c6f054 |
> | name                      | test                                 |
> | port_security_enabled     | True                                 |
> | provider:network_type     | vlan                                 |
> | provider:physical_network | dvs                                  |
> | provider:segmentation_id  | 110                                  |
> | router:external           | False                                |
> | shared                    | True                                 |
> | status                    | ACTIVE                               |
> | subnets                   | 5009ec57-4ca7-4e2b-962e-549e6bbee408 |
> | tags                      |                                      |
> | tenant_id                 | ce581005def94bb1947eac9ac15f15ea     |
> | updated_at                | 2016-07-28T13:35:22                  |
> +---------------------------+--------------------------------------+
>
> root at controller:~# neutron subnet-show testsubnet
> +-------------------+------------------------------------------------------+
> | Field             | Value                                                |
> +-------------------+------------------------------------------------------+
> | allocation_pools  | {"start": "192.168.18.246", "end": "192.168.18.248"}  |
> | cidr              | 192.168.18.0/24                                      |
> | created_at        | 2016-07-28T14:56:54                                  |
> | description       |                                                      |
> | dns_nameservers   | 192.168.13.12                                        |
> | enable_dhcp       | True                                                 |
> | gateway_ip        | 192.168.18.1                                         |
> | host_routes       |                                                      |
> | id                | 5009ec57-4ca7-4e2b-962e-549e6bbee408                 |
> | ip_version        | 4                                                    |
> | ipv6_address_mode |                                                      |
> | ipv6_ra_mode      |                                                      |
> | name              | testsubnet                                           |
> | network_id        | be2178a3-a268-47f4-809e-8e0024c6f054                 |
> | subnetpool_id     |                                                      |
> | tenant_id         | ce581005def94bb1947eac9ac15f15ea                     |
> | updated_at        | 2016-07-28T14:56:54                                  |
> +-------------------+------------------------------------------------------+
>
> root at controller:~# ovs-vsctl show
> d516b5b1-db3f-4acd-856c-10d530c58c23
>     Bridge br-dvs
>         Port br-dvs
>             Interface br-dvs
>                 type: internal
>         Port "eth1"
>             Interface "eth1"
>     Bridge br-int
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "tap91d8accd-6d"
>             Interface "tap91d8accd-6d"
>                 type: internal
>     ovs_version: "2.5.0"
>
> root at controller:~# ip netns
> qdhcp-be2178a3-a268-47f4-809e-8e0024c6f054
>
> root at controller:~# ip netns exec qdhcp-be2178a3-a268-47f4-809e-8e0024c6f054 ifconfig
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           inet6 addr: ::1/128 Scope:Host
>           UP LOOPBACK RUNNING  MTU:65536  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>
> tap91d8accd-6d Link encap:Ethernet  HWaddr fa:16:3e:7f:5e:03
>           inet addr:192.168.18.246  Bcast:192.168.18.255  Mask:255.255.255.0
>           inet6 addr: fe80::f816:3eff:fe7f:5e03/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)
>
> root at controller:~# ping 192.168.18.246
> PING 192.168.18.246 (192.168.18.246) 56(84) bytes of data.
> ^C
> --- 192.168.18.246 ping statistics ---
> 20 packets transmitted, 0 received, 100% packet loss, time 18999ms
>
> I don't have any agents running, because vmware_nsx should be taking care of the communication with Open vSwitch.
>
> Commandline: apt install openvswitch-switch
> Install: openvswitch-switch:amd64 (2.5.0-0ubuntu1~cloud0), openvswitch-common:amd64 (2.5.0-0ubuntu1~cloud0, automatic)
>

[SL] You need to ensure you are using the version of OVS that is matched against your version of NSX. At this time, I don't believe it's OVS 2.5.0 (as noted in your command-line installation of OVS).
[MV] How do I ensure the supported version is installed? Is there a support matrix? If so, could you please share it?
--
Scott



--
Regards,

Vaidyanath
+91-9483465528(M)



--
Regards,

Vaidyanath
+91-9483465528(M)



--
Regards,

Vaidyanath
+91-9483465528(M)



--
Regards,

Vaidyanath
+91-9483465528(M)



--
Regards,

Vaidyanath
+91-9483465528(M)