[Openstack] Issues Understanding Neutron Networking Layout
d.lake at surrey.ac.uk
Thu Dec 21 10:23:00 UTC 2017
Hello OpenStack and NetVirt
I'd really appreciate some guidance here because I am confused as to how I should be building this.
To explain what I want to do:
* Controller in one location with a single IP connection (1 GE)
* Compute node in a remote location with 4 10GE connections for public networking and 1GE IP connection to the Controller
The VMs on the Compute node will each have 2 10GE connections as they will be forwarding data.
I have successfully deployed a Pike system with ODL with the previously attached local.conf.
However, there seem to be a number of areas of confusion for me:
* In the local.conf for the Control node, why do I need to declare ODL_PROVIDER_MAPPINGS? There will be no connection to data-forwarding networks at the Controller location. However, if I leave these out (I declare physnet1-physnet4 and bridge them to disconnected ports em1-em4 on the Control node), building the VXLAN network with Neutron fails with an error that the provider network does not exist.
* If I declare physnet1-physnet4 on the Compute node, this works, but when I try to add the local networks using Neutron, it appears to use information relevant to the Control node, not the Compute node.
* The only way that I have found I can make any of this work is to essentially build a Layer 2 network layout on the Control node which matches that of the Compute node and is connected to it. This is problematic in my layout because I don't have layer 2 connectivity between the locations with the Control and the Compute nodes.
* Even when that happens, the DHCP server appears to be on the Control node, not the Compute node. That should be fine if the VXLAN tunnel is up, but I've found that the VXLAN flow is only built on the physical layer 2 link between the systems and I don't have any layer 2 links on the Control node.
Also, in local.conf, I seem to have to declare PUBLIC_BRIDGE. I have no idea what this does or how the system would react to having four PUBLIC_BRIDGE entries:
PUBLIC_BRIDGE=br-physnet1
#PUBLIC_PHYSICAL_NETWORK=physnet1,physnet2,physnet3,physnet4
PUBLIC_PHYSICAL_NETWORK=physnet1
#ML2_VLAN_RANGES=physnet1,physnet2,physnet3,physnet4
ML2_VLAN_RANGES=physnet1
#ODL_PROVIDER_MAPPINGS=physnet1:br-physnet1,physnet2:br-physnet2,physnet3:br-physnet3,physnet4:br-physnet4
ODL_PROVIDER_MAPPINGS=physnet1:br-physnet1
The commented-out lines are what I eventually want, but only on the Compute node. Each br-physnetx bridges to its physical interface emx. The Control node really needs no VM-side networking at all.
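To make that concrete, the uncommented Compute-node version of the fragment above would presumably read as follows (a sketch only - I have not been able to test this layout):

```
PUBLIC_PHYSICAL_NETWORK=physnet1,physnet2,physnet3,physnet4
ML2_VLAN_RANGES=physnet1,physnet2,physnet3,physnet4
ODL_PROVIDER_MAPPINGS=physnet1:br-physnet1,physnet2:br-physnet2,physnet3:br-physnet3,physnet4:br-physnet4
```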
I'm probably getting very confused here - what I want is the ability, at the remote location, to have four 10GE connections to a local Layer 2 network and simply have two of those appear on a virtual machine instantiated at that remote location. The physical network ports should be directly bridged to the VM.
I then want the third network to boot up as normal from OpenStack and obtain an internal IP address so that I can ssh to the machine.
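To put the aim in concrete terms, this is roughly what I am trying to do with Neutron - a sketch only, with placeholder names (net-10g-1, sub-10g-1, "private"), not a working configuration:

```shell
# Flat provider network bound to one 10GE physnet, so the VM port is
# effectively bridged straight to the physical NIC.
openstack network create net-10g-1 \
    --provider-network-type flat \
    --provider-physical-network physnet1
openstack subnet create sub-10g-1 --network net-10g-1 \
    --subnet-range 192.0.2.0/24 --no-dhcp

# Boot the VM with two such ports plus a normal tenant network for ssh
# (net-10g-2 would be created the same way from physnet2).
openstack server create forwarder1 --flavor m1.large --image cirros \
    --network net-10g-1 --network net-10g-2 --network private
```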
I'd really appreciate some assistance because I am at the point of not understanding the options here or how to configure them into the DevStack builder.
Thanks
David
From: Lake D Mr (PG/R - Elec Electronic Eng)
Sent: 19 December 2017 22:27
To: 'Trinath Somanchi' <trinath.somanchi at nxp.com>; 'openstack at lists.openstack.org' <openstack at lists.openstack.org>
Subject: RE: Issues Understanding Neutron Networking Layout
I tried sending the log file to the list but it was rejected as too large.
So I've zipped the log file.
David
From: Lake D Mr (PG/R - Elec Electronic Eng)
Sent: 19 December 2017 16:02
To: 'Trinath Somanchi' <trinath.somanchi at nxp.com>; openstack at lists.openstack.org
Subject: RE: Issues Understanding Neutron Networking Layout
OK - the log file from trying to start a new instance on the Compute server (intel-test2) is attached. The Control server is called "23-210".
Line 2017 doesn't look right to me:
Dec 19 15:26:36 23-210 neutron-server[169175]: DEBUG neutron.plugins.ml2.managers [req-e60dabcc-4c78-4ad2-8eed-11c70e5433dc req-ca0a8968-16cf-4548-9654-a248a28268ca service neutron] Attempting to bind port f1702fcc-49e0-4b98-8304-860ee2c436aa on host intel-test2 at level 0 using segments [{'network_id': '3862495f-43d2-4dbc-a67b-ade91d97e141', 'segmentation_id': 1500, 'physical_network': None, 'id': '4d45af9a-72ee-4fef-9042-a5e7debff29b', 'network_type': u'vxlan'}] {{(pid=169276) _bind_port_level /opt/stack/neutron/neutron/plugins/ml2/managers.py:765}}
Segmentation ID 1500 is the VXLAN segment I want to use, but it says 'physical_network': None.
Is this correct?
David
From: Trinath Somanchi [mailto:trinath.somanchi at nxp.com]
Sent: 19 December 2017 10:30
To: Lake D Mr (PG/R - Elec Electronic Eng) <d.lake at surrey.ac.uk>; openstack at lists.openstack.org
Subject: RE: Issues Understanding Neutron Networking Layout
Check /opt/stack/logs ?
/
Trinath Somanchi | HSDC | NXP INDIA
From: d.lake at surrey.ac.uk
Sent: Tuesday, December 19, 2017 3:59 PM
To: Trinath Somanchi <trinath.somanchi at nxp.com>; openstack at lists.openstack.org
Subject: RE: Issues Understanding Neutron Networking Layout
Can you tell me where to look? None of the usual "screen" logs are there with the latest DevStack, and the system doesn't seem to be populating any of the /var/log locations either.
David
From: Trinath Somanchi [mailto:trinath.somanchi at nxp.com]
Sent: 19 December 2017 08:51
To: Lake D Mr (PG/R - Elec Electronic Eng) <d.lake at surrey.ac.uk>; openstack at lists.openstack.org
Subject: RE: Issues Understanding Neutron Networking Layout
Can you check neutron server/agent logs for exact error to debug ?
/
Trinath Somanchi | HSDC | NXP INDIA
From: d.lake at surrey.ac.uk
Sent: Tuesday, December 19, 2017 2:02 PM
To: openstack at lists.openstack.org
Subject: [Openstack] Issues Understanding Neutron Networking Layout
Hello
I'm trying to create a Pike system with Carbon ODL integration using a single Controller node and a single Compute node.
The Controller Node has a single 1GE NIC to the management network. It will not run any compute or network services.
The Compute Node has a single 1GE NIC to the management network and 4 x 10GE NICs for public network access. I will be creating 4 VXLAN networks, 4 routers, and 4 pools of public floating IP addresses.
I have built the machines using DevStack (configs attached).
It seems that even though the Controller node will have no network/compute functions, I still need to declare the ODL and networking parts in the local.conf on the controller. But this implies that the Controller also has to have access to the 10GE network, which I don't want or need.
I have created 4 OVS bridges on both the Controller and the Compute nodes, but the corresponding NICs on the Controller are not connected anywhere. The bridges are br-physnet1 to br-physnet4 and link to em1 to em4 respectively. Only em1 - em4 on the Compute node are actually active.
I then create the VXLAN, router and DHCP agent.
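For reference, those creation steps were along these lines (a sketch with placeholder names; the DHCP agent attaches automatically when the subnet is created):

```shell
openstack network create vxnet1 \
    --provider-network-type vxlan --provider-segment 1500
openstack subnet create vxsub1 --network vxnet1 --subnet-range 10.0.1.0/24
openstack router create router1
openstack router add subnet router1 vxsub1
```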
However, when I try to start an instance on the Compute node post DevStack installation, I hit an issue where the GUI tells me it has been "unable to allocate network."
I see no attempt to create a VXLAN tunnel between the Controller and the Compute node. iptables is open, and a sniff on UDP port 4789 reveals no traffic in either direction.
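(The sniff was along these lines on each side - the interface name em0 is a placeholder for the management NIC:

```shell
sudo tcpdump -ni em0 udp port 4789
```
)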
Does the Controller need to be physically connected to the same public facing networks as the Compute node?
Alternatively, how can I get all the Networking functions to run in the Compute node so that the Controller is just a controller?
Thanks
David