[Openstack] OpenVSwitch ports won't come up / Neutron configuration problems
Uwe Sauter
uwe.sauter.de at gmail.com
Thu Mar 5 10:26:46 UTC 2015
Hi all,
I'm trying to set up Neutron (Juno) on CentOS 7 from RDO but keep failing. So here are some items I'd like to get your opinion on:
(A graphical overview created with plotnetcfg (https://github.com/jbenc/plotnetcfg) and gimp'ed together is attached.)
* Hostnames
All my nodes have a native hostname that describes the location in my rack. E.g. os484001 means this is an
OpenStack member in rack 48, height unit 40, first node in this height unit. They then get additional hostnames
describing their function like "neutron-controller", "cinder01", etc., but only on the DNS server.
Is this a problem? Do nodes need to be able to resolve all their hostnames from /etc/hosts? Or should services
be configured with the native hostname (the output of "hostname -s", or even socket.getfqdn() in Python)?
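For reference, these are the two name sources I mean; this is just an illustration of the difference, not anything Neutron-specific:

```python
import socket

# Short hostname, roughly what "hostname -s" prints:
short_name = socket.gethostname().split(".")[0]

# Fully qualified name, as resolved through DNS / /etc/hosts:
fqdn = socket.getfqdn()

print(short_name)
print(fqdn)
```

On my nodes these differ (os484001 vs. os484001.&lt;domain&gt;), which is why I'm unsure which one services should register with.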
* Static network interface configuration
I have configured all network interfaces using /etc/sysconfig/network-scripts/ifcfg-<name>. Those that should be
part of OVS are configured like:
/etc/sysconfig/network-scripts/ifcfg-enp6s0
NAME=enp6s0
HWADDR=00:21:5E:75:70:FA
TYPE=OVSPort
DEVICETYPE=ovs
ONBOOT=yes
BOOTPROTO=none
OVS_BRIDGE=br_tun
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_FAILURE_FATAL=no
IPV6_PEERDNS=no
IPV6_PEERROUTES=no
NM_CONTROLLED=no
NOZEROCONF=yes
Additionally I have configured all needed OVS bridges on compute and network nodes like:
(bridge names contain underscores instead of dashes because the management scripts don't allow
dashes in names)
/etc/sysconfig/network-scripts/ifcfg-br_tun
DEVICE=br_tun
ONBOOT=yes
BOOTPROTO=static
DEVICETYPE=ovs
TYPE=OVSBridge
NOZEROCONF=yes
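As I understand it, the ifup-ovs initscript turns these two files into something roughly like the following ovs-vsctl calls (a sketch of my assumption about the stock initscripts behavior, not a verified trace):

```shell
# Roughly what ifup should do for ifcfg-br_tun and ifcfg-enp6s0:
ovs-vsctl --may-exist add-br br_tun        # create the OVSBridge
ovs-vsctl --may-exist add-port br_tun enp6s0   # attach the OVSPort
ip link set enp6s0 up                      # bring the uplink up
```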
Is this the correct way to configure OVS, or do I have to rely on the OpenStack / OVS database to restore the configuration
after a reboot?
* With the above configuration I have the problem that both hardware interfaces and some OVS bridges are not brought up after a reboot.
What am I doing wrong?
* I want to set up Neutron so that tenant networks are separated by VLANs. Unfortunately the documentation on this is sparse to
nonexistent, so I get the feeling that GRE tunneling is the best-supported way to separate networks.
In this VLAN scenario, which OVS bridge needs to be connected to the physical interface (and therefore to the 802.1q-enabled switch):
br-int or br-tun (which in my case would be br_enp6s0 on the compute nodes and br_tun on the network node)?
What else do I need to configure besides:
/etc/neutron/plugin.ini
[ml2]
mechanism_drivers = openvswitch
tenant_network_types = local,vlan
type_drivers = local,vlan
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = phys-tenant:1501:1510
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = True
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
bridge_mappings = phys-external:br_ext,phys-tenant:br_tun
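For the bridge_mappings above, my assumption is that each named bridge must exist in OVS and that the tenant bridge carries the physical trunk port, i.e. something like (using my interface names; whether this is the intended wiring is exactly what I'm unsure about):

```shell
# Provider bridges referenced by bridge_mappings (sketch, my naming):
ovs-vsctl --may-exist add-br br_ext
ovs-vsctl --may-exist add-br br_tun
# Attach the 802.1q trunk port that carries VLANs 1501-1510:
ovs-vsctl --may-exist add-port br_tun enp6s0
```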
I'd appreciate every hint.
Regards,
Uwe
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Neutron_setup.png
Type: image/png
Size: 271296 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20150305/6456b114/attachment.png>