[Openstack-operators] [openstack][openstack-ansible] VMs not able to access external n/w

Amit Kumar ebiibe82 at gmail.com
Thu Mar 16 08:57:52 UTC 2017


Hi All,

I have deployed the OpenStack Newton release using OpenStack-Ansible 14.0.8,
with the target hosts (Controller and Compute) running Ubuntu 16.04. I want
the VMs (instances on the Compute node) to be able to ping/access my external
lab network, but I am not able to achieve that. Here are my environment and
the respective configurations.

Basically, I tried to create the example test network configuration
(https://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/app-config-test.html#test-environment-config)
with some changes in the /etc/network/interfaces files, because each Compute
and Controller node in my setup has only two NICs. As in the example
environment, I have one Compute node and one Controller node. On each
physical node, eth0 is connected to a switch and carries all communication
between the Compute and Controller nodes, while eth1 is connected to my lab
network (192.168.255.XXX). The /etc/network/interfaces files from the
Controller and Compute nodes are attached to this e-mail, along with the
openstack_user_config.yml file I am using.

My requirement is to provide external connectivity to the VMs running inside
the OpenStack environment. Could you please have a look at my network
interfaces files and openstack_user_config.yml to see whether anything in
these configurations is blocking external connectivity for my VMs? A few
things that might help in analyzing these files:

   - My lab network (192.168.255.XXX) is untagged; it does not expect
   VLAN-tagged packets. Do I therefore need to create a flat external
   network? As you can see in my openstack_user_config.yml, I have commented
   out the flat network section under provider networks. I did that because,
   when I first built this setup, I was unable to launch VMs; after a
   discussion on the openstack-ansible channel and a look at the logs, it
   turned out that the "eth12" interface did not exist on the Compute node,
   which was causing the errors. The folks on the openstack-ansible channel
   suggested commenting out the flat network configuration in
   openstack_user_config.yml, re-configuring neutron, and trying again.
   After that I was able to launch VMs. But now that I need to ping/access
   the outside lab network, it seems the flat network configuration is
   required again. Please suggest what changes are needed so that the flat
   network configuration works this time (see my attempt sketched after
   this list).
   - One more thing: if you look at my network interfaces files, br-vlan has
   eth0 as its bridge port, but eth0 is not connected to the outside world,
   i.e. the lab network. Shouldn't br-vlan use eth1 as its bridge port
   instead of eth0, given that br-vlan is supposed to provide connectivity
   to the external network? Strangely, when I remove eth0 from br-vlan and
   add eth1 to br-vlan on both the Controller and Compute nodes, after some
   time these hosts can no longer reach or ping any other lab machine on the
   192.168.255.XXX network (and vice versa), whereas with eth0 in br-vlan
   the Compute and Controller nodes can reach the lab machines and vice
   versa.
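
Based on the test environment example linked above, I assume the flat
provider network entry in openstack_user_config.yml (under global_overrides
-> provider_networks) would need to look roughly like the sketch below; this
is only my guess, so please correct me if it is wrong. In particular, I
understand that host_bind_override must name an interface that really exists
on the host, which is why the example's "eth12" (a veth created on the
example hosts) was missing in my environment.

    # Hypothetical flat provider network entry (my assumption, adapted
    # from the Newton test environment example)
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        # Must be an interface that actually exists on the host and can
        # reach the lab network; "eth12" here is the placeholder veth
        # name from the published example, not something present on my
        # hosts.
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent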


Just to include more information, in case it is required, here are the
commands I used to create the networks.
*My VMs are on INTERNAL_NET1, created using the following commands:*
 - openstack network create --provider-network-type vlan INTERNAL_NET1
 - openstack subnet create INTERNAL_SUBNET_1_1 --network INTERNAL_NET1 --subnet-range 192.168.2.0/24

*The external network (GATEWAY_NET) was created using the following commands:*
 - neutron net-create --provider:physical_network=flat --provider:network_type=flat --shared --router:external=true GATEWAY_NET
 - neutron subnet-create GATEWAY_NET 192.168.255.0/24 --name GATEWAY_SUBNET --gateway=192.168.255.1 --allocation-pool start=192.168.255.81,end=192.168.255.100

*Earlier I had tried --provider:physical_network=vlan
--provider:network_type=vlan, and that did not work because the lab network
apparently does not expect VLAN-tagged packets. That is why I am now
thinking of using a flat network; my planned commands are sketched below.*
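
Once the flat provider network is configured, I assume I would re-create the
external network roughly as follows (this is only a sketch; I am assuming
the --provider-physical-network value has to match the net_name, "flat", of
the flat entry in provider_networks, so please correct me if that is wrong):

 # Hypothetical re-creation of GATEWAY_NET as a flat external network
 openstack network create --share --external \
     --provider-network-type flat \
     --provider-physical-network flat \
     GATEWAY_NET
 openstack subnet create GATEWAY_SUBNET --network GATEWAY_NET \
     --subnet-range 192.168.255.0/24 --gateway 192.168.255.1 \
     --allocation-pool start=192.168.255.81,end=192.168.255.100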

*The router was created, and its gateway and interface were set, using the
following commands:*
 - neutron router-create NEUTRON-ROUTER
 - neutron router-gateway-set NEUTRON-ROUTER GATEWAY_NET
 - neutron router-interface-add NEUTRON-ROUTER INTERNAL_SUBNET_1_1
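
Before testing from a VM, I am planning to check external reachability from
the router namespace; I assume the namespace lives wherever the L3 agent
runs (the neutron agents container on the controller in my case) and is
named qrouter-<router-id>:

 # Hypothetical connectivity check from the router namespace
 ip netns list
 ip netns exec qrouter-<router-id> ping -c 3 192.168.255.1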

Thanks.

Regards,
Amit
-------------- next part --------------
# Controller Node (Infra).
# This illustrates the configuration of the first
# Infrastructure host and the IP addresses assigned should be adapted
# for implementation on the other hosts.
#
# After implementing this configuration, the host will need to be
# rebooted.

# Physical interface
auto eth0
iface eth0 inet manual

# Container/Host management VLAN interface
auto eth0.10
iface eth0.10 inet manual
    vlan-raw-device eth0

# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto eth0.30
iface eth0.30 inet manual
    vlan-raw-device eth0

# Storage network VLAN interface (optional)
auto eth0.20
iface eth0.20 inet manual
    vlan-raw-device eth0

# Container/Host management bridge
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eth0.10
    address 172.29.236.11
    netmask 255.255.252.0
    gateway 172.29.236.1
    dns-nameservers 8.8.8.8 8.8.4.4

# OpenStack Networking VXLAN (tunnel/overlay) bridge
#
# Only the COMPUTE and NETWORK nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
auto br-vxlan
iface br-vxlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eth0.30

# compute1 VXLAN (tunnel/overlay) bridge config
#auto br-vxlan
#iface br-vxlan inet static
#    bridge_stp off
#    bridge_waitport 0
#    bridge_fd 0
#    bridge_ports eth0.30
#    address 172.29.240.12
#    netmask 255.255.252.0

# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eth0

# Create veth pair, do not abort if already exists
    pre-up ip link add br-vlan-veth type veth peer name eth1 || true
# Set both ends UP
    pre-up ip link set br-vlan-veth up
    pre-up ip link set eth1 up
# Delete veth pair on DOWN
    post-down ip link del br-vlan-veth || true
    bridge_ports br-vlan-veth

# Storage bridge (optional)
#
# Only the COMPUTE and STORAGE nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
auto br-storage
iface br-storage inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eth0.20

# compute1 Storage bridge
#auto br-storage
#iface br-storage inet static
#    bridge_stp off
#    bridge_waitport 0
#    bridge_fd 0
#    bridge_ports eth0.20
#    address 172.29.244.12
#    netmask 255.255.252.0

# The eth1 external network interface
auto eth1
iface eth1 inet static
    address 192.168.255.45
    netmask 255.255.255.0
    gateway 192.168.255.1
    dns-nameservers 192.168.0.37 192.168.0.40

# The loopback network interface
auto lo
iface lo inet loopback

source /etc/network/interfaces.d/*
source /etc/network/interfaces.d/*.cfg
-------------- next part --------------
# Compute Node.
# This illustrates the configuration of the first
# Compute host and the IP addresses assigned should be adapted
# for implementation on the other hosts.
#
# After implementing this configuration, the host will need to be
# rebooted.

# Physical interface
auto eth0
iface eth0 inet manual

# Container/Host management VLAN interface
auto eth0.10
iface eth0.10 inet manual
    vlan-raw-device eth0

# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto eth0.30
iface eth0.30 inet manual
    vlan-raw-device eth0

# Storage network VLAN interface (optional)
auto eth0.20
iface eth0.20 inet manual
    vlan-raw-device eth0

# Container/Host management bridge
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eth0.10
    address 172.29.236.12
    netmask 255.255.252.0
    gateway 172.29.236.1
    dns-nameservers 8.8.8.8 8.8.4.4

# OpenStack Networking VXLAN (tunnel/overlay) bridge
#
# Only the COMPUTE and NETWORK nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
#auto br-vxlan
#iface br-vxlan inet manual
#    bridge_stp off
#    bridge_waitport 0
#    bridge_fd 0
#    bridge_ports eth0.30

# compute1 VXLAN (tunnel/overlay) bridge config
auto br-vxlan
iface br-vxlan inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eth0.30
    address 172.29.240.12
    netmask 255.255.252.0

# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eth0

# Create veth pair, do not abort if already exists
    pre-up ip link add br-vlan-veth type veth peer name eth1 || true
# Set both ends UP
    pre-up ip link set br-vlan-veth up
    pre-up ip link set eth1 up
# Delete veth pair on DOWN
    post-down ip link del br-vlan-veth || true
    bridge_ports br-vlan-veth

# Storage bridge (optional)
#
# Only the COMPUTE and STORAGE nodes must have an IP address
# on this bridge. When used by infrastructure nodes, the
# IP addresses are assigned to containers which use this
# bridge.
#
#auto br-storage
#iface br-storage inet manual
#    bridge_stp off
#    bridge_waitport 0
#    bridge_fd 0
#    bridge_ports eth0.20

# compute1 Storage bridge
auto br-storage
iface br-storage inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eth0.20
    address 172.29.244.12
    netmask 255.255.252.0

# The eth1 external network interface
auto eth1
iface eth1 inet static
    address 192.168.255.44
    netmask 255.255.255.0
    gateway 192.168.255.1
    dns-nameservers 192.168.0.37 192.168.0.40

# The loopback network interface
auto lo
iface lo inet loopback

source /etc/network/interfaces.d/*
source /etc/network/interfaces.d/*.cfg
-------------- next part --------------
Attachment: openstack_user_config.yml (application/octet-stream, 3043 bytes)
URL: <http://lists.openstack.org/pipermail/openstack-operators/attachments/20170316/65544b0c/attachment-0001.obj>

