[Openstack] [neutron] issues with internal GRE networks under virtual box

George Shuklin george.shuklin at gmail.com
Thu Nov 21 14:59:15 UTC 2013


Good day.

I've successfully installed and configured a bare-metal deployment of
OpenStack (1 compute, 1 controller and 1 neutron server).

But then I was asked to reproduce the configuration under VirtualBox (with
software QEMU as the hypervisor). It mostly works, but I'm completely
stuck with Neutron internal networks.

My configuration:

Three servers:
controller
compute1
neutron

eth2 on all servers is the 'vm data network', designated for the GRE
traffic. It is an 'internal network' in the VirtualBox settings (all three
VMs run on the same VirtualBox host).
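
One VirtualBox peculiarity I know of: an 'internal network' adapter has
promiscuous mode set to 'deny' by default, so frames not addressed to the
adapter's own MAC are dropped. I'm not sure whether that matters for
unicast GRE, but here is a sketch of how it could be relaxed (VM names
are from this setup; the adapter number 3 for eth2 is an assumption):

```shell
# On the VirtualBox host. Assumption: adapter 3 is the NIC attached
# to the internal 'vm data' network (eth2 inside the guests).
for vm in controller compute1 neutron; do
    # modifyvm requires the VM to be powered off; for a running VM use
    # "VBoxManage controlvm $vm nicpromisc3 allow-all" instead.
    VBoxManage modifyvm "$vm" --nicpromisc3 allow-all
done

# Verify the setting:
VBoxManage showvminfo compute1 | grep -i 'NIC 3'
```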

Pings fly successfully between virtual machines on compute1, but
according to tcpdump they never reach eth2 (and never reach the
neutron node).
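
For reference, the tcpdump invocations I used to check this (interface
names as above):

```shell
# On compute1: GRE is IP protocol 47, so this should show the
# encapsulated tenant traffic if it ever reaches the data interface.
tcpdump -n -e -i eth2 proto gre

# The unencapsulated side, for comparison. br-int is an internal OVS
# port; bring it up first if needed ("ip link set br-int up").
tcpdump -n -i br-int arp or icmp
```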

My settings:

interfaces:

iface eth2 inet static
#vm-data (neutron) network
         address 192.168.22.11
         netmask 255.255.255.0

OVS plugin configuration for neutron:

[DEFAULT]
debug=True
verbose=True
rpc_backend=neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
notification_driver = neutron.openstack.common.notifier.rpc_notifier
[quotas]
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[database]
connection = mysql://neutron:****@controller/neutron
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = *****
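
For what it's worth, the GRE-specific agent settings (enable_tunneling,
tunnel_id_ranges and local_ip in the [ovs] section, tunnel_types in
[agent]) would live in this same file on each node; a quick sanity check
(path as packaged for Havana on my systems, may differ on yours):

```shell
# On each node running the OVS agent: confirm the tunnel settings
# are actually present and point at the 192.168.22.x addresses.
grep -E 'enable_tunneling|tunnel_id_ranges|tunnel_types|local_ip' \
    /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
```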

When two virtual machines (connected to same internal network) are 
running, ovs-vsctl show displays:

ovs-vsctl show
37f693d4-b1c5-46fd-9278-fe99153e0aa8
     Bridge br-tun
         Port "gre-1"
             Interface "gre-1"
                 type: gre
                options: {in_key=flow, local_ip="192.168.22.11", out_key=flow, remote_ip="192.168.22.2"}
         Port br-tun
             Interface br-tun
                 type: internal
         Port patch-int
             Interface patch-int
                 type: patch
                 options: {peer=patch-tun}
     Bridge br-ex
         Port phy-br-ex
             Interface phy-br-ex
         Port "eth0"
             Interface "eth0"
         Port br-ex
             Interface br-ex
                 type: internal
     Bridge br-int
         Port "qvo886390bf-84"
             tag: 1
             Interface "qvo886390bf-84"
         Port patch-tun
             Interface patch-tun
                 type: patch
                 options: {peer=patch-int}
         Port br-int
             Interface br-int
                 type: internal
         Port "qvo559b9c00-20"
             tag: 1
             Interface "qvo559b9c00-20"
         Port int-br-ex
             Interface int-br-ex
     ovs_version: "1.10.2"

When I start to ping a router (which resides on the neutron node), I see
traffic (unanswered ARP requests to the gateway) on br-int, but none on
br-tun or eth2.
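
To narrow down where the packets stop, the flow tables and port counters
on the tunnel bridge can be inspected with the standard Open vSwitch
tools:

```shell
# Flow rules and their packet counters on the tunnel bridge;
# all-zero counters on the rules matching the patch-int port would
# mean nothing is arriving from br-int at all.
ovs-ofctl dump-flows br-tun

# Per-port rx/tx statistics, including the gre-1 tunnel port:
ovs-ofctl dump-ports br-tun
```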

Compute1 can ping neutron (pings from 192.168.22.11 reach 192.168.22.2),
but the GRE traffic seems to die somewhere before reaching the physical
interface.

Please help, thanks.



