[Openstack] Flat networking, L2 access and externally assigned IP addresses

Stuart Longland stuartl at vrt.com.au
Fri Sep 27 10:42:35 UTC 2013


Hi all,

I've got a silly question regarding flat networking in OpenStack.
It looks like it'll do what we want, but I'd like some further
information.  Apologies if this has been answered before; I did look,
but I didn't find anything definitive.

We're in the process of building a private cloud.  Our network
currently looks much like the attached diagram.  We have: two storage
nodes running the ceph-osd service; three management nodes running
ceph-osd, MariaDB+Galera, RabbitMQ, Keystone, Glance, etc.; and a
couple of compute nodes.

All nodes run Ubuntu 12.04 LTS (amd64) on Intel Core i3-3220T CPUs at
2.8GHz.

The storage nodes have 8GB RAM, 2×3TB hard drives, a 60GB SSD and two
on-board network interfaces, which will be joined using LACP.  The
management nodes have 8GB RAM and a 60GB SSD, and may gain 10GbE
network cards later for things like Glance.

The compute nodes have a 60GB SSD, two on-board network cards and a
third PCIe network card: the two on-board ones will be joined using
LACP for back-end communications, and the third used as the "public"
interface.
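
For what it's worth, the bonds will be plain 802.3ad via Ubuntu's
ifenslave; a sketch of the /etc/network/interfaces stanza we have in
mind (interface names and addresses below are just examples):

    auto bond0
    iface bond0 inet static
        address 192.168.0.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1   # the two on-board NICs
        bond-mode 802.3ad       # LACP
        bond-miimon 100
        bond-lacp-rate fast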

The Flat DHCP network manager looks like it'll be great for some of
our projects.  I'm not sure how it does its magic on IPv6, but on IPv4
(as I understand it) the VMs are placed on a network that's private to
the node running them, and the compute node simply routes the traffic,
performing NAT where necessary.
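
If I'm reading the docs right, the relevant nova.conf settings are
roughly these (the interface names and the fixed range are made-up
examples, not our real values):

    # nova.conf on each compute node (sketch, untested as shown)
    network_manager = nova.network.manager.FlatDHCPManager
    flat_network_bridge = br100
    flat_interface = eth1      # back-end NIC carrying the flat network
    public_interface = eth0    # NIC that holds the floating IPs
    fixed_range = 10.0.0.0/24  # private range handed out to the VMs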

To make a VM accessible from outside, a floating IP is assigned to the
public interface and all connections to it are DNATed to the VM.
Again, I'm not sure how that voodoo works in IPv6-land, where NAT
doesn't exist, but I digress.
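
As I understand it, the net effect is roughly these iptables rules on
the compute node (chain names from memory; 203.0.113.10 and 10.0.0.3
are invented floating and fixed addresses):

    # inbound: connections to the floating IP are DNATed to the VM
    iptables -t nat -A nova-network-PREROUTING -d 203.0.113.10 \
        -j DNAT --to-destination 10.0.0.3
    # outbound: the VM's traffic is SNATed back to the floating IP
    iptables -t nat -A nova-network-float-snat -s 10.0.0.3 \
        -j SNAT --to-source 203.0.113.10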

This seems fine, but for some of our intended applications (Samba
file sharing comes to mind) it would be best if the machine were
bridged directly onto the same Ethernet segment as its clients, and
either statically assigned an IP address chosen by us network admins
or given one via DHCP.

As it happens, one of our legacy servers already fills the role of
DHCP server, handles dynamic DNS updates, and works well, so I'd like
to keep using it.

Maybe I haven't uncovered the magic bit of documentation yet, but no
matter what I do, OpenStack seems to want to be in charge of the IP
addressing.

If I bridge the compute node interface that shares the Ethernet
segment with the DHCP server into the flat-network bridge (the docs
call this "br100"), our DHCP server does indeed respond to the VM's
DHCPREQUESTs, and the VM allegedly gets the reply, but then networking
falls flat: there's firewalling in place that prevents the VM from
passing traffic on that IP address, simply because it isn't the IP
that OpenStack was expecting the VM to use.
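
Concretely, what I tried on the compute node was along these lines
(eth2 being the NIC on the clients' VLAN; the name is just an
example):

    # graft the VLAN-facing NIC into the OpenStack flat-network bridge
    brctl addif br100 eth2
    # the per-instance rules that then drop the "wrong" IP's traffic
    # show up with something like:
    iptables -S | grep -i inst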

I just want to plug the VM into the VLAN via a bridge and let the VM
have full L2 access.

I've also tried tinkering with the firewall driver; choosing the
NoopFirewallDriver just broke things entirely: no connectivity
whatsoever, even though the VM was allegedly bridged to br100.
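
For the record, the setting I changed in nova.conf was (assuming I've
got the class path right):

    firewall_driver = nova.virt.firewall.NoopFirewallDriver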

How does one give a guest full L2 network access to a bridge defined on
the host?

Regards,
-- 
Stuart Longland
Contractor
     _ ___
\  /|_) |                           T: +61 7 3535 9619
 \/ | \ |     38b Douglas Street    F: +61 7 3535 9699
   SYSTEMS    Milton QLD 4064       http://www.vrt.com.au
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cluster.png
Type: image/png
Size: 8955 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack/attachments/20130927/e93b4c89/attachment.png>

