[openstack-dev] [Quantum][Networking] Attaching instances to multiple physical host networks

Robert Collins robertc at robertcollins.net
Fri Jun 7 04:14:07 UTC 2013


On 7 June 2013 10:35, Brian Cline <bcline at softlayer.com> wrote:
> Howdy,


> So far I’ve tried a few different scenarios:
>
>   (a) using OVS with a gre network type for public and private, which was
> able to do everything I need except reach the other 10.x subnets, as all
> non-local packets are being sent out over eth1/br-ext (public). From there I
> can’t figure out the missing piece to get any other non-Quantum 10.0.0.0/8
> packets to go out on br-priv/eth0;
>
>
>
>   (b) using OVS with a flat external network type for private and keeping
> the existing gre for public; it seems this worked on the private network,
> but unfortunately it takes an instance over 1.5 hours to boot to test
> because of pretty ridiculous wget timeouts on the cloud-init-like bits in
> CirrOS. Still waiting on my first test to come up.


>
> I’ve scoured every bit of documentation I can find, and this sort of setup
> seems elusive, but I can’t be the only one that’s needed to do this. Is
> there something major I’ve missed here? Glad to post more details on
> configuration, just want to see what upfront questions there are first.


There are several ways I know of to do this.
But first let's look at the network setup:
public space - default route
openstack instance space - a dedicated 10.x range
metadata access - 169.254.169.254
other private resources - 10/8

Implementation-wise, you can either aim to avoid any special config
within instances, or accept instance-specific config. Note that since
metadata access happens very early in boot, if you do have config within
instances then you either need custom images / file injection, or you
need to constrain the special config to non-metadata access.

If you want Quantum to arbitrate access to the public/private space
then you can run with one NIC in your VMs. Otherwise you can put the
public network and the private networks on separate vNICs. Assuming
you want to run simple stock cloud images, you need the router on your
instances' default route to handle metadata requests.

So, this looks like:
vnic 0 - public address space
vnic 0 default router DNATs 169.254.169.254 -> your nova API metadata endpoint
vnic 1 - openstack instance space
vnic 1 default router has access to 10.*
You'd need to manually fix up your route table in instances - they
would boot OK, but both vnic 0 and vnic 1 would be offered default
routes. Or you could run a dynamic routing protocol on each node, but
both approaches have equivalent complexity: you need to fiddle inside
your instances :)
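
For the in-instance fix-up, a minimal sketch would be something like
this (the interface names and the 10.20.0.1 router address are
assumptions - substitute whatever your vnic 1 subnet actually uses):

  # inside the instance: keep the public default route on eth0 (vnic 0)
  ip route del default dev eth1
  # and send the wider 10/8 ranges via the vnic 1 router
  ip route add 10.0.0.0/8 via 10.20.0.1 dev eth1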

Or:
vnic 0 - openstack address space
default router DNATs 169.254.169.254
default router forwards traffic to private ranges
default router SNATs public traffic.
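
On the router side the NAT pieces would look roughly like this (a
sketch only - the eth1 uplink name and the 10.0.0.5:8775 nova metadata
endpoint are assumptions, and with quantum's l3-agent these rules live
inside the router namespace):

  # redirect metadata requests to the nova metadata API
  iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
      -j DNAT --to-destination 10.0.0.5:8775
  # SNAT anything leaving the public uplink that isn't bound for 10/8
  iptables -t nat -A POSTROUTING -o eth1 ! -d 10.0.0.0/8 -j MASQUERADE
  # and make sure the box actually forwards
  sysctl -w net.ipv4.ip_forward=1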

To do this config, I would create three networks in quantum -
public [flat provider], private [flat provider], openstack [gre overlay];
create a router and add all three networks to it. Disable DHCP on the
two exterior networks. When booting instances, ask for two NICs - one on
the openstack space and one on the public network - for the first
scenario, or just one on the openstack network [which you should make
public to have it be the default] for the second. Set up appropriate
routes on the network node to separate out private and public traffic -
though the 10/8 subnet on the public network should be sufficient.
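
In CLI terms that's roughly the following (a sketch only - the
physical network labels, CIDRs and names are made up, and exact flag
spellings vary between quantum client releases):

  quantum net-create public --provider:network_type flat --provider:physical_network physnet-pub
  quantum net-create private --provider:network_type flat --provider:physical_network physnet-priv
  quantum net-create openstack --provider:network_type gre --provider:segmentation_id 1
  # DHCP off on the two exterior networks, on for the overlay
  quantum subnet-create --name pub-sub --disable-dhcp public 203.0.113.0/24
  quantum subnet-create --name priv-sub --disable-dhcp private 10.50.0.0/16
  quantum subnet-create --name os-sub openstack 10.60.0.0/16
  quantum router-create edge-router
  quantum router-interface-add edge-router pub-sub
  quantum router-interface-add edge-router priv-sub
  quantum router-interface-add edge-router os-sub
  # first scenario: boot with a NIC on each of the openstack and public networks
  nova boot --image cirros --flavor m1.tiny \
      --nic net-id=<openstack-net-id> --nic net-id=<public-net-id> test-vm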

In both cases, your quantum network node needs to be running the
quantum metadata agent - check it's all working by looking for the
quantum-ns-metadata-proxy process, which should be running and handing
intercepted requests across via a named pipe.
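
A quick way to verify the chain (config option names here are from
memory, so treat them as assumptions):

  # the per-namespace proxy spawned for each router/dhcp namespace
  ps -ef | grep quantum-ns-metadata-proxy
  # the agent it hands requests to
  ps -ef | grep quantum-metadata-agent
  # /etc/quantum/metadata_agent.ini should point at nova, e.g.
  #   nova_metadata_ip = <controller address>
  #   metadata_proxy_shared_secret = <shared secret>
  # and from inside an instance:
  curl http://169.254.169.254/latest/meta-data/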

HTH,
Rob



-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Cloud Services


