[openstack-dev] [TripleO] isolated networks: teasing apart the overcloud traffic
Dan Prince
dprince at redhat.com
Tue May 12 14:34:53 UTC 2015
Hi,
I wanted to send an email out to introduce a new idea in TripleO about
how we can go about configuring isolated networks for our baremetal
Overcloud traffic. The thoughts here are largely about the ability to
split out the baremetal Overcloud traffic via our tooling (Heat
templates).
What TripleO does today:
From an Overcloud configuration standpoint, TripleO today is largely
built around a flat network we call the control plane. It acts as the
provisioning network. We run our tenant traffic on it via a GRE or VXLAN
tunnel. The internal API traffic runs on it. There is a lot going on on
this network. We've got the ability to wire in a separate public network
IP on the controller machines, but besides this the network is largely a
single control plane.
What we would like:
We would like the ability to split out some of the traffic onto isolated
networks. Some examples: It would be nice to have a storage network that
is dedicated to storage traffic. This could cover all things Ceph,
Glance, and Swift related. An isolated Tenant network might be nice. Or
a dedicated network for all the internal API communication. These are
some examples... We would like some common cases to be well represented
upstream... but things should be very flexible with regards to both the
networks and how things are wired up to the physical NICs.
-------------
There is an etherpad [1] where some of us have been hacking around some
of these things. The developing set of patches allows us to:
1) Create some extra Neutron networks via Heat. These are used largely
for IPAM, to help assign the required IP address ranges via Heat using
OS::Neutron::Port resources.
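As a rough sketch of what such an IPAM-only network might look like in a
Heat template (the resource names and CIDR here are illustrative
assumptions, not taken from the actual patches):

```yaml
# Hypothetical Heat snippet: a storage network used purely for IPAM.
# Names and the CIDR are illustrative assumptions.
resources:
  StorageNetwork:
    type: OS::Neutron::Net
    properties:
      name: storage

  StorageSubnet:
    type: OS::Neutron::Subnet
    properties:
      network: {get_resource: StorageNetwork}
      cidr: 172.16.1.0/24
      enable_dhcp: false
```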
2) Port resources, which are (currently) represented as nested stacks.
These resources allow the network port assignments to be opt-in per
network. If you don't want a "tenant" network... no problem. Just don't
configure it in your resource registry and traffic will continue to run
on the ctlplane.
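The opt-in behavior would be driven by the resource registry in an
environment file, along these lines (the resource type names and
template paths below are illustrative assumptions):

```yaml
# Hypothetical environment file: map a port resource to a real nested
# stack to enable that network; leave it unmapped (or point it at a
# noop template) and the traffic stays on the ctlplane.
resource_registry:
  OS::TripleO::Controller::Ports::StoragePort: ../network/ports/storage.yaml
  # OS::TripleO::Controller::Ports::TenantPort left unmapped -> ctlplane
```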
3) ServiceMap and NetworkIpMap resources that help us assign the
correct IP addresses to each service. Using this approach we can, for
example, have the Neutron ML2 plugin set its local_ip to an address on
the correct network, thus diverting where the tenant traffic runs.
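Conceptually, wiring a per-network IP into a service setting might look
something like this (the NetIpMap attribute and hiera key below are
illustrative assumptions based on the approach described above):

```yaml
# Hypothetical sketch: pull the tenant-network IP out of the IP map
# and feed it to the ML2 agent's local_ip via hieradata.
  ControllerConfig:
    type: OS::Heat::StructuredConfig
    properties:
      config:
        neutron::agents::ml2::ovs::local_ip:
          {get_attr: [NetIpMap, net_ip_map, tenant]}
```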
The end result looks something like this:
https://review.openstack.org/#/c/178716/
We will eventually want to do this for all sorts of service
endpoints so their traffic can be isolated on a specific network.
Would love to hear feedback on the approach. I know it is probably late
but it would be nice to fit a talk about this into one of the TripleO
sessions at the summit as well.
Dan
[1] https://etherpad.openstack.org/p/tripleo-service-endpoints