[openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

Salvatore Orlando sorlando at nicira.com
Mon Mar 23 15:52:41 UTC 2015

I think that moving the discussion to whether a pool represents a tenant's
routable address space, or whether we need a new (another?!) API entity to
deal with it, probably does not really fall within the scope of this thread.
I am pretty sure Carl will soon push a specification for address scope
management - and then gerrit will be the place to debate the details.

Neutron does not refuse the idea of using tenants as a logical unit of
isolation. However, the tenant is not the only way of isolating resources.
In this thread we saw people quoting examples where several environments,
isolated from a network perspective, are implemented within the same tenant.
These are usually L3 domains made of one or more L2 segments connected to a
logical router; external network connectivity might or might not be achieved
using NAT.

I think the goal of subnet pools is to use these environments as "units of
isolation" and ensure no overlapping CIDRs there. However, since there is
no way to identify such environments at the API layer, API clients will
need to be diligent and follow a workflow where they create a subnetpool,
and then for every subnet in a given L3 domain always allocate from the
same pool. This is probably not ideal from a usability perspective, but it
nevertheless enables a new use case.
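To make the non-overlap guarantee concrete, here is a toy model of a pool
handing out CIDRs (illustrative Python only, not Neutron's actual allocator;
the class and method names are made up):

```python
import ipaddress

class SubnetPool:
    """Toy model of a subnet pool: hands out non-overlapping CIDRs
    from a set of prefixes. Illustrative only, not Neutron code."""

    def __init__(self, prefixes):
        self.prefixes = [ipaddress.ip_network(p) for p in prefixes]
        self.allocated = []

    def allocate(self, prefixlen):
        # Scan candidate subnets of the requested length and return the
        # first one that does not overlap any previous allocation.
        for prefix in self.prefixes:
            for candidate in prefix.subnets(new_prefix=prefixlen):
                if not any(candidate.overlaps(a) for a in self.allocated):
                    self.allocated.append(candidate)
                    return candidate
        raise ValueError("pool exhausted")

pool = SubnetPool(["10.10.0.0/16"])
a = pool.allocate(24)   # 10.10.0.0/24
b = pool.allocate(24)   # 10.10.1.0/24
assert not a.overlaps(b)
```

As long as every subnet in one L3 domain goes through the same pool object,
overlaps cannot happen - which is exactly the discipline the API currently
asks of its clients.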
Since (in my opinion) in the vast majority of cases each tenant deploys only
one such L3 domain, having a 1:1 association between a tenant and a subnet
pool might be a decent idea. However, this would break the use cases
previously presented in this thread.

The Neutron API, like all OpenStack APIs, has the sweet burden of having to
be extremely flexible: you need to be able to implement something - and also
its exact opposite. What becomes hard here is to ensure the API stays
flexible and usable at the same time. I've thought a bit about this and I
have some ideas, obviously with no pretence of translating any of them into
code for the Kilo release:

1- each tenant has a default subnet pool, just like each tenant has a
default security group. Subnets are by default always allocated from that
pool, unless the user explicitly decides not to (e.g. by explicitly
passing null to the subnet_pool attribute - this is just an example based
on the merged API, don't take it as a proposal!!!) - or unless another pool
is explicitly specified. A user can then create additional subnet pools for
their tenant; shared pools set up by the cloud operator can also be used.
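The three-way selection in option 1 could be sketched roughly like this (a
sketch only; the sentinel and the attribute semantics are illustrative, per
the caveat above):

```python
# Sentinel distinguishing "attribute not passed" from an explicit null.
UNSET = object()

def resolve_pool(requested_pool, default_pool):
    """Pick the pool a subnet is allocated from under option 1 (sketch).

    - attribute omitted        -> the tenant's default pool
    - explicit null            -> no pool (plain subnet creation)
    - explicit pool reference  -> that pool
    """
    if requested_pool is UNSET:
        return default_pool
    if requested_pool is None:
        return None
    return requested_pool

assert resolve_pool(UNSET, "default-pool") == "default-pool"
assert resolve_pool(None, "default-pool") is None
assert resolve_pool("shared-pool", "default-pool") == "shared-pool"
```

The key design point is that "not specified" and "explicitly null" must be
distinguishable, otherwise clients cannot opt out of the default pool.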

This will enable users to automatically leverage subnet pools. For
deployments where multiple isolated L3 environments exist, however, this
will represent a backward-incompatible change. Indeed, client apps will
have to change, as they will either need to make explicit use of pools
or explicitly disable usage of a subnet pool.
However, how prefixes for the default subnet pool should be selected remains
an open question. Perhaps this could be sorted out at configuration level.
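If it were indeed sorted out at configuration level, one could imagine
something along these lines (entirely hypothetical option names, just to
make the idea concrete - not an existing neutron.conf option):

```
[DEFAULT]
# Hypothetical: prefixes carved up for each tenant's default subnet pool
default_subnetpool_prefixes = 10.0.0.0/8
# Hypothetical: prefix length used when the allocation request omits one
default_subnetpool_default_prefixlen = 24
```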

2- Subnet pools are read-only for all tenants. The operator will then
decide whether per-tenant pools should be enabled, and decide what the
prefixes for the default subnet pool should be. The operator will also set
up one or more "shared" pools.
A tenant will be able to read details of such pools (its own default pool
and the shared ones), and allocate from them, but won't be able to make any
change. Tenants won't be able to create new pools.
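Option 2 could conceivably be expressed through the existing policy.json
mechanism rather than new API machinery (sketch only; the exact policy
target names would depend on the merged API):

```
{
    "create_subnetpool": "rule:admin_only",
    "update_subnetpool": "rule:admin_only",
    "delete_subnetpool": "rule:admin_only",
    "get_subnetpool": "rule:admin_or_owner"
}
```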

This is probably simpler, but it has two drawbacks. The first, and possibly
less important, is that it won't be possible to implement use cases with
multiple domains per tenant in pool-enabled deployments. As a corollary,
the second issue - a show stopper in my opinion - is that the Neutron API
would become even more dependent on the deployment. I therefore do not think
this is an option we really want to pursue.

I realize the ideas listed above are probably terrible. I am sharing them
indeed in the hope that they will induce somebody else to come up with
better ideas ;)


On 23 March 2015 at 16:41, Jay Pipes <jaypipes at gmail.com> wrote:

> On Sun, Mar 22, 2015 at 05:05:17PM -0700, Ian Wells wrote:
> > On 22 March 2015 at 07:48, Jay Pipes <jaypipes at gmail.com> wrote:
> >
> > > On 03/20/2015 05:16 PM, Kevin Benton wrote:
> > >
> > >> To clarify a bit, we obviously divide lots of things by tenant
> > >> (quotas, network listing, etc). The difference is that we have
> > >> nothing right now that has to be unique within a tenant. Are there
> > >> objects that are uniquely scoped to a tenant in Nova/Glance/etc?
> > >>
> > >
> > > Yes. Virtually everything is :)
> >
> >
> > Everything is owned by a tenant.  Very few things are one per tenant,
> > which is where this feels like it's leading.
> Ah, sorry, yes, I misunderstood Kevin's implication there. That is
> correct. Security group names are, AFAIK, the only thing in Nova that is
> unique within a tenant.
> All other resources are identified via UUID, and are not unique within a
> tenant (project).
> > Seems to me that an address pool corresponds to a network area that you
> > can route across (because routing only works over a network with unique
> > addresses and that's what an address pool does for you).  We have those
> > areas and we use NAT to separate them (setting aside the occasional
> > isolated network area with no external connections).  But NAT doesn't
> > separate tenants, it separates externally connected routers: one tenant
> > can have many of those routers, or one router can be connected to
> > networks in both tenants.  We just happen to frequently use the one
> > external router per tenant model, which is why address pools *appear*
> > to be one per tenant.  I think, more accurately, an external router
> > should be given an address pool, and tenants have nothing to do with it.
> Gotcha. Yep, that makes total sense.
> Best,
> -jay