[openstack-dev] [octavia] enabling new topologies

Sergey Guenender GUENEN at il.ibm.com
Sun May 29 14:12:23 UTC 2016


I'm working with the IBM team implementing the Active-Active N+1 topology 
[1].

I've been tasked with helping integrate the code supporting the new 
topology while a) making as few changes to existing code as possible and 
b) reusing as much code as possible.

To make sure the changes to existing code are future-proof, I'd like to 
implement them outside AA N+1, submit them on their own and let the AA N+1 
base itself on top of it.

--TL;DR--

What follows is a description of the challenges I'm facing and how I 
propose to solve them. Please skip to the end of the email for the actual 
questions.

--The details--

I've been studying the code for a few weeks now to find the places where 
minimal changes would fit best.

Currently I see two options:

   1. introduce a new kind of entity (the distributor) and make sure it 
is handled at every level of the controller worker code (endpoint, 
controller worker, *_flows, *_tasks, *_driver)

   2. leave most of the code layers intact by building on the fact that 
the distributor will inherit most of the amphora's controller worker logic


In Active-Active topology, very much like in Active/StandBy:
* top level of distributors will have to run VRRP
* the distributors will have a Neutron port made on the VIP network
* the distributors' neutron ports on VIP network will need the same 
security groups
* the amphorae facing the pool member networks still require
    * ports on the pool member networks
    * "peers" HAProxy configuration for real-time state exchange
    * VIP network connections with the right security groups

The fact that existing topologies lack the notion of a distributor, 
together with an inspection of the 30-or-so existing references to 
amphora clusters, swayed me towards the second option.

The easiest way to make use of existing code seems to be splitting the 
load balancer's amphorae into three overlapping sets:
1. The front-facing - those connected to the VIP network
2. The back-facing - subset of front-facing amphorae, also connected to 
the pool members' networks
3. The VRRP-running - subset of front-facing amphorae, making sure the VIP 
routing remains highly available

At the code-changes level
* the three sets can simply be added as properties of 
common.data_model.LoadBalancer
* the existing amphorae cluster references would switch to using one of 
these properties, for example
    * the VRRP sub-flow would loop over only the VRRP amphorae
    * the network driver, when plugging the VIP, would loop over the 
front-facing amphorae
    * when connecting to the pool members' networks, 
network_tasks.CalculateDelta would only loop over the back-facing amphorae

In terms of backwards compatibility, in the Active/StandBy topology the 
three sets would be equal, each containing both of its amphorae.
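A minimal sketch of how the proposed properties could behave for Active/StandBy, where all three sets coincide. The property names and constructor are assumptions for illustration, not the actual common.data_model fields.

```python
class LoadBalancer(object):
    """Sketch of the proposed set properties on the load balancer model.

    In Active/StandBy the three sets are identical: both amphorae are
    connected to the VIP network, run VRRP, and face the pool members,
    so each property just returns the full amphora list. In AA N+1 the
    back-facing and VRRP properties would filter to the relevant subsets.
    """

    def __init__(self, amphorae):
        self.amphorae = list(amphorae)

    @property
    def front_facing_amphorae(self):
        return list(self.amphorae)

    @property
    def back_facing_amphorae(self):
        # For Active/StandBy this is the full set; AA N+1 would restrict
        # it to amphorae plugged into the pool member networks.
        return list(self.amphorae)

    @property
    def vrrp_amphorae(self):
        return list(self.amphorae)
```

Existing call sites (the VRRP sub-flow, VIP plugging in the network driver, network_tasks.CalculateDelta) would then iterate over the appropriate property instead of the raw amphora list.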

An even more future-proof approach might be to implement the sets-getters 
as selector methods, supporting operation on subsets of each kind of 
amphorae. For instance when growing/shrinking back-facing amphorae 
cluster, only the added/removed ones will need to be processed.
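The selector idea could look roughly like the following: a getter that accepts an optional restriction so a grow/shrink flow processes only the delta. The function name, signature, and amphora record are hypothetical, not existing Octavia code.

```python
from collections import namedtuple

# Illustrative amphora record; not Octavia's actual data model.
Amphora = namedtuple('Amphora', ['id', 'roles'])


def select_back_facing(amphorae, only=None):
    """Return back-facing amphorae, optionally restricted to a subset.

    `only` is an optional set of amphora ids: when growing the cluster,
    passing just the newly added ids lets the flow process only those
    amphorae instead of re-processing the whole back-facing set.
    """
    backs = [a for a in amphorae if 'back_facing' in a.roles]
    if only is not None:
        backs = [a for a in backs if a.id in only]
    return backs
```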

Finally (thank you for your patience, dear reader), my question: assuming 
any of the above makes sense, what would be the best way to move forward 
and to facilitate the design/code review?

Should I create a mini-blueprint describing the changes and implement it?
Should I just open a bug for it and supply a fix?

Thanks,
-Sergey.

[1] https://review.openstack.org/#/c/234639

