[openstack-dev] [octavia] Octavia PTG summary - Octavia team discussions (e-mail 2 of 2)
johnsomor at gmail.com
Tue Mar 7 01:10:21 UTC 2017
Some of the Octavia team attended the first OpenStack Project Team Gathering
(PTG) held in Atlanta the week of February 27th. Below is a summary of the
notes we kept in the Octavia etherpad here:
This e-mail details discussions we had about Octavia-specific topics. A
separate e-mail covers topics the Octavia team discussed with other
project teams.
* This is a priority for Pike. Cores will be actively reviewing these
patches.
* We need to make sure there is good velocity for comments getting
addressed.
Amphora in containers
* We need to work on this in Pike. We cannot implement upgrade tests
under the current gate host limitations without containers being available.
* Currently nova-lxd looks to be the shortest path for octavia.
* It was noted that Docker can now support hot-plugging networks.
* There is community interest in using Docker for the amphora.
* There was interest in booting amphora via kubernetes, but we need to
research the implications and OpenStack support more.
* We discussed Octavia's ongoing use of barbican and our need for a
cascading ACL API. This will greatly simplify the user experience for TLS
offloading load balancers via Octavia. I captured the need in a barbican
bug here: https://bugs.launchpad.net/barbican/+bug/1666963
* The barbican team requested that we do a bug scrub through the barbican
bugs to pull out any LBaaS/Octavia bugs that may be mis-filed or out of
date. Michael (johnsom) will do that.
* Currently using neutron-lbaas with kubernetes, but planning to move to
the Octavia API.
* Interested to use the L7 capability to save public IPs.
* Documentation is a priority for Pike!
* API-REF is work in progress (johnsom):
* API-guide we are deferring until after Pike
* Upgrade guide is targeted for Pike (diltram/johnsom)
* HA guide is targeted for Pike (diltram/johnsom)
* OpenStack Ansible guide is targeted for Pike (xgerman)
* Detailed setup guide is targeted for Pike (KeithMnemonic)
* Admin guide we are deferring until after the documentation team spins it
out into the project repositories (https://review.openstack.org/439122)
* Networking guide - discuss with john-davidge about a link out to Octavia
* Developer guide started, maybe Pike? (sindhu):
* OpenStack Client guide - is this autogenerated?
* User guide - Cookbooks kind of cover this, defer additional work until
after Pike
* Troubleshooting guide - Pike? (KeithMnemonic/johnsom)
* Monitoring guide - defer until after Pike
* Operator guide?
* The dragonflow team wanted to meet with the Octavia team to discuss
integration points. We talked about how dragonflow could be used with
Octavia and how offset style L7 rules might be implemented with dragonflow.
* We gave an overview of how a dragonflow load balancing driver could be
added to Octavia.
* We also asked if dragonflow would have the same issues as DVR does with
VRRP and provided a use case.
Drivers / Providers
* Octavia provider support will be implemented via the named extension
mechanism.
* Providers will be implemented as handlers, in the same way the existing
noop and Octavia drivers are implemented.
* A change in the interaction model with barbican (the octavia service
account is authorized for barbican containers and content) means that
octavia will need to pass the required secure content to the providers.
This may be different from how vendor drivers handled this content in
neutron-lbaas, but the old model should still be available to vendors that
choose not to adopt the new parameters passed to the handler.
* Octavia will implement a new endpoint spawned from the health manager
processes that will receive statistics, status, and agent health information
from vendor agents or drivers. This will establish a stable and scalable
interface for drivers to update this information.
o Proposed using a UDP endpoint similar to the one the amphorae already
report to
o Would use a per-provider key for signing, seeded with the agent ID
o A method for agents to query the list of available agent health
endpoints will need to be created to allow the vendor agents to update their
list of available endpoints.
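The per-provider signing could work along these lines, sketched with stdlib HMAC; the message fields and key-derivation shown are assumptions for illustration only:

```python
import hashlib
import hmac
import json


def sign_health_message(message: dict, provider_key: bytes) -> bytes:
    """Serialize a status/stats message and append an HMAC-SHA256 digest,
    roughly mirroring how the amphorae sign their UDP heartbeats."""
    payload = json.dumps(message, sort_keys=True).encode()
    digest = hmac.new(provider_key, payload, hashlib.sha256).digest()
    return payload + digest


def verify_health_message(packet: bytes, provider_key: bytes) -> dict:
    """Split off the 32-byte digest, verify it, and return the message."""
    payload, digest = packet[:-32], packet[-32:]
    expected = hmac.new(provider_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("bad signature")
    return json.loads(payload)


# Hypothetical per-provider key, seeded with the agent ID as proposed.
key = b"provider-key:agent-a1"
packet = sign_health_message({"agent_id": "a1", "status": "ONLINE"}, key)
```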
* Octavia will add an agent database table similar to the neutron agents
table.
* Vendor agents can update this information via the above mechanism,
providing the operator with visibility into the vendor agent health.
* Octavia may include the octavia processes in the list of agents.
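The shape of such a table row, and the neutron-style liveness check on it, might look like this; the column names and timeout are assumptions, not a settled schema:

```python
from dataclasses import dataclass


@dataclass
class AgentRecord:
    """Illustrative row in a hypothetical octavia 'agents' table,
    modeled on the neutron agents table."""
    agent_id: str
    provider: str
    last_heartbeat: float  # unix timestamp of the last health report

    def is_alive(self, now: float, heartbeat_timeout: float = 75.0) -> bool:
        # Same style of check neutron uses: an agent is alive if it has
        # reported within the heartbeat timeout window.
        return (now - self.last_heartbeat) <= heartbeat_timeout


rec = AgentRecord(agent_id="a1", provider="octavia", last_heartbeat=1000.0)
```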
* We need to understand if any of the vendor drivers are considering using
the driver shim (neutron-lbaas translation layer) or if they are all going
to update to the octavia handler API.
* It appears that the flavors spec is stalled:
* The PTL wants more vendor input on the spec. F5 provided some comments
during the PTG.
* Is there developer resource to work on this in Pike?
* Decided to create octavia-dashboard for the go-forward load balancing
horizon dashboard plugin. This will initially be a copy of the
neutron-lbaas dashboard. I18N work from Ocata will apply directly to this
new dashboard repository. This should make the transition easier for
packagers and the Octavia team. (request for repository is in progress)
* Octavia-dashboard will be under cycle-with-milestones release cycle to
help I18n team and packagers.
* We need developer resources for the dashboard. The horizon team is
willing to help us, but does not have developer resources available.
o F5 may be able to help here, we will follow up in April
* L7 is missing from the current dashboard
* Documentation and installation procedures could be improved. We have
had reports of users struggling to install the dashboard even though core
reviewers consider it functional.
* Operating and provisioning status is missing from some objects in the
dashboard.
* Some configuration attributes may be missing from the horizon panels.
* Low priority - Load balancers could be enhanced on the network topology
view.
Legacy Namespace HAProxy driver
* The current implementation is highly intertwined with neutron and moving
the driver under the Octavia API will require significant work.
* It is currently being used by other teams even though it is not HA, etc.
* The plan is to create a new repository for this driver and implement the
moved driver there.
* Once this driver is available, mark both as deprecated via the mailing
list, configuration file, and log messages.
* If enough interest is raised for this driver it can be maintained as a
separate project.
Neutron pass-through proxy
* Implemented as an alternate neutron plugin (lbaasv2-proxy).
* Up for review: https://review.openstack.org/418530
* We discussed the quotas implementation. When using the neutron-lbaas or
the pass-through proxy both neutron and octavia quotas will be applied.
* We will need to add a release note to discuss the behavior of quotas.
* It was proposed to set the neutron lbaas quotas in neutron to unlimited
* We will need to migrate any custom quotas that already exist in neutron
for LBaaS to octavia as part of the load balancer migration process.
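The quota migration described above could be sketched as follows: carry any per-project neutron-lbaas overrides into octavia, then neutralize the neutron side so only octavia enforces. The resource names and the unlimited sentinel are assumptions for illustration:

```python
UNLIMITED = -1  # assumed sentinel meaning "no limit"


def migrate_lbaas_quotas(neutron_quotas: dict, octavia_defaults: dict):
    """Merge custom neutron-lbaas quota overrides into octavia's quotas
    and return the neutralized (unlimited) values neutron should keep,
    so the two services do not double-enforce."""
    merged = dict(octavia_defaults)
    merged.update(neutron_quotas)  # custom overrides win
    neutralized = {resource: UNLIMITED for resource in neutron_quotas}
    return merged, neutralized


merged, neutron_after = migrate_lbaas_quotas(
    {"loadbalancer": 5}, {"loadbalancer": 10, "pool": 20})
```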
Octavia performance enhancements
* Potential for a 3rd party gate testing with OVS-DPDK (diltram)
* We should make sure we have the virtio-dpdk NIC driver in the
amphora image (diltram)
* We should investigate the HAProxy roadmap for DPDK integration.
* Performance infra gate - investigate if there is OSIC QA work we can
leverage.
* Advanced nova scheduling to collocate amphora with applications - defer
to after Pike
* bare metal running native amphora containers - defer to after Pike
* Ironic compute driver - defer to after Pike
* HAProxy running in multi-process mode
o Need to investigate if all of the limitations impacting octavia have
been resolved.
* Zero downtime configuration updates
* TLS offload
* Sticky table synchronization
* HAProxy CPU pinning
o Not clear this will help enough with HAProxy inside the VM (different
than nova CPU pinning)
* HAProxy kernel splicing
o Could be investigated to see if it is available in the distribution
versions of HAProxy and if it benefits performance.
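For reference, HAProxy's multi-process mode and CPU pinning discussed above are controlled in the global section of haproxy.cfg; the process count and core mapping below are example values only, not recommended settings:

```
global
    nbproc 4        # run four worker processes (multi-process mode)
    cpu-map 1 0     # pin process 1 to CPU core 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3
```

Note that in multi-process mode some state (such as stick-tables) is kept per process, which relates to the limitations mentioned above.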
* SR-IOV
o Investigate adding an SR-IOV group configuration option (diltram)
o This would be moved to flavors when they are available.
o All hosts in SR-IOV group should have access to all of the potential
VIP and member networks so amps are not required to be placed on specific
compute hosts. VIP and member networks can continue to be hot-plugged.
o We will not, at this time, support placing an amp on a specific compute
host based on all of the networks the amp might ever need.
* PCI Pass through - defer until after Pike for use cases (crypto, FPGA)
OpenStack Client (OSC)
* We decided to create a python-octaviaclient repository for our OSC
* We will not be creating a native "octavia" client at this time.
* Octavia OSC commands will look like:
o loadbalancer (create, etc.)
o loadbalancer listener (CRUD)
o loadbalancer pool (CRUD)
o loadbalancer member (CRUD)
o loadbalancer healthmonitor (CRUD)
o loadbalancer quota? (CRUD) -> can be included in the common quotas
o loadbalancer l7policy (CRUD)
o loadbalancer l7rule (CRUD)
o loadbalancer flavor
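The proposed noun tree can be sketched with stdlib argparse; the real client would plug into python-openstackclient, and lumping actions and sub-resources at one level here is a simplification for illustration:

```python
import argparse

# Hypothetical sketch of the "openstack loadbalancer ..." command tree.
parser = argparse.ArgumentParser(prog="openstack")
top = parser.add_subparsers(dest="noun")
lb = top.add_parser("loadbalancer")
sub = lb.add_subparsers(dest="resource")
# "create" is a direct action on the load balancer; the rest are
# sub-resources that would each carry their own CRUD actions.
for resource in ("create", "listener", "pool", "member",
                 "healthmonitor", "quota", "l7policy", "l7rule", "flavor"):
    sub.add_parser(resource)

args = parser.parse_args(["loadbalancer", "pool"])
```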
* We talked about how we could expose more health information to an
operator. This would be similar to the neutron "agent list" command. A
couple of potential commands came out of that discussion:
o loadbalancer cluster-status (show health of components)
* There was also discussion that we may want a command that exposes the
currently installed/available providers (drivers)
o loadbalancer provider list
* We are also prioritizing an admin function to failover amps and upgrade
load balancers. A CLI for this function was also proposed:
o loadbalancer failover
* Work has started on the OpenStack SDK:
Testing
* We did a quick overview of the types of tests in Octavia and what they
test: pep8/bandit, unit tests, functional, ddt, and tempest tests (see the
etherpad for the details).
* We reviewed the test gap analysis F5 worked on in Ocata.
* Proposed Queens community goal is to have tempest plugins in a separate
repository. octavia-tempest-plugin repository has already been requested.
* We highlighted that all new features need to be testable via the
tempest tests.
* Our current tempest tests are based on old tempest code/methods. We
need to move this code over to the new methods and use the stable tempest
interfaces.
* We should refactor our base test primitives to be more modular, i.e.
base methods for creating objects that can be combined into more complex
methods. For example, we should have a base method for "create a pool"
instead of just one method that creates a load balancer with a pool.
* We can use the new tempest repository for the refactored and stable
interface based tempest tests.
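The modular-primitive refactor described above amounts to small composable builders rather than one monolithic helper; the object shapes here are illustrative stand-ins for the real API calls:

```python
# Base primitives: one small builder per object type.

def create_load_balancer(name):
    """Stand-in for the tempest primitive that creates a load balancer."""
    return {"name": name, "listeners": []}


def create_pool(name):
    """Stand-in for a base 'create a pool' primitive."""
    return {"name": name, "members": []}


def create_load_balancer_with_pool(lb_name, pool_name):
    """Composed from the base primitives instead of duplicating logic,
    so tests can also call create_pool() on its own."""
    lb = create_load_balancer(lb_name)
    lb["pools"] = [create_pool(pool_name)]
    return lb


lb = create_load_balancer_with_pool("lb1", "pool1")
```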
* F5 would like to help with improving the test situation in Octavia.