[openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s
chris at openstack.org
Thu Mar 15 22:44:41 UTC 2018
As I've been working more in the Kubernetes community, I've been evaluating the
different points of integration between OpenStack services and the Kubernetes
application platform. One of the weaker points of integration has been in using
the OpenStack LBaaS APIs to create load balancers for Kubernetes applications.
Using this as a framing device, I'd like to begin a discussion about the
general development, deployment, and usage of the LBaaS API and how different
parts of our community can rally around and strengthen the API in the coming
development cycles.
I'd like to note right from the beginning that this isn't a disparagement of
the fantastic work that's being done by the Octavia team, but rather an
evaluation of the current state of the API and a call to our rich community of
developers, cloud deployers, users, and app developers to help move the API to
a place where it is expected to be present and shows the same level of
consistency across deployments that we see with the Nova, Cinder, and Neutron
core APIs. The seed of this discussion comes from my efforts to enable
third-party Kubernetes cloud provider testing, as well as discussions with the
Kubernetes-SIG-OpenStack community in the #sig-openstack Slack channel in the
Kubernetes organization. As a full disclaimer, my recounting of this discussion
represents my own impressions, and although I mention active participants by
name I do not represent their views. Any mistakes I make are my own.
To set the stage, Kubernetes uses a third-party load-balancer service (either
from a Kubernetes hosted application or from a cloud-provider API) to
provide high-availability for the applications it manages. The OpenStack
provider offers a generic interface to the LBaaSv2 API, with an option to use
Octavia instead of the Neutron implementation. The provider is built on the
GopherCloud SDK. In my own efforts to enable testing of this provider, I'm
using Terraform to orchestrate the K8s deployment and installation. Since I
needed to use a public cloud provider to turn this automated testing over to a
third party, I chose Vexxhost, as they have been generous donors to this effort
and to the CloudLab efforts in general, and have provided tremendous support
in debugging problems I've run into. The first major issue I ran into was a
race condition in using the Neutron LBaaSv2 API. It turns out that with
Terraform, it's possible to tear down resources in a way that causes Neutron to
leak administrator-privileged resources that cannot be deleted by
non-privileged users. In discussions with the Neutron and Octavia teams, it was
strongly recommended that I move away from the Neutron LBaaSv2 API and instead
adopt Octavia. Vexxhost graciously installed Octavia at my request, and I was
able to move past this issue.
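To make the provider mechanics above concrete, here is a minimal sketch of how
a Kubernetes application requests a cloud load balancer (the deployment name is
hypothetical, and this assumes a cluster whose OpenStack cloud provider is
configured against an LBaaS endpoint):

```shell
# Hypothetical deployment name; assumes the cluster's cloud provider is
# configured to talk to an OpenStack LBaaSv2 or Octavia endpoint.
kubectl expose deployment my-app --port=80 --type=LoadBalancer

# The OpenStack provider then calls the LBaaS API to create the load
# balancer and reports its VIP back as the Service's EXTERNAL-IP.
kubectl get service my-app
```

The key point is that the application author never touches the OpenStack API
directly; the provider translates the Service object into LBaaS calls, which is
why inconsistency in the deployed LBaaS API surfaces as a poor user experience.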
This raises a fundamental issue facing our community with regards to the load
balancer APIs: there is little consistency as to which API is deployed, and we
have installations that still run the LBaaSv1 API. Indeed, the OpenStack
User Survey reported in November of 2017 that only 7% of production
installations were running Octavia. Meanwhile, Neutron LBaaSv1 was deprecated
in Liberty, and Neutron LBaaSv2 was recently deprecated in the Queens release.
The lack of a migration path from v1 to v2 helped to slow adoption, and the
additional requirements for installing Octavia have likewise been a barrier to
adoption of the supported LBaaSv2 implementation.
This highlights the first call to action for our public and private cloud
community: encouraging rapid migration from older, unsupported APIs to the
supported Octavia implementation.
Because of this wide range of deployed APIs, I changed my own deployment code
to launch a user-space VM and install a non-TLS-terminating Nginx load balancer
for my Kubernetes control plane. I'm not the only person who has adopted an
approach like this. In the #sig-openstack channel, Saverio Proto (zioproto)
discussed how he uses the K8s Nginx ingress load balancer in preference to the
OpenStack provider load balancer. My takeaway from his description is that it's
preferable to use the K8s-based ingress load balancer because:
* The common LBaaSv2 API does not support TLS termination.
* You don't need to provision an additional virtual machine.
* You aren't dependent on an appropriate and supported API being available on
the underlying cloud.
German Eichberger (xgerman) and Adam Harwell (rm_you) from the Octavia team
were present for the discussion, and presented a strong case for using the
Octavia APIs. My takeaways were:
* Octavia does support TLS termination, and it's the dependence on the
Neutron API that removes the ability to take advantage of it.
* It provides a lot more than just a "VM with haproxy", and has stability
features beyond what a hand-rolled load balancer offers.
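To illustrate the TLS point: this is a sketch of what TLS termination looks
like against the Octavia API using the openstack loadbalancer CLI. All names
and the angle-bracketed IDs are hypothetical placeholders, and it assumes
python-octaviaclient is installed and the server certificate is stored in
Barbican:

```shell
# Hypothetical names/IDs; requires python-octaviaclient and a certificate
# container stored in Barbican.
openstack loadbalancer create --name k8s-lb --vip-subnet-id <subnet-id>

# TERMINATED_HTTPS tells Octavia to terminate TLS at the listener,
# which is the capability the Neutron LBaaSv2 path doesn't expose.
openstack loadbalancer listener create --name https-listener \
  --protocol TERMINATED_HTTPS \
  --protocol-port 443 \
  --default-tls-container-ref <barbican-container-ref> \
  k8s-lb
```

This is exactly the capability the Kubernetes provider can't reach while it
targets the generic Neutron LBaaSv2 surface.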
This highlights a second call to action for the SDK and provider developers:
recognizing the end of life of the Neutron LBaaSv2 API and adding
support for more advanced Octavia features.
As part of the discussion, we also talked about facilitating adoption by
improving the installation experience. My takeaway was that this is an active
development goal for the Octavia Rocky release, leading to my third call to
action: improving the installation and upgrade experience for the Octavia LBaaS
APIs to help with adoption by both deployers and developers.
To quote myself from that discussion: "As with any open source project, I want
users to have a choice in what they want to use. Having provider code that
gives a reliable user experience is critical. The Nova, Neutron, and Cinder
APIs are very stable and expected to be present in every cloud. It's why the
compute parts of the provider work really well. To me the shifting landscape of
the LBaaSv2 APIs makes it difficult for users to rely on having a consistent
and reliable experience. That doesn't mean we give up on trying to make that
experience positive, though. It’s an issue where the Octavia devs, public
clouds, deployment tools, and provider authors should be collaborating to make
the experience consistent and reliable."
For the short term, despite us having an upstream implementation in the
OpenStack cloud provider, my plan is to steer users more in the direction of
K8s-based solutions, primarily to help them have a consistent experience.
However, I feel that a longer-term goal of the SIG-K8s should be in encouraging
the adoption of Octavia and improving the provider implementation.