[openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

Zane Bitter zbitter at redhat.com
Fri Jun 29 18:41:12 UTC 2018


Now that the project is set up, let's tag future messages on this topic 
with [service-broker]. Here's one to start with that will help you find 
everything:

http://lists.openstack.org/pipermail/openstack-dev/2018-June/131923.html

cheers,
Zane.

On 05/06/18 12:19, Zane Bitter wrote:
> I've been doing some investigation into the Service Catalog in 
> Kubernetes and how we can get OpenStack resources to show up in the 
> catalog for use by applications running in Kubernetes. (The Big 3 public 
> clouds already support this.) The short answer is via an implementation 
> of something called the Open Service Broker API, but there are shortcuts 
> available to make it easier to do.
> 
> I'm convinced that this is readily achievable and something we ought to 
> do as a community.
> 
> I've put together a (long-winded) FAQ below to answer all of your 
> questions about it.
> 
> Would you be interested in working on a new project to implement this 
> integration? Reply to this thread and let's collect a list of volunteers 
> to form the initial core review team.
> 
> cheers,
> Zane.
> 
> 
> What is the Open Service Broker API?
> ------------------------------------
> 
> The Open Service Broker API[1] is a standard way to expose external 
> resources to applications running in a PaaS. It was originally developed 
> in the context of CloudFoundry, but the same standard was adopted by 
> Kubernetes (and hence OpenShift) in the form of the Service Catalog 
> extension[2]. (The Service Catalog in Kubernetes is the component that 
> calls out to a service broker.) So a single implementation can cover the 
> most popular open-source PaaS offerings.
> 
> In many cases the services are simply pre-packaged applications that 
> also run inside the PaaS. But they don't have to be - 
> services can be anything. Provisioning via the service broker ensures 
> that the services requested are tied in to the PaaS's orchestration of 
> the application's lifecycle.
> 
> (This is certainly not the be-all and end-all of integration between 
> OpenStack and containers - we also need ways to tie PaaS-based 
> applications into OpenStack's orchestration of a larger group of 
> resources. Some applications may even use both. But it's an important 
> part of the story.)
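> 
> Just to make the protocol concrete: when a broker is registered, the 
> Service Catalog starts by fetching the list of services the broker 
> offers. An untested sketch of that call (the broker URL and credentials 
> are placeholders):
> 
>     import requests
> 
>     resp = requests.get(
>         'http://broker.example.com/v2/catalog',
>         headers={'X-Broker-API-Version': '2.13'},
>         auth=('user', 'password'))
>     for service in resp.json()['services']:
>         print(service['name'], '-', service['description'])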
> 
> What sorts of services would OpenStack expose?
> ----------------------------------------------
> 
> Some example use cases might be:
> 
> * The application needs a reliable message queue. Rather than spinning 
> up multiple storage-backed containers with anti-affinity policies and 
> dealing with the overhead of managing e.g. RabbitMQ, the application 
> requests a Zaqar queue from an OpenStack cloud. The overhead of running 
> the queueing service is amortised across all of the applications in the 
> cloud. The queue gets cleaned up correctly when the application is 
> removed, since it is tied into the application definition.
> 
> * The application needs a database. Rather than spinning one up in a 
> storage-backed container and dealing with the overhead of managing it, 
> the application requests a Trove DB from an OpenStack cloud.
> 
> * The application includes a service that needs to run on bare metal for 
> performance reasons (e.g. could also be a database). The application 
> requests a bare-metal server from Nova w/ Ironic for the purpose. (The 
> same applies to requesting a VM, but there are alternatives like 
> KubeVirt - which also operates through the Service Catalog - available 
> for getting a VM in Kubernetes. There are no non-proprietary 
> alternatives for getting a bare-metal server.)
> 
> AWS[3], Azure[4], and GCP[5] all have service brokers available that 
> support these and many more services that they provide. I don't know of 
> any reason in principle not to expose every type of resource that 
> OpenStack provides via a service broker.
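> 
> As an illustration of the first use case, here's roughly how an 
> application's deployment tooling might request a queue through the 
> Service Catalog using the kubernetes Python client. (Untested sketch; 
> the 'zaqar-queue' class and plan names are hypothetical - they'd come 
> from whatever our broker advertises in its catalog.)
> 
>     from kubernetes import client, config
> 
>     config.load_kube_config()
>     api = client.CustomObjectsApi()
>     api.create_namespaced_custom_object(
>         group='servicecatalog.k8s.io',
>         version='v1beta1',
>         namespace='default',
>         plural='serviceinstances',
>         body={
>             'apiVersion': 'servicecatalog.k8s.io/v1beta1',
>             'kind': 'ServiceInstance',
>             'metadata': {'name': 'my-app-queue'},
>             'spec': {
>                 'clusterServiceClassExternalName': 'zaqar-queue',
>                 'clusterServicePlanExternalName': 'default',
>             },
>         })
> 
> The Service Catalog then calls the broker to do the actual 
> provisioning, and the credentials returned at bind time can be injected 
> into the application's pods via a ServiceBinding and the Secret it 
> creates.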
> 
> How is this different from cloud-provider-openstack?
> ----------------------------------------------------
> 
> The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself 
> to access features of the cloud to provide its service. For example, if 
> k8s needs persistent storage for a container then it can request that 
> from Cinder through cloud-provider-openstack[7]. It can also request a 
> load balancer from Octavia instead of having to start a container 
> running HAProxy to load balance between multiple instances of an 
> application container (thus enabling use of hardware load balancers via 
> the cloud's abstraction for them).
> 
> In contrast, the Service Catalog interface allows the *application* 
> running on Kubernetes to access features of the cloud.
> 
> What does a service broker look like?
> -------------------------------------
> 
> A service broker provides an HTTP API with 5 actions:
> 
> * List the services provided by the broker
> * Create an instance of a resource
> * Bind the resource into an instance of the application
> * Unbind the resource from an instance of the application
> * Delete the resource
> 
> The binding step is used for things like providing a set of DB 
> credentials to a container. You can rotate credentials when replacing a 
> container by revoking the existing credentials on unbind and creating a 
> new set on bind, without replacing the entire resource.
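> 
> Concretely, a broker is just a small web service implementing those 
> five endpoints. A minimal, untested sketch in Python/Flask (hypothetical 
> service names, in-memory state, and no real OpenStack calls) might look 
> something like:
> 
>     from flask import Flask, jsonify, request
> 
>     app = Flask(__name__)
>     instances = {}   # instance_id -> provisioning parameters
>     bindings = {}    # (instance_id, binding_id) -> credentials
> 
>     @app.route('/v2/catalog', methods=['GET'])
>     def catalog():
>         # List the services provided by the broker.
>         return jsonify({'services': [{
>             'id': 'openstack-zaqar-queue',    # hypothetical service id
>             'name': 'zaqar-queue',
>             'description': 'A Zaqar message queue',
>             'bindable': True,
>             'plans': [{'id': 'default-plan', 'name': 'default',
>                        'description': 'Default plan'}],
>         }]})
> 
>     @app.route('/v2/service_instances/<instance_id>', methods=['PUT'])
>     def provision(instance_id):
>         # Create an instance of the resource (call OpenStack here).
>         instances[instance_id] = request.get_json(force=True)
>         return jsonify({}), 201
> 
>     @app.route('/v2/service_instances/<instance_id>', methods=['DELETE'])
>     def deprovision(instance_id):
>         # Delete the resource.
>         instances.pop(instance_id, None)
>         return jsonify({}), 200
> 
>     @app.route('/v2/service_instances/<instance_id>'
>                '/service_bindings/<binding_id>', methods=['PUT'])
>     def bind(instance_id, binding_id):
>         # Bind: hand credentials back for injection into the app.
>         creds = {'queue_name': instance_id}   # hypothetical credentials
>         bindings[(instance_id, binding_id)] = creds
>         return jsonify({'credentials': creds}), 201
> 
>     @app.route('/v2/service_instances/<instance_id>'
>                '/service_bindings/<binding_id>', methods=['DELETE'])
>     def unbind(instance_id, binding_id):
>         # Unbind: revoke the credentials issued at bind time.
>         bindings.pop((instance_id, binding_id), None)
>         return jsonify({}), 200
> 
>     if __name__ == '__main__':
>         app.run(port=8080)
> 
> A real broker would of course talk to OpenStack in provision/bind 
> rather than keeping state in memory - which is exactly the part we'd 
> like to avoid hand-writing for every service.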
> 
> Is there an easier way?
> -----------------------
> 
> Yes! Folks from OpenShift came up with a project called the Automation 
> Broker[8]. To add support for a service to Automation Broker you just 
> create a container with an Ansible playbook to handle each of the 
> actions (create/bind/unbind/delete). This eliminates the need to write 
> another implementation of the service broker API, and allows us to 
> simply write Ansible playbooks instead.[9]
> 
> (Aside: Heat uses a comparable method to allow users to manage an 
> external resource using Mistral workflows: the 
> OS::Mistral::ExternalResource resource type.)
> 
> Support for accessing AWS resources through a service broker is also 
> implemented using these Ansible Playbook Bundles.[3]
> 
> Does this mean maintaining another client interface?
> ----------------------------------------------------
> 
> Maybe not. We already have per-project Python libraries, (deprecated) 
> per-project CLIs, openstackclient CLIs, openstacksdk, shade, Heat 
> resource plugins, and Horizon dashboards. (Mistral actions are generated 
> automatically from the clients.) Some consolidation is already planned, 
> but it would be great not to require projects to maintain yet another 
> interface.
> 
> One option is to implement a tool that generates a set of playbooks for 
> each of the resources already exposed (via shade) in the OpenStack 
> Ansible modules. Then, in theory, we'd only need to implement the common 
> parts once, and every service with support in shade would get this 
> for free. Ideally the same broker could be used against any OpenStack 
> cloud (so e.g. k8s might be running in your private cloud, but you may 
> want its service catalog to allow you to connect to resources in one or 
> more public clouds) - using shade is an advantage there because it is 
> designed to abstract the differences between clouds.
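> 
> A very rough, untested sketch of that idea (the generator itself, the 
> resource-name convention, and the parameters are all assumptions for 
> illustration):
> 
>     import openstack
> 
>     def provision(cloud_name, resource, params):
>         # 'cloud_name' is an entry in clouds.yaml/secure.yaml[10].
>         conn = openstack.connect(cloud=cloud_name)
>         # Dispatch to the shade-derived cloud layer, e.g. create_server.
>         create = getattr(conn, 'create_%s' % resource)
>         return create(**params)
> 
>     def deprovision(cloud_name, resource, name_or_id):
>         conn = openstack.connect(cloud=cloud_name)
>         delete = getattr(conn, 'delete_%s' % resource)
>         return delete(name_or_id)
> 
>     # e.g. provision('mycloud', 'server',
>     #                {'name': 'db01', 'image': 'centos-7',
>     #                 'flavor': 'baremetal', 'network': 'app-net'})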
> 
> Another option might be to write or generate Heat templates for each 
> resource type we want to expose. Then we'd only need to implement a 
> common way of creating a Heat stack, and just have a different template 
> for each resource type. This is the approach taken by the AWS playbook 
> bundles (except with CloudFormation, obviously). An advantage is that 
> this allows Heat to do any checking and type conversion required on the 
> input parameters. Heat templates can also be made fairly 
> cloud-independent, mainly because they make it easier to be explicit 
> about things like ports and subnets than on the command line, where it's 
> more tempting to allow things to happen in a magical but cloud-specific 
> way.
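> 
> In that model the common code only needs to know how to create and 
> delete a stack, with one template per exposed service. An untested 
> sketch (template paths and parameters are hypothetical):
> 
>     import openstack
> 
>     def provision(cloud_name, instance_id, service, params):
>         conn = openstack.connect(cloud=cloud_name)
>         return conn.create_stack(
>             'broker-%s' % instance_id,
>             # One template per service, e.g. templates/trove-db.yaml.
>             template_file='templates/%s.yaml' % service,
>             wait=True,
>             **params)   # Heat validates/converts the parameters
> 
>     def deprovision(cloud_name, instance_id):
>         conn = openstack.connect(cloud=cloud_name)
>         conn.delete_stack('broker-%s' % instance_id, wait=True)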
> 
> I'd prefer to go with the pure-Ansible, autogenerated approach so that 
> we can have support for everything. But looking at the 
> GCP[5]/Azure[4]/AWS[3] brokers, they have 10, 11 and 17 services 
> respectively, so arguably we could get a comparable number of features 
> exposed without investing crazy amounts of time even if we had to write 
> templates explicitly.
> 
> How would authentication work?
> ------------------------------
> 
> There are two main deployment topologies we need to consider: Kubernetes 
> deployed by an OpenStack tenant (Magnum-style, though not necessarily 
> using Magnum) and accessing resources in that tenant's project in the 
> local cloud; or Kubernetes accessing resources in some remote OpenStack 
> cloud.
> 
> We also need to take into account that in the second case, the 
> Kubernetes cluster may 'belong' to a single cloud tenant (as in the 
> first case) or may be shared by applications that each need to 
> authenticate to different OpenStack tenants. (Kubernetes has 
> traditionally assumed the former, but I expect it to move in the 
> direction of allowing the latter, and it's already fairly common for 
> OpenShift deployments.)
> 
> The way e.g. the AWS broker[3] works is that you can either use the 
> credentials provisioned to the VM that k8s is installed on (a 'Role' in 
> AWS parlance - note that this is completely different to a Keystone 
> Role), or supply credentials to authenticate to AWS remotely.
> 
> OpenStack doesn't yet support per-instance credentials, although we're 
> working on it. (One thing to keep in mind is that ideally we'll want a 
> way to provide different permissions to the service broker and 
> cloud-provider-openstack.) An option in the meantime might be to provide 
> a way to set up credentials as part of the k8s installation. We'd also 
> need to have a way to specify credentials manually. Unlike for 
> proprietary clouds, the credentials also need to include the Keystone 
> auth_url. We should try to reuse openstacksdk's clouds.yaml/secure.yaml 
> format[10] if possible.
> 
> The OpenShift Ansible Broker works by starting up an Ansible container 
> on k8s to run a playbook from the bundle, so presumably credentials can 
> be passed as regular k8s secrets.
> 
> In all cases we'll want to encourage users to authenticate using 
> Keystone Application Credentials[11].
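> 
> For example, the broker (or a playbook it launches) could read an 
> application credential from a mounted k8s Secret and connect with 
> something like this untested sketch (the environment variable names 
> are just illustrative):
> 
>     import os
>     import openstack
> 
>     conn = openstack.connect(
>         auth_type='v3applicationcredential',
>         auth={
>             'auth_url': os.environ['OS_AUTH_URL'],
>             'application_credential_id':
>                 os.environ['OS_APPLICATION_CREDENTIAL_ID'],
>             'application_credential_secret':
>                 os.environ['OS_APPLICATION_CREDENTIAL_SECRET'],
>         })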
> 
> How would network integration work?
> -----------------------------------
> 
> Kuryr[12] allows us to connect application containers in Kubernetes to 
> Neutron networks in OpenStack. It would be desirable if, when the user 
> requests a VM or bare-metal server through the service broker, it were 
> possible to choose between attaching it to the same network as the 
> Kubernetes pods and attaching it to a different network.
> 
> 
> [1] https://www.openservicebrokerapi.org/
> [2] https://kubernetes.io/docs/concepts/service-catalog/
> [3] https://github.com/awslabs/aws-servicebroker#aws-service-broker
> [4] https://github.com/Azure/open-service-broker-azure#open-service-broker-for-azure
> [5] https://github.com/GoogleCloudPlatform/gcp-service-broker#cloud-foundry-service-broker-for-google-cloud-platform
> [6] https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md#remove-cloud-provider-code-from-kubernetes-core
> [7] https://github.com/kubernetes/cloud-provider-openstack#openstack-cloud-controller-manager
> [8] http://automationbroker.io/
> [9] https://docs.openshift.org/latest/apb_devel/index.html
> [10] https://docs.openstack.org/openstacksdk/latest/user/config/configuration.html#config-files
> [11] https://docs.openstack.org/keystone/latest/user/application_credentials.html
> [12] https://docs.openstack.org/kuryr/latest/devref/goals_and_use_cases.html



