[openstack-dev] [kubernetes][kolla]

Ryan Hallisey rhallise at redhat.com
Thu May 26 17:50:36 UTC 2016


I think the community will want to split apart the CLI to run tasks.  This was an idea being thrown around at the same time
as the etcd addition.  This would give the operator the ability, as you said, to skip any task that isn't required.

Using etcd is a way for the operator to guarantee that a bootstrapping task can run without another service interrupting it.
The goal is to make use of the Kubernetes-style workflow as much as possible.  I agree, the community should avoid
automagic setup.  It can lead to a lot of dangerous corner cases.  I think Kolla learned this lesson way back during the
compose era.
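
To make the locking idea concrete, here is a minimal sketch of how a bootstrap task could take and release
a lock with the etcd v2 CLI.  The /kolla/bootstrap key layout and the bootstrap command are hypothetical,
not something from the spec:

    # 'etcdctl mk' only succeeds if the key does not already exist, so it
    # acts as an atomic "take the lock" operation
    if etcdctl mk /kolla/bootstrap/mariadb "in-progress"; then
        run_mariadb_bootstrap                          # hypothetical bootstrap command
        etcdctl set /kolla/bootstrap/mariadb "done"    # publish completion for other services
    else
        echo "another task already owns the mariadb bootstrap lock" >&2
    fi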

The tasks are defined as:
  - bootstrap <all>/<service>
  - deploy <all>/<service>

Any further workflow tweaking could be handled by contacting etcd.  The community could also break down the tasks further
if there is a use case for it.
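
As a rough illustration of what that could look like for an operator (the command name and etcd keys here
are hypothetical, since the split-out CLI doesn't exist yet):

    # hypothetical invocations of the split-out task CLI
    kolla-kubernetes bootstrap mariadb     # run only the mariadb bootstrap task
    kolla-kubernetes deploy mariadb        # then deploy the service
    kolla-kubernetes deploy all            # or deploy everything at once

    # further workflow tweaking by talking to etcd directly (hypothetical keys)
    etcdctl ls /kolla/bootstrap
    etcdctl set /kolla/bootstrap/mariadb "done"    # mark a step as already complete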

Thanks,
Ryan

----- Original Message -----
From: "Kevin M Fox" <Kevin.Fox at pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Sent: Thursday, May 26, 2016 11:41:12 AM
Subject: Re: [openstack-dev] [kubernetes]

Two issues I can see with that approach.

1. It needs to be incredibly well documented, and tools need to be provided to update states in etcd manually when an op needs to recover from things partially working (a sketch of what that could look like follows this list).
2. Consider the case where an op has an existing cloud.  He/she installs k8s on their existing control plane and then wants to "upgrade" the system from non-container to containers, one OpenStack service at a time.  With the jobs method, the op just skips the bootstrap jobs.  With magic baked into the containers and etcd, the same kinds of things as in issue #1 need fixing in etcd so it doesn't try to reinit things.  This makes it harder to get existing clouds migrated to kolla-k8s.
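
For the sake of argument, manually recovering or pre-seeding that state in etcd might look something like
this (key names are hypothetical and match nothing in the spec):

    # recovering from a partially-completed bootstrap: clear the stale marker
    # so the task can be rerun
    etcdctl rm /kolla/bootstrap/keystone

    # migrating an existing cloud: pre-mark services as already bootstrapped
    # so the containers don't try to reinit them
    etcdctl set /kolla/bootstrap/keystone "done"
    etcdctl set /kolla/bootstrap/mariadb "done"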

I know the idea is to try to simplify deployment by making the containers do all the initing automagically.  But I'm afraid that just sweeps issues under the rug, out of the light, where they will still come up, only more unexpectedly.  The ops still need to understand the automagic that is happening.  As an Op, I'd rather it be explicit, out front, where I know it's happening and I can easily tweak the workflow when necessary to get out of a bind.

Thanks,
Kevin
________________________________________
From: Ryan Hallisey [rhallise at redhat.com]
Sent: Thursday, May 26, 2016 5:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kubernetes]

Thanks for the feedback Kevin.

The community has been investigating other options this week.  The option currently being looked at involves
using etcd to provide a locking mechanism so that services in the cluster are aware that bootstrapping is underway.

The concept involves extending kolla's dockerfiles and having them poll etcd to determine whether a bootstrap is in progress or complete [1].
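
As a rough sketch (not the actual change in [1]), the extended entrypoint could poll etcd along these lines,
assuming a hypothetical /kolla/bootstrap key layout and a SERVICE_NAME environment variable:

    # wait until the bootstrap marker for this service reports "done" before
    # starting the real service process
    until [ "$(etcdctl get /kolla/bootstrap/${SERVICE_NAME} 2>/dev/null)" = "done" ]; do
        echo "waiting for ${SERVICE_NAME} bootstrap to complete..."
        sleep 5
    done
    exec "$@"    # hand off to the service's normal command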

I'll follow up by adding this to the spec.

Thanks,
Ryan

[1] - https://review.openstack.org/#/c/320744/

----- Original Message -----
From: "Kevin M Fox" <Kevin.Fox at pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Sent: Monday, May 23, 2016 11:37:33 AM
Subject: Re: [openstack-dev] [kubernetes]

+1 for using k8s to do work where possible.

-1 for trying to shoehorn a feature in so that k8s can deal with stuff it's not ready to handle.  We need to ensure Operators have everything they need in order to successfully operate their cloud.

The current upgrade stuff in k8s is focused on replacing one, usually stateless, thing with another.  It never had database schema upgrades in mind.  It is great to use for minor version bumps, but it is insufficient for major OpenStack upgrades.  If you follow the OpenStack release notes, the upgrade steps tend to form a fairly linear workflow, and k8s isn't designed for that.  Hence the need for a tool outside of k8s to drive the creation/upgrading of Deployments and Jobs in the proper order.
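
To illustrate the kind of ordering such an external tool would have to enforce (a sketch only, with made-up
manifest, Job, and image names):

    # 1. run the schema-upgrade Job and wait for it to succeed
    kubectl create -f nova-db-sync-job.yaml
    until [ "$(kubectl get job nova-db-sync -o jsonpath='{.status.succeeded}')" = "1" ]; do
        sleep 5
    done

    # 2. only then roll the API Deployment forward to the new release
    kubectl set image deployment/nova-api nova-api=kolla/centos-binary-nova-api:newton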

Init containers also do not look like a good fit. As far as I can gather from the spec, they are intended to init something on a node when a pod is spawned. This is a very different thing from upgrading a shared database's schema. I don't believe they should be used for that.

I've upgraded many OpenStack clouds over the years.  One of the things that has bitten me from time to time is a failed schema update, where I've had to tweak code and then rerun the schema upgrade.  This will continue to happen and needs to be covered.  The Jobs workflow discussed in the spec allows an operator to do just that.  Hiding it in an init container makes that much harder for Operators.
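
With the Jobs approach, rerunning just the failed piece is a single command pair (again with hypothetical names):

    # after fixing the code/image, delete the failed schema-upgrade Job and rerun it
    kubectl delete job nova-db-sync
    kubectl create -f nova-db-sync-job.yaml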

As Ops, we need the ability to tweak the workflow as needed and to run/rerun only the pieces that we need.

Thanks,
Kevin
________________________________________
From: Ryan Hallisey [rhallise at redhat.com]
Sent: Sunday, May 22, 2016 12:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev]  [kolla][kolla-kubernetes][kubernetes]

Hi all,

At the Kolla meeting last week, I brought up some of the challenges around the bootstrapping
process in Kubernetes.  The main question was how the bootstrapping process will actually work.

Currently, in the kolla-kubernetes spec [1], the process for bootstrapping involves
outside orchestration running Kubernetes 'Jobs' that will handle the database initialization,
creating users, etc.  One of the flaws in this approach is that kolla-kubernetes can't use
native Kubernetes upgrade tooling.  Kubernetes does upgrades as a single action that scales
down running containers and replaces them with the upgraded containers.  So instead of having
Kubernetes manage the upgrade, it would be guided by an external engine.  That's not necessarily
a bad thing, but it does loosen the control Kubernetes would have over stack management.
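
For readers who haven't looked at the spec, here is a minimal sketch of the kind of bootstrap Job such
outside orchestration might create.  The image, command, and names below are placeholders, not taken
from the spec:

    kubectl create -f - <<EOF
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: mariadb-bootstrap
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: bootstrap
            image: kolla/centos-binary-mariadb:2.0.0   # placeholder image/tag
            command: ["/bin/sh", "-c", "kolla_start"]  # placeholder bootstrap command
    EOF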

Kubernetes does have some incoming features that are a step in the right direction toward letting
kolla-kubernetes make complete use of Kubernetes tooling, like init containers [2].
There is also a proposal to introduce wait-for conditions in kubectl [3]:

       kubectl get pod my-pod --wait --wait-for="pod-running"

Upgrades will be in the distant future for kolla-kubernetes, but I want to make sure the
community maintains an open mind about bootstrap/upgrades since there are potentially many
options that could come down the road.

I encourage everyone to add your input to the spec!

Thanks,
Ryan

[1] SPEC - https://review.openstack.org/#/c/304182/
[2] Init containers - https://github.com/kubernetes/kubernetes/pull/23567
[3] wait.for kubectl - https://github.com/kubernetes/kubernetes/issues/1899

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
