[openstack-dev] [tc] [all] TC Report 18-26

Fox, Kevin M Kevin.Fox at pnnl.gov
Wed Jul 4 00:18:19 UTC 2018


Replying inline in Outlook. Sorry. :( Prefixing my replies with KF>

-----Original Message-----
From: Jay Pipes [mailto:jaypipes at gmail.com] 
Sent: Tuesday, July 03, 2018 1:04 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

I'll answer inline, so that it's easier to understand what part of your 
message I'm responding to.

On 07/03/2018 02:37 PM, Fox, Kevin M wrote:
> Yes/no on the vendor distro thing. They do provide a lot of options, but they also provide a fully k8s-tested, k8s-provided route too: kubeadm. I can take my Linux distro of choice, curl down kubeadm, and get a working Kubernetes in literally a couple of minutes.

How is this different from devstack?

With both approaches:

* Download and run a single script
* Any sort of networking outside of super basic setup requires manual 
intervention
* Not recommended for "production"
* Require workarounds when running as not-root

Is it that you prefer the single Go binary approach of kubeadm which 
hides much of the details that devstack was designed to output (to help 
teach people what's going on under the hood)?

KF> So... go to https://docs.openstack.org/devstack/latest/ and one of the first things you see is a bright red warning box: don't run it on your laptop. It also targets git master rather than production releases, so it is aimed more at developing OpenStack itself than at developers building software to run on OpenStack. My common use case was developing stuff to run in the cloud, not developing OpenStack itself. Minikube makes this case first class. Also, devstack requires a Linux box to deploy it; Minikube works on macOS and Windows as well. That is not an easy thing to pull off, but it does it pretty well. I did a presentation on Kubernetes once, put up a slide on Minikube, and five slides later one of the physicists in the room said, btw, I have it working on my Mac (personal laptop). Not trying to slam devstack. It really is a good piece of software, but it still has a ways to go to get to that point. And lastly, Minikube's default bootstrapper these days is kubeadm, so the Kubernetes you get to develop against is REALLY close to one you could deploy yourself at scale in VMs or on bare metal. The tools/containers it uses are byte-identical; they will behave the same. Devstack is very different from most production deployments.

> No compiling anything or building containers. That is what I mean when I say they have a product.

What does devstack compile?

By "compile" are you referring to downloading code from git 
repositories? Or are you referring to the fact that with kubeadm you are 
downloading a Go binary that hides the downloading and installation of 
all the other Kubernetes images for you [1]?

KF> The Go binary orchestrates a bit, but for the most part you get one system package installed (or one statically linked binary): the kubelet. From there, you switch to using prebuilt containers for all the other services. Those binaries have been through a build/test/release pipeline and are guaranteed to be the same on every node you install them on. It is easy to run a deployment on your test cluster and ensure it works the same way on your production system. You can do the same with, say, RPMs, but then you need to build plumbing to mirror your RPMs, plumbing to promote from testing to production, etc. Then you have to configure all the nodes not to accidentally pull from a remote RPM mirror; some system updates try really hard to re-enable that. :/ K8s gives you easy testing/promotion through the way it tags things and prebuilds them for you. So you just tweak your k8s version and off you go. You don't even have to mirror if you don't want to. Lower barrier to entry there.
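
KF> To illustrate (a rough Go sketch only; the helper below is made up, not the real kubeadm code, though the images.go file Jay links below boils down to the same pattern): every control-plane component resolves to a registry path plus a version tag, so promotion is just reusing an already-built, immutable tag.

package main

import "fmt"

// controlPlaneImage is a hypothetical helper mirroring the idea in
// kubeadm's images.go: component name + version tag = immutable image.
// (The real code also handles arch suffixes, CI builds, etc.)
func controlPlaneImage(repo, component, version string) string {
	return fmt.Sprintf("%s/%s:%s", repo, component, version)
}

func main() {
	// "Promoting" a tested release to production means pointing
	// production at the same tag; no rebuilds, no RPM mirror plumbing.
	for _, c := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
		fmt.Println(controlPlaneImage("k8s.gcr.io", c, "v1.11.0"))
	}
}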

[1] 
https://github.com/kubernetes/kubernetes/blob/8d73473ce8118422c9e0c2ba8ea669ebbf8cee1c/cmd/kubeadm/app/cmd/init.go#L267
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/images/images.go#L63

> Other vendors provide their own builds, release tooling, or config management integration, which is why that list is so big. But it is up to the operators to decide the route, and because k8s has a very clean, easy, low bar for entry, it sets the bar for the other products to be even better.

I fail to see how devstack and kubeadm aren't very much in the same vein.

KF> You've switched from comparing devstack and Minikube to comparing devstack and kubeadm. Kubeadm is plumbing for building dev, test, and production systems. Devstack is very much only ever intended for the dev phase, and, like I said before, a little more focused on developing OpenStack itself than on developing code that runs in it. Minikube is really intended to let devs develop software to run inside k8s, behaving as much as possible like a full k8s cluster.

> The reason people started adopting clouds was because it was very quick to request resources. One of clouds' features (some say drawbacks) vs. VM farms has been ephemeralness. You build applications on top of VMs to provide a service to your users. Great. Things like containers, though, launch much faster and generally have more functionality for plumbing them together than VMs do.

Not sure what this has to do with what we've been discussing.

KF> We'll skip it for now... Maybe it will become a little clearer in the context of other responses below.

> So these days containers are out-clouding VMs at this use case. So, does Nova continue to be the cloudy VM, or does it go for the more production-VM use case like oVirt and VMware?

"production VM" use case like oVirt or VMWare? I don't know what that 
means. You mean "a GUI-based VM management system"?

KF> Pets vs. cattle. VMware's/oVirt's primary focus is on being feature-rich around keeping pets alive/happy/responsive/etc.: live migration, CPU/memory hot-plugging...

> Without strong orchestration of some kind on top, the cloudy use case is also really hard with Nova. So we keep getting into this tug of war between people wanting VMs as building blocks of cloud-scale applications and those who want Nova to be an oVirt/VMware replacement. Honestly, it's not doing either use case great, because it can't decide what to focus on.

No, that's not at all what I've been saying. I continue to see Nova (and 
other services in its layer of OpenStack) as a building block *for 
higher-level systems like Kubernetes or Heat*. There is a reason that 
Kubernetes has an OpenStack cloud provider plugin, and that plugin makes 
imperative Nova, Neutron, Cinder, and Keystone API calls.
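
To make that concrete, here is a minimal sketch (assuming v1-style calls to the gophercloud library, which the plugin uses; region, name, image, and flavor values are made up) of the kind of imperative Nova call such a plugin makes. Note there is no orchestration here; any convergence or retry logic lives in the caller.

package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
)

func main() {
	// Authenticate against Keystone using the standard OS_* environment
	// variables (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, ...).
	opts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		panic(err)
	}
	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		panic(err)
	}

	// Look up the Nova endpoint in the Keystone service catalog.
	compute, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{
		Region: "RegionOne", // assumption: region name varies per cloud
	})
	if err != nil {
		panic(err)
	}

	// One imperative call: "give me a server". Nova hands out the
	// resource and gets out of the way.
	server, err := servers.Create(compute, servers.CreateOpts{
		Name:      "k8s-node-0",      // hypothetical node name
		ImageRef:  "IMAGE-UUID-HERE", // hypothetical; resolved via Glance
		FlavorRef: "FLAVOR-ID-HERE",  // hypothetical
	}).Extract()
	if err != nil {
		panic(err)
	}
	fmt.Println("booted server:", server.ID)
}

That is the sense in which Nova is low-level plumbing for higher-level systems.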

KF> Yeah, sorry, that wasn't aimed at you specifically. I've talked to the Nova team many times over the years and saw that back and forth happening. Some folks wanted more and more pet-like features pushed in, and others wanted to optimize more for cattle. It's in kind of an uncomfortable middle ground now, I think. No one ever defined specifically what was in/out of scope for Nova. Same issue as defining what OpenStack is, just at the Nova level.

> oVirt is a better VMware alternative today than Nova is; I say that having used it. It focuses specifically on the same use cases. Nova is better at being a cloud than oVirt and VMware, but lags a long way behind Azure/AWS when it comes to having apps self-host on it (progress is finally being made again, but it's slow).

I'm not particularly interested in having Nova be a free VMWare 
replacement -- or in trying to be whatever oVirt has become.

KF> I mention it because I don't think all of the Nova devs feel the same.

Some might see usefulness in these things, and as long as the feature 
requests to Nova don't cause Nova to become something other than 
low-level compute plumbing, I'm fine with that.

KF> I think it already has affected things. There are pet-ish features in it, and there are now so many features in Nova that the team is pushing back against new ones that could actually help the low-level compute plumbing use case.

> While some people only ever consider running Kubernetes on top of a cloud, some of us realize that maintaining both a cloud and a Kubernetes is unnecessary, and that things can be greatly simplified by running k8s on bare metal. That does make it a competitor to Nova as a platform for running workloads on.

What percentage of Kubernetes users deploy on baremetal (and continue to 
deploy on baremetal in production as opposed to just toying around with it)?

KF> I do not have metrics, just my own experience. I've seen several clusters now go from in-VM to bare metal, though, as it's very expensive to upgrade OpenStack and they really didn't need it anymore. Or the workload could be split between oVirt and Kubernetes on bare metal.

KF> Long term, though, if VMs become first-class citizens on k8s, I could see k8s doing both jobs easily.

> As k8s gains more multitenancy features, this trend will continue to grow, I think. OpenStack needs to be ready for when that becomes a thing.

OpenStack is already multi-tenant, having been designed as such from day 
one, with the exception of Ironic, which uses Nova to enable multi-tenancy.

KF> Yes, but at a really high cost.

What specifically are you referring to with "OpenStack needs to be ready"? 
Also, what specific parts of OpenStack are you referring to there?

KF> If something like k8s gains full multitenancy, then one of OpenStack's major remaining selling points vanishes, and the difference in operator overhead becomes even more pronounced: "Why pay the operator overhead of OpenStack when you could just do k8s?" OpenStack needs to pay down some of that operator-related technical debt before k8s gains multitenancy and maybe VM support. What I mean here is, users care about deploying workloads to their datacenter. They care that it is easy. I couldn't care less whether it is containers or VMs, provided it works well. The API k8s gives you to do so is very smooth and getting smoother. On the OpenStack side, it's been very bumpy and progressing very slowly. I fought for years to smooth it out, and the main road bumps are still there.

> Heat is a good start for an orchestration system, but it is hamstrung by being an optional component, by there still not being a way to securely download secrets to a VM from the secret store, by the secret store also being completely optional, etc. An app developer can't rely on any of it. :/ Heat is hamstrung by the same lack of blessing as so many other OpenStack services. You can't fix it until you fix that fundamental brokenness in OpenStack.

I guess I just fundamentally disagree that having a monolithic 
all-things-for-all-users application architecture and feature set is 
something that OpenStack should be.

KF> I'm not necessarily arguing for that... More like how the Linux kernel is monolithic and modular at the same time. You can customize it for all sorts of really strange hardware, with or without large chunks of things. BUT you can also just grab a prebuilt distro kernel and have it work on a lot of machines without issue.

KF> /me puts his app developer hat back on. As an app developer, you need a reliable base platform to target. If you can't rely on stuff like orchestration always being there, you have three choices: limit your customer base to only those that have the component installed (usually not acceptable); write everything yourself (expensive); or, if available, develop on a platform that gives you more out of the box (which is what is happening as app devs move quickly away from OpenStack to things like k8s; sorry, hard to say/hear).

There is a *reason* that Kubernetes jettisoned all the cloud provider 
code from its core. The reason is because setting up that base stuff is 
*hard* and that work isn't germane to what Kubernetes is (a container 
orchestration system, not a datacenter resource management system).

KF> Disagree on that one. CSI is happening at the same time, in the same way, for almost the same reasons. They jettisoned it because:
 * They are following more and more the philosophy of eating their own dogfood: you should be able to deploy parts of Kubernetes with Kubernetes.
 * Their API has finally become robust enough that it is reasonable to build out that part on top of it.
 * They got to the point where other pressing issues were solved and they could tackle kicking it out of tree.
 * Having it in tree was slowing down development/functionality. (This is the reason it is worth making things a little bit harder for the ops in exchange for the clear benefits.)

KF> Like I said before, I'm not necessarily saying all the code has to be wrapped up into a big ball, or that "plugins" are a bad thing. I think plugins are hugely important. But I am arguing that fewer things are probably better for operators, and that splitting everything into a million pieces without regard to that, or without a good reason to do so, is a kind of premature optimization.

> Heat is also hamstrung, as an orchestrator of existing APIs, by there being holes in those APIs.

I agree there are some holes in some of the APIs. Happy to work on 
plugging those holes as long as the holes are properly identified as 
belonging to the correct API and are not simply feature requests that 
would expand the scope of lower-level plumbing services like Nova.

KF> That's been the struggle I've always hit with OpenStack: getting the leads from each project involved to cooperatively decide where the heck an API belongs. IMO, the API belongs to OpenStack! Not to Nova or Neutron or Glance. OpenStack's project APIs are riddled with cases where we couldn't pick the right place off the bat. Lots of litter in the Nova API in particular. Then it went the other way: everyone is so afraid of adopting an API forever that they can't make a decision. K8s solved it by having a single K8s API, with the code equivalents of nova/glance/etc. picking things up as needed from the API server. The API is separated from which repo/service hosts the code.

KF> The real problem is that OpenStack does not have an API. :/ It only has projects that have APIs.
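
KF> For contrast, a quick sketch of what that single-API pattern looks like from the consumer side (assuming a current client-go; signatures have shifted between versions, and the kubeconfig path is made up). Any controller, in-tree or out, just watches the one API server:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig; the API server is the single
	// front door, no matter which binary implements a given feature.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A controller simply watches the API server and reacts; the API
	// contract is decoupled from whichever repo hosts the controller.
	watcher, err := clientset.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for event := range watcher.ResultChan() {
		fmt.Println("observed event:", event.Type)
	}
}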

> Think of OpenStack like a game console. The moment you make a component optional and it takes extra effort to obtain, few software developers target it, and rarely does anyone buy the add-ons, because there isn't software for them. Right now, just about everything in OpenStack is an add-on. That's a problem.

I don't have any game consoles nor do I develop software for them, so I 
don't really see the correlation here. That said, I'm 100% against a 
monolithic application approach, as I've mentioned before.

KF> Bad analogy then. Sorry. Hopefully the monolithic subject was addressed adequately above.

KF> Thanks,
KF> Kevin

Best,
-jay

> Thanks,
> Kevin
> 
> 
> ________________________________________
> From: Jay Pipes [jaypipes at gmail.com]
> Sent: Monday, July 02, 2018 4:13 PM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
> 
> On 06/27/2018 07:23 PM, Zane Bitter wrote:
>> On 27/06/18 07:55, Jay Pipes wrote:
>>> Above, I was saying that the scope of the *OpenStack* community is
>>> already too broad (IMHO). An example of projects that have made the
>>> *OpenStack* community too broad are purpose-built telco applications
>>> like Tacker [1] and Service Function Chaining. [2]
>>>
>>> I've also argued in the past that all distro- or vendor-specific
>>> deployment tools (Fuel, Triple-O, etc [3]) should live outside of
>>> OpenStack because these projects are more products and the relentless
>>> drive of vendor product management (rightfully) pushes the scope of
>>> these applications to gobble up more and more feature space that may
>>> or may not have anything to do with the core OpenStack mission (and
>>> have more to do with those companies' product roadmap).
>>
>> I'm still sad that we've never managed to come up with a single way to
>> install OpenStack. The amount of duplicated effort expended on that
>> problem is mind-boggling. At least we tried though. Excluding those
>> projects from the community would have just meant giving up from the
>> beginning.
> 
> You have to have motivation from vendors in order to achieve said single
> way of installing OpenStack. I gave up a long time ago on getting distros
> and vendors behind such an effort.
> 
> Where vendors see $$$, they will attempt to carve out value
> differentiation. And value differentiation leads to, well, differences,
> naturally.
> 
> And, despite what some might misguidedly think, Kubernetes has no single
> installation method. Their *official* setup/install page is here:
> 
> https://kubernetes.io/docs/setup/pick-right-solution/
> 
> It lists no fewer than *37* (!) different ways of installing Kubernetes,
> and I'm not even including anything listed in the "Custom Solutions"
> section.
> 
>> I think Thierry's new map, that collects installer services in a
>> separate bucket (that may eventually come with a separate git namespace)
>> is a helpful way of communicating to users what's happening without
>> forcing those projects outside of the community.
> 
> Sure, I agree the separate bucket is useful, particularly when paired
> with information that allows operators to know how stable and/or
> bleeding edge the code is expected to be -- you know, those "tags" that
> the TC spent time curating.
> 
>>>> So to answer your question:
>>>>
>>>> <jaypipes> zaneb: yeah... nobody I know who argues for a small stable
>>>> core (in Nova) has ever said there should be fewer higher layer
>>>> services.
>>>> <jaypipes> zaneb: I'm not entirely sure where you got that idea from.
>>>
>>> Note the emphasis on *Nova* above?
>>>
>>> Also note that when I've said that *OpenStack* should have a smaller
>>> mission and scope, that doesn't mean that higher-level services aren't
>>> necessary or wanted.
>>
>> Thank you for saying this, and could I please ask you to repeat this
>> disclaimer whenever you talk about a smaller scope for OpenStack.
> 
> Yes. I shall shout it from the highest mountains. [1]
> 
>> Because for those of us working on higher-level services it feels like
>> there has been a non-stop chorus (both inside and outside the project)
>> of people wanting to redefine OpenStack as something that doesn't
>> include us.
> 
> I've said in the past (on Twitter, can't find the link right now, but
> it's out there somewhere) something to the effect of "at some point,
> someone just needs to come out and say that OpenStack is, at its core,
> Nova, Neutron, Keystone, Glance and Cinder".
> 
> Perhaps this is what you were recollecting. I would use a different
> phrase nowadays to describe what I was thinking with the above.
> 
> I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are
> a definitive lower level of an OpenStack deployment. They represent a
> set of required integrated services that supply the most basic
> infrastructure for datacenter resource management when deploying OpenStack."
> 
> Note the difference in wording. Instead of saying "OpenStack is X", I'm
> saying "These particular services represent a specific layer of an
> OpenStack deployment".
> 
> Nowadays, I would further add something to the effect of "Depending on
> the particular use cases and workloads the OpenStack deployer wishes to
> promote, an additional layer of services provides workload orchestration
> and workflow management capabilities. This layer of services includes
> Heat, Mistral, Tacker, Service Function Chaining, Murano, etc".
> 
> Does that provide you with some closure on this feeling of "non-stop
> chorus" of exclusion that you mentioned above?
> 
>> The reason I haven't dropped this discussion is because I really want to
>> know if _all_ of those people were actually talking about something else
>> (e.g. a smaller scope for Nova), or if it's just you. Because you and I
>> are in complete agreement that Nova has grown a lot of obscure
>> capabilities that make it fiendishly difficult to maintain, and that in
>> many cases might never have been requested if we'd had higher-level
>> tools that could meet the same use cases by composing simpler operations.
>>
>> IMHO some of the contributing factors to that were:
>>
>> * The aforementioned hostility from some quarters to the existence of
>> higher-level projects in OpenStack.
>> * The ongoing hostility of operators to deploying any projects outside
>> of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the
>> Barbican vs. Castellan debate, where we can't even correct one of
>> OpenStack's original sins and bake in a secret store - something k8s
>> managed from day one - because people don't want to install another REST
>> API even over a backend that they'll already have to install anyway).
>> * The illegibility of public Nova interfaces to potential higher-level
>> tools.
> 
> I would like to point something else out here. Something that may not be
> pleasant to confront.
> 
> Heat's competition (for resources and mindshare) is Kubernetes, plain
> and simple.
> 
> Heat's competition is not other OpenStack projects.
> 
> Nova's competition is not Kubernetes (despite various people continuing
> to say that it is).
> 
> Nova is not an orchestration system. Never was and (as long as I'm
> kicking and screaming) never will be.
> 
> Nova's primary competition is:
> 
> * Stand-alone Ironic
> * oVirt and stand-alone virsh callers
> * Parts of VMWare vCenter [3]
> * MaaS in some respects
> * The *compute provisioning* parts of EC2, Azure, and GCP
> 
> This is why there is a Kubernetes OpenStack cloud provider plugin [4].
> 
> This plugin uses Nova [5] (which can potentially use Ironic), Cinder,
> Keystone and Neutron to deploy kubelets to act as nodes in a Kubernetes
> cluster and load balancer objects to act as the proxies that k8s itself
> uses when deploying Pods and Services.
> 
> Heat's architecture, template language and object constructs are in
> direct competition with Kubernetes' API and architecture, with the
> primary difference being a VM-centric [6] vs. a container-centric object
> model.
> 
> Heat's template language is similar to Helm's chart template YAML
> structure [7], and with Heat's evolution to the "convergence model",
> Heat's architecture actually got closer to Kubernetes' architecture:
> that of continually attempting to converge an observed state with a
> desired state.
> 
> So, what is Heat to do?
> 
> The hype and marketing machine is never-ending, I'm afraid. [8]
> 
> I'm not sure there's actually anything that can be done about this.
> Perhaps it is a fait accompli that Kubernetes/Helm will/has become
> synonymous with "orchestration of things". Perhaps not. I'm not an
> oracle, unfortunately.
> 
> Maybe the only thing that Heat can do to fend off the coming doom is to
> make a case that Heat's performance, reliability, feature set or
> integration with OpenStack's other services make it a better candidate
> for orchestrating virtual machine or baremetal workloads on an OpenStack
> deployment than Kubernetes is.
> 
> Sorry to be the bearer of bad news,
> -jay
> 
> [1] I live in Florida, though, which has no mountains. But, when I
> visit, say, North Carolina, I shall certainly shout it from their mountains.
> 
> [2] some would also say Castellan, Ironic and Designate belong here.
> 
> [3] Though VMWare is still trying to be everything that certain IT
> administrators ever needed, including orchestration, backup services,
> block storage pooling, high availability, quota management, etc etc
> 
> [4]
> https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack
> 
> [5]
> https://github.com/kubernetes/kubernetes/blob/92b81114f43f3ca74988194406957a5d1ffd1c5d/pkg/cloudprovider/providers/openstack/openstack.go#L377
> 
> [6] The fact that Heat started as a CloudFormation API clone gave it its
> VM-centricity.
> 
> [7]
> https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/index.md
> 
> [8] The Kubernetes machine has essentially decimated all the other
> "orchestration of things" projects' resources and mindshare, including a
> number of them that were very well architected, well coded, and well
> documented:
> 
> * Mesos with Marathon/Aurora
> * Rancher
> * OpenShift (you know, the original, original one...)
> * Nomad
> * Docker Swarm/Compose
> 

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

