<div dir="ltr"><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><br><br></div></div>
<br><div class="gmail_quote">On Thu, Jun 22, 2017 at 4:38 PM, Zane Bitter <span dir="ltr"><<a href="mailto:zbitter@redhat.com" target="_blank">zbitter@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">(Top posting. Deal with it ;)<br>
<br></blockquote><div><br><div style="font-family:courier new,monospace;display:inline" class="gmail_default">Yes, please keep the conversation going; top posting is fine, the k8s issue isn't 'off topic'.<br></div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
You're both right!<br>
<br>
Making OpenStack monolithic is not the answer. In fact, rearranging Git repos has nothing to do with the answer.<br>
<br>
But back in the day we had a process (incubation) for adding stuff to OpenStack so that it made sense to depend on it being there. It was a highly imperfect process. We got rid of that process with the big tent reform, but didn't really replace it with anything at all. Tags never evolved into a replacement, as I had hoped they would.<br>
<br>
So now we have a bunch of things that are integral to building a "Kubernetes-like experience for application developers" - secret storage, DNS, load balancing, asynchronous messaging - that exist but are not in most clouds. (Not to mention others like fine-grained authorisation control that are completely MIA.)<br>
<br>
Instead of trying to drive adoption of all of that stuff, we are either just giving up or reinventing bits of it, badly, in multiple places. The biggest enemy of "do one thing and do it well" is when a thing that you need to do was chosen by a project in another silo as their "one thing", but you don't want to just depend on that project because it's not widely adopted.<br>
<br>
I'm not saying this is an easy problem. It's something that the proprietary public cloud providers don't face: if you have only one cloud, you can just design everything to be as tightly integrated as it needs to be. When you have multiple clouds and the components are optional, you have to do a bit more work. But if those components are rarely used at all, then you lose the feedback loop that helps create a single polished implementation, and everything else has to choose between not integrating or implementing just the bits it needs itself, so that the benefits of whatever smaller feedback loop does manage to form are contained entirely within the silo. OpenStack is arguably the only cloud project that has to deal with this. (Azure is also going into the same market, but they already have the feedback loop set up because they run their own public cloud built from the components.) Figuring out how to empower the community to solve this problem is our #1 governance concern, IMHO.<br>
<br>
In my view, one of the keys is to stop thinking of OpenStack as an abstraction layer over a bunch of vendor technologies. If you think of Nova as an abstraction layer over libvirt/Xen/HyperV, Keystone as an abstraction layer over LDAP/ActiveDirectory, and Cinder/Neutron as an abstraction layer over a bunch of storage/network vendors, then two things will happen. The first is unrelenting "pressure from vendors to add yet-another-specialized-feature to the codebase" that you won't be able to push back against, because you can't point to a competing vision. And the second is that you will never build an integrated, application-centric cloud, because the integration bit needs to happen at the layer above the backends we are abstracting.<br>
<br>
We need to think of those things as the compute, authn, block storage and networking components of an integrated, application-centric cloud. And to remember that *by no means* are those the only components it will need - "The mission of Kubernetes is much smaller than OpenStack"; there's a lot we need to do.<br>
<br>
So no, the strength of k8s isn't in having a monolithic git repo (and I don't think that's what Kevin was suggesting). That's actually a slow-motion train-wreck waiting to happen. Its strength is being able to do all of this stuff and still be easy enough to install, so that there's no question of trying to build bits of it without relying on shared primitives.<br>
<br>
cheers,<br>
Zane.<br>
<br>
On 22/06/17 13:05, Jay Pipes wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 06/22/2017 11:59 AM, Fox, Kevin M wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
My $0.02.<br>
<br>
That view of dependencies is why Kubernetes development is outpacing OpenStack's and some users are leaving, IMO. Not trying to be mean here, just trying to shine some light on this issue.<br>
<br>
Kubernetes at its core has essentially something kind of equivalent to Keystone (k8s RBAC), Nova (container mgmt), Cinder (pv/pvc/storageclasses), Heat with convergence (deployments/daemonsets/etc), Barbican (secrets), Designate (kube-dns), and Octavia (kube-proxy, svc, ingress) in one unit. Ops don't have to work hard to get all of it, users can assume it's all there, and devs don't have many silos to cross to implement features that touch multiple pieces.<br>
</blockquote>
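The "one unit" point above can be sketched with a single Kubernetes manifest: workload management, service discovery/load balancing, and secret storage all sit behind one API and one credential. This is an illustrative sketch only; the names (myapp, the password value, etc.) are made up:

```yaml
# One `kubectl apply -f` of this file touches three concerns that map,
# roughly, to separate OpenStack services. All names are hypothetical.
apiVersion: v1
kind: Secret                      # Barbican-like secret storage
metadata:
  name: myapp-db-password
stringData:
  password: not-a-real-password
---
apiVersion: apps/v1
kind: Deployment                  # Nova/Heat-like workload management
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx
        envFrom:
        - secretRef:
            name: myapp-db-password   # secret injected without a second API
---
apiVersion: v1
kind: Service                     # kube-dns + kube-proxy: Designate/Octavia-like
metadata:
  name: myapp                     # resolvable as myapp.<namespace>.svc
spec:
  selector:
    app: myapp
  ports:
  - port: 80
```

The point is not the YAML itself but that one credential and one apply cover what would otherwise be calls to several independently deployed, optionally installed services.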
<br>
I think it's kind of hysterical that you're advocating a monolithic approach when the thing you're advocating (k8s) is all about enabling non-monolithic microservices architectures.<br>
<br>
Look, the fact of the matter is that OpenStack's mission is larger than that of Kubernetes. And to say that "Ops don't have to work hard" to get and maintain a Kubernetes deployment (which, frankly, tends to be dozens of Kubernetes deployments, one for each tenant/project/namespace) is completely glossing over the fact that by abstracting away the infrastructure (k8s' "cloud provider" concept), Kubernetes developers simply get to ignore some of the hardest and trickiest parts of operations.<br>
<br>
So, let's try to compare apples to apples, shall we?<br>
<br>
It sounds like the end goal that you're advocating -- more than anything else -- is an easy-to-install package of OpenStack services that provides a Kubernetes-like experience for application developers.<br>
<br>
I 100% agree with that goal. 100%.<br>
<br>
But pulling Neutron, Cinder, Keystone, Designate, Barbican, and Octavia back into Nova is not the way to do that. You're trying to solve a packaging and installation problem with a code structure solution.<br>
<br>
In fact, if you look at the Kubernetes development community, you see the *opposite* direction being taken: they have broken out and are actively breaking out large pieces of the Kubernetes repository/codebase into separate repositories and addons/plugins. And this is being done to *accelerate* development of Kubernetes in very much the same way that splitting services out of Nova was done to accelerate the development of those various pieces of infrastructure code.<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Having this core functionality combined has allowed them to land features that are really important to users but have proven difficult for OpenStack to deliver because of the silos. OpenStack's general pattern has been: stand up a new service for a new feature; then no one wants to depend on it, so it's ignored and each silo reimplements a lesser version of it themselves.<br>
</blockquote>
<br>
I disagree. I believe the reason Kubernetes is able to land features that are "really important to users" is primarily due to the following reasons:<br>
<br>
1) The Kubernetes technical leadership strongly resists pressure from vendors to add yet-another-specialized-feature to the codebase. This ability to say "No" pays off in spades with regards to stability and focus.<br>
<br>
2) The mission of Kubernetes is much smaller than OpenStack's. If the OpenStack community were able to say "OpenStack is a container orchestration system", and not "OpenStack is a ubiquitous open source cloud operating system", we'd probably be able to deliver features in a more focused fashion.<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
The OpenStack commons then continues to suffer.<br>
<br>
We need to stop this destructive cycle.<br>
<br>
OpenStack needs to figure out how to increase its commons. Both internally and externally. etcd as a common service was a step in the right direction.<br>
<br>
I think k8s needs to be another common service all the others can rely on. That could greatly simplify the rest of the OpenStack projects as a lot of its functionality no longer has to be implemented in each project.<br>
</blockquote>
<br>
I don't disagree with the goal of being able to rely on Kubernetes for many things. But relying on Kubernetes doesn't solve the "I want some easy-to-install infrastructure" problem. Nor does it solve the types of advanced networking scenarios that the NFV community requires.<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
We also need a way to break down the silo walls and allow more cross project collaboration for features. I fear the new push for letting projects run standalone will make this worse, not better, further fracturing OpenStack.<br>
</blockquote>
<br>
Perhaps you are referring to me with the above? As I said on Twitter, "Make your #OpenStack project usable by and useful for things outside of the OpenStack ecosystem. Fewer deps. Do one thing well. Solid APIs."<br>
<br>
I don't think that the above leads to "further fracturing OpenStack". I think it leads to solid, reusable components.<br>
<br>
Best,<br>
-jay<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Thanks,<br>
Kevin<br>
________________________________________<br>
From: Thierry Carrez [<a href="mailto:thierry@openstack.org" target="_blank">thierry@openstack.org</a>]<br>
Sent: Thursday, June 22, 2017 12:58 AM<br>
To: <a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a><br>
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove<br>
<br>
Fox, Kevin M wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
[...]<br>
If you build a Tessmaster clone just to do mariadb, then you share nothing with the other communities and have to reinvent the wheel yet again. Operators' load increases because the tool doesn't function like other tools.<br>
<br>
If you rely on a container orchestration engine that's already cross-cloud and can be easily deployed by a user or cloud operator, and fill in the gaps with what Trove wants to support (easy management of DBs), you get to reuse a lot of the commons, and the user's slight increase in investment in dealing with the bit of extra plumbing allows other things to also be easily added to their cluster. It's very rare that a user would need to deploy/manage only a database. The net load on the operator decreases, not increases.<br>
</blockquote>
<br>
I think the user-side tool could totally deploy on Kubernetes clusters<br>
-- if that were the only possible target, it would make it a Kubernetes<br>
tool more than an open infrastructure tool, but that's definitely a<br>
possibility. I'm not sure work is needed there, though; there are already<br>
tools (or charts) doing that?<br>
<br>
For a server-side approach where you want to provide a DB-provisioning<br>
API, I fear that making the functionality depend on K8s would mean<br>
TroveV2/Hoard would not only depend on Heat and Nova, but also on<br>
something that would deploy a Kubernetes cluster (Magnum?), which would<br>
likely hurt its adoption (and reusability in simpler setups). Since<br>
databases work perfectly well in VMs, it feels like a gratuitous<br>
dependency addition?<br>
<br>
We generally need to be very careful about creating dependencies between<br>
OpenStack projects. On one side there are base services (like Keystone)<br>
that we said it was alright to depend on, but depending on anything else<br>
is likely to reduce adoption. Magnum adoption suffers from its<br>
dependency on Heat. If Heat starts depending on Zaqar, we make the<br>
problem worse. I understand it's a hard trade-off: you want to reuse<br>
functionality rather than reinvent it in every project... we just need<br>
to recognize the cost of doing that.<br>
<br>
-- <br>
Thierry Carrez (ttx)<br>
<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
<br>
</blockquote></div><br></div></div>