[openstack-dev] [trove][all][tc] A proposal to rearchitect Trove
Fox, Kevin M
Kevin.Fox at pnnl.gov
Thu Jun 22 20:33:13 UTC 2017
No, I'm not necessarily advocating a monolithic approach.
I'm saying that they have decided to start with functionality and accept what's needed to get the task done. There aren't such strong walls between the various pieces of functionality (rbac/secrets/kubelet/etc.). They don't spawn off a whole new project just to add functionality; they do so only when needed. They also don't balk at one feature depending on another.
RBAC is important, so they implemented it. SSL cert management was important, so they added that. A feature that restricts secret downloads to only the physical nodes that need them can then reuse the RBAC system and the SSL cert management.
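To make that concrete, here's a minimal sketch of what that kind of reuse looks like on the Kubernetes side, using the standard kubernetes Python client. The namespace, secret, and role names are made up for the example; the point is that the restriction is just a new policy expressed through the existing RBAC objects, not a new service.

from kubernetes import client, config

# Assumes the standard "kubernetes" Python client; all names here
# (namespace "trove", secret "db-credentials") are illustrative only.
config.load_kube_config()  # or config.load_incluster_config() inside the cluster
rbac = client.RbacAuthorizationV1Api()

# A Role that only allows reading one specific secret in one namespace.
role = client.V1Role(
    metadata=client.V1ObjectMeta(name="read-db-credentials", namespace="trove"),
    rules=[client.V1PolicyRule(
        api_groups=[""],            # "" is the core API group, where Secrets live
        resources=["secrets"],
        resource_names=["db-credentials"],
        verbs=["get"],
    )],
)
rbac.create_namespaced_role(namespace="trove", body=role)
# A RoleBinding would then grant this Role to the service account used by
# the pods/nodes that actually need the secret; no new auth system required.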
Their SIGs are oriented more toward features/functionality (or categories thereof) than toward specific components: we need to do X, and X may involve changes to components A and B.
OpenStack now tends to start with A and B, and we try to work backwards towards implementing X, which is hard due to the strong walls and the unclear ownership of the feature. The general solution has been to create a new project C but not commit to C being in the core, so users can't depend on it; that hasn't proven to be a very successful pattern.
You're right, they are breaking up their code base as needed, like Nova did. I'm coming around to that being a pretty good approach for some things. Starting things is simpler: if a feature ends up not needing its own whole project, it doesn't get one; if it needs one, it gets one. The default isn't to start a whole new project with its own DB user, DB schema, API, scheduler, etc. And the project might not end up with its daemons split up in exactly the way you would expect if you had prematurely broken off a project without knowing exactly how it would integrate with everything else.
Maybe the porcelain API that's been discussed for a while is part of the solution: initial functionality can be prototyped/started there and broken off into separate projects and moved around as needed, without the user needing to know where it ends up.
You're right that OpenStack's scope is much greater, and I think that makes the commons even more important. If OpenStack doesn't have a solid base, every project has to re-implement its own base. That takes a huge amount of manpower all around; it's not sustainable.
I guess we've gotten pretty far away from discussing Trove at this point.
Thanks,
Kevin
________________________________________
From: Jay Pipes [jaypipes at gmail.com]
Sent: Thursday, June 22, 2017 10:05 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove
On 06/22/2017 11:59 AM, Fox, Kevin M wrote:
> My $0.02.
>
> That view of dependencies is why Kubernetes development is outpacing OpenStack's and why some users are leaving, IMO. Not trying to be mean here, just trying to shine some light on this issue.
>
> Kubernetes at its core has essentially something kind of equivalent to keystone (k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses), heat with convergence (deployments/daemonsets/etc), barbican (secrets), designate (kube-dns), and octavia (kube-proxy, svc, ingress) in one unit. Ops don't have to work hard to get all of it, users can assume it's all there, and devs don't have many silos to cross to implement features that touch multiple pieces.
I think it's kind of hysterical that you're advocating a monolithic
approach when the thing you're advocating (k8s) is all about enabling
non-monolithic microservices architectures.
Look, the fact of the matter is that OpenStack's mission is larger than
that of Kubernetes. And to say that "Ops don't have to work hard" to get
and maintain a Kubernetes deployment (which, frankly, tends to be dozens
of Kubernetes deployments, one for each tenant/project/namespace) is
completely glossing over the fact that by abstracting away the
infrastructure (k8s' "cloud provider" concept), Kubernetes developers
simply get to ignore some of the hardest and trickiest parts of operations.
So, let's try to compare apples to apples, shall we?
It sounds like the end goal that you're advocating -- more than anything
else -- is an easy-to-install package of OpenStack services that
provides a Kubernetes-like experience for application developers.
I 100% agree with that goal. 100%.
But pulling Neutron, Cinder, Keystone, Designate, Barbican, and Octavia
back into Nova is not the way to do that. You're trying to solve a
packaging and installation problem with a code structure solution.
In fact, if you look at the Kubernetes development community, you see
the *opposite* direction being taken: they have broken out and are
actively breaking out large pieces of the Kubernetes repository/codebase
into separate repositories and addons/plugins. And this is being done to
*accelerate* development of Kubernetes in very much the same way that
splitting services out of Nova was done to accelerate the development of
those various pieces of infrastructure code.
> Having this core functionality combined has allowed them to land features that are really important to users but that have proven difficult for OpenStack to deliver because of the silos. OpenStack's general pattern has been: stand up a new service for a new feature, then no one wants to depend on it, so it's ignored and each silo reimplements a lesser version of it itself.
I disagree. I believe the reason Kubernetes is able to land features
that are "really important to users" is primarily due to the following
reasons:
1) The Kubernetes technical leadership strongly resists pressure from
vendors to add yet-another-specialized-feature to the codebase. This
ability to say "No" pays off in spades with regards to stability and focus.
2) The mission of Kubernetes is much smaller than OpenStack's. If the
OpenStack community were able to say "OpenStack is a container
orchestration system", and not "OpenStack is a ubiquitous open source
cloud operating system", we'd probably be able to deliver features in a
more focused fashion.
> The OpenStack commons then continues to suffer.
>
> We need to stop this destructive cycle.
>
> OpenStack needs to figure out how to increase its commons, both internally and externally. etcd as a common service was a step in the right direction.
>
> I think k8s needs to be another common service all the others can rely on. That could greatly simplify the rest of the OpenStack projects, as a lot of their common functionality would no longer have to be implemented in each project.
I don't disagree with the goal of being able to rely on Kubernetes for
many things. But relying on Kubernetes doesn't solve the "I want some
easy-to-install infrastructure" problem. Nor does it solve the types of
advanced networking scenarios that the NFV community requires.
> We also need a way to break down the silo walls and allow more cross-project collaboration on features. I fear the new push for letting projects run standalone will make this worse, not better, further fracturing OpenStack.
Perhaps you are referring to me with the above? As I said on Twitter,
"Make your #OpenStack project usable by and useful for things outside of
the OpenStack ecosystem. Fewer deps. Do one thing well. Solid APIs."
I don't think that the above leads to "further fracturing OpenStack". I
think it leads to solid, reusable components.
Best,
-jay
> Thanks,
> Kevin
> ________________________________________
> From: Thierry Carrez [thierry at openstack.org]
> Sent: Thursday, June 22, 2017 12:58 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove
>
> Fox, Kevin M wrote:
>> [...]
>> If you build a Tessmaster clone just to do MariaDB, then you share nothing with the other communities and have to reinvent the wheel, yet again. Operators' load increases because the tool doesn't function like other tools.
>>
>> If you rely on a container orchestration engine that's already cross-cloud and can be easily deployed by the user or the cloud operator, and fill in the gaps with what Trove wants to support (easy management of DBs), you get to reuse a lot of the commons, and the user's slight extra investment in dealing with the bit of extra plumbing allows other things to also be easily added to their cluster. It's very rare that a user would need to deploy/manage only a database. The net load on the operator decreases, not increases.
>
> I think the user-side tool could totally deploy on Kubernetes clusters
> -- if that were the only possible target, that would make it a Kubernetes
> tool more than an open infrastructure tool, but that's definitely a
> possibility. I'm not sure work is needed there, though; there are already
> tools (or charts) doing that?
>
> For a server-side approach where you want to provide a DB-provisioning
> API, I fear that making the functionality depend on K8s would mean
> TroveV2/Hoard would depend not only on Heat and Nova, but also on
> something that deploys a Kubernetes cluster (Magnum?), which would
> likely hurt its adoption (and reusability in simpler setups). Since
> databases would work perfectly well in VMs, it feels like a
> gratuitous dependency addition?
>
> We generally need to be very careful about creating dependencies between
> OpenStack projects. On one side there are base services (like Keystone)
> that we said it was alright to depend on, but depending on anything else
> is likely to reduce adoption. Magnum adoption suffers from its
> dependency on Heat. If Heat starts depending on Zaqar, we make the
> problem worse. I understand it's a hard trade-off: you want to reuse
> functionality rather than reinvent it in every project... we just need
> to recognize the cost of doing that.
>
> --
> Thierry Carrez (ttx)
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev