[openstack-dev] [ptg] Simplification in OpenStack
Boris Pavlovic
boris at pavlovic.me
Thu Sep 14 17:01:22 UTC 2017
Jay,
OK, I'll bite.
This doesn't sound like a constructive discussion. Bye Bye.
Best regards,
Boris Pavlovic
On Thu, Sep 14, 2017 at 8:50 AM, Jay Pipes <jaypipes at gmail.com> wrote:
> OK, I'll bite.
>
> On 09/13/2017 08:56 PM, Boris Pavlovic wrote:
>
>> Jay,
>>
>> All that you say exactly explains the reason why more and more companies
>> are leaving OpenStack.
>>
>
> All that I say? The majority of what I was "saying" was actually asking
> you to back up your statements with actual proof points instead of making
> wild conjectures.
>
>> Companies and end users care only about their own concerns and how they
>> can get their job done. They want something they can run and support
>> easily and that resolves their problems.
>>
>
> No disagreement from me. That said, I fail to see what the above statement
> has to do with anything I wrote.
>
>> They initially think that it's a good idea to take OpenStack as a
>> framework and build a sort of product on top of it because it's so open
>> and large and everybody uses it...
>>
>
> End users of OpenStack don't "build sort of product on top". End users of
> OpenStack call APIs or use Horizon to launch VMs, create networks, volumes,
> and whatever else those end users need for their own use cases.
>
>> Soon they understand that OpenStack has very complicated operations,
>> because it's not designed to be a product but rather a framework, and
>> that the complexity of running OpenStack is similar to developing an
>> in-house solution. As time goes by they have only a few options: move to
>> a public cloud or some other private cloud solution...
>>
>
> Deployers of OpenStack use the method of installing and configuring
> OpenStack that best matches their cultural fit, experience, and level of
> comfort with underlying technologies and vendors (packages vs. source vs.
> images, using a vendor distribution vs. going it alone, Chef vs. Puppet vs.
> Ansible vs. SaltStack vs. Terraform, etc). The way they configure OpenStack
> services is entirely dependent on the use cases they wish to support for
> their end users. And, to repeat myself, there is NO SINGLE USE CASE for
> infrastructure services like OpenStack. Therefore there is zero chance for
> a "standard deployment" of OpenStack becoming a reality.
>
> Just like there are myriad ways of deploying and configuring OpenStack,
> there are myriad ways of deploying and configuring k8s. Why? Because
> deploying and configuring highly distributed systems is a hard problem to
> solve. And maintaining and operating those systems is an even harder
> problem to solve.
>
>> We as a community can continue saying that the current OpenStack
>> approach is the best
>>
>
> Nobody is saying that the current OpenStack approach is the best. I
> certainly have never said this. All that I have asked is that you actually
> back up your statements with proof points that demonstrate how and why a
> different approach to building software will lead to specific improvements
> in quality or user experience.
>
>> and keep losing customers/users/community, or change something
>> drastically, like bringing technical leadership to the OpenStack
>> Foundation that is going to act like a benevolent dictator who focuses
>> OpenStack's effort on shrinking use cases, redesigning the architecture,
>> and moving in the right direction...
>>
>
> What *specifically* is the "right direction" for OpenStack to take?
> Please, as I asked you in the original response, provide actual details
> other than "we should have a monolithic application". Provide an argument
> as to how and why *your* direction is "right" for every user of OpenStack.
>
> When you say "technical leadership", what specifically are you wanting to
> see?
>
>
>> I know this all sounds like a big change, but let's be honest, the
>> current situation doesn't look healthy...
>> By the way, almost all successful open source projects have a benevolent
>> dictator, and everybody is OK with that being how things work...
>>
>
> Who is the benevolent dictator of k8s? Who is the benevolent dictator of
> MySQL? Of PostgreSQL? Of etcd?
>
> You have a particularly myopic view of what "successful" is for open
> source, IMHO.
>
>> Awesome news. I will keep this in mind when users (like GoDaddy) ask
>> Nova to never break anything ever and keep behaviour like scheduler
>> retries that represent giant technical debt.
>>
>> I am writing here on my behalf (using my personal email, if you haven't
>> seen), are we actually Open Source? or Enterprise Source?
>>
>> Moreover, I don't think that what you say is going to be an issue for
>> GoDaddy, at least not soon, because we still can't upgrade; it's an
>> NP-complete problem (even if you run just the core projects), which is
>> what my email was about, and I have seen the same stories in a bunch of
>> other companies.....
>>
>
> You continue to speak in hyperbole and generalizations. What
> *specifically* about your recommendations will improve the upgradability
> and upgrade story for OpenStack?
>
>> Yes, let's definitely go the opposite direction of microservices and
>> loosely coupled domains which is the best practices of software
>> development over the last two decades. While we're at it, let's
>> rewrite OpenStack projects in COBOL.
>>
>> I really don't want to answer this provocation, because it shifts the
>> focus away from the major topic. But I really can't stop myself ;)
>>
>> - There is no silver bullet in programming. For example, would Git or
>> Linux be better if they were written using a microservices approach?
>>
>
> I am fully aware that there is no silver bullet in programming. That was
> actually my entire point. It is you that continues to espouse various
> opinions that imply that there *is* some sort of silver bullet solution to
> OpenStack's problems.
>
> You imply that monolithic architecture will magically solve problems
> inherent in highly distributed systems.
>
> You imply that having a benevolent dictator will magically result in a
> productized infrastructure platform that meets everyone's needs.
>
> And you imply that using a single deployment/packaging solution (Docker)
> will magically solve all issues with upgrades.
>
> Please answer the questions in my original response with some specific
> details.
>
> Thanks
> -jay
>
> Best regards,
>> Boris Pavlovic
>>
>>
>> On Wed, Sep 13, 2017 at 10:44 AM, Jay Pipes <jaypipes at gmail.com> wrote:
>>
>> On 09/12/2017 06:53 PM, Boris Pavlovic wrote:
>>
>> Mike,
>>
>> Great initiative; unfortunately I wasn't able to attend it,
>> but I have some thoughts...
>> You can't simplify OpenStack just by fixing the few issues that are
>> mostly described in the etherpad..
>>
>> The TC should work on shrinking the OpenStack use cases and moving
>> towards a complete product (boxed) solution instead of a bunch of
>> barely related pieces..
>>
>>
>> OpenStack is not a product. It's a collection of projects that
>> represent a toolkit for various cloud-computing functionality.
>>
>> *Simple things to improve: *
>> /This is going to allow the community to work together, actually get
>> feedback in a standard way, and incrementally improve quality./
>>
>> 1) There should be one and only one:
>> 1.1) deployment/packaging (maybe Docker) and upgrade mechanism used
>> by everybody
>>
>>
>> Good luck with that :) The likelihood of the deployer/packager
>> community agreeing on a single solution is zero.
>>
>> 1.2) monitoring/logging/tracing mechanism used by everybody
>>
>>
>> Also close to zero chance of agreeing on a single solution. Better
>> to focus instead on ensuring various service projects are
>> monitorable and transparent.
>>
>> 1.3) way to configure all services (e.g. k8 etcd way)
>>
>>
>> Are you referring to the way to configure k8s services or the way to
>> configure/setup an *application* that is running on k8s? If the
>> former, then there is *not* a single way of configuring k8s
>> services. If the latter, there isn't a single way of configuring
>> that either. In fact, despite Helm being a popular new entrant to
>> the k8s application package format discussion, k8s itself is
>> decidedly *not* opinionated about how an application is configured.
>> Use a CMDB, use Helm, use env variables, use confd, use whatever.
>> k8s doesn't care.
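
The list of equally valid options above can be made concrete. A minimal
sketch of the plain-environment-variable approach (all variable names here
are illustrative, not any real OpenStack or k8s convention):

```python
import os

def load_config(environ=os.environ):
    """Read service settings from environment variables, with defaults.

    On k8s these variables would typically be injected via the pod spec
    or a ConfigMap, but the service itself stays agnostic about that.
    """
    return {
        "bind_host": environ.get("SVC_BIND_HOST", "0.0.0.0"),
        "bind_port": int(environ.get("SVC_BIND_PORT", "8080")),
        "debug": environ.get("SVC_DEBUG", "false").lower() == "true",
    }
```

The point Jay makes holds here: nothing in this pattern is specific to
k8s; the same service could be configured by confd, Helm-templated
manifests, or a CMDB writing the same variables.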
>>
>> 2) Projects must have a standardized interface that allows them to
>> be used in the same way.
>>
>>
>> Give examples of services that communicate over *non-standard*
>> interfaces. I don't know of any.
>>
>> 3) Testing & R&D should be performed only against this standard
>> deployment
>>
>>
>> Sorry, this is laughable. There will never be a standard deployment
>> because there are infinite use cases that infrastructure supports.
>> *Your* definition of what works for GoDaddy is decidedly different
>> from someone else's definition of what works for them.
>>
>> *Hard things to improve: *
>>
>> OpenStack projects were split in a far from ideal way, which leads
>> to the bunch of gaps that we have now:
>> 1.1) Code & functional duplication: Quotas, Schedulers,
>> Reservations, Health checks, Logging, Tracing, ....
>>
>>
>> There is certainly code duplication in some areas, yes.
>>
>> 1.2) Non-optimal workflows (booting a VM takes 400 DB requests)
>> because data is stored across Cinder, Nova, Neutron....
>>
>>
>> Sorry, I call bullshit on this. It does not take 400 DB requests to
>> boot a VM. Also: the DB is not at all the bottleneck in the VM
>> launch process. You've been saying it is for years with no
>> justification to back you up. Pointing to a Rally scenario that
>> doesn't reflect a real-world usage of OpenStack services isn't useful.
>>
>> 1.3) Lack of resources (as every project is doing the same work on
>> the same parts again and again)
>>
>>
>> Provide specific examples please.
>>
>> What we can do:
>>
>> *1) Simplify internal communication *
>> 1.1) Instead of AMQP for internal communication inside projects
>> use just HTTP, load balancing & retries.
>>
>>
>> Prove to me that this would solve a problem. First describe what the
>> problem is, then show me that using AMQP is the source of that
>> problem, then show me that using HTTP requests would solve that
>> problem.
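
For reference, the "HTTP, load balancing & retries" proposal being debated
here amounts to something like the following minimal retry helper (a
hedged sketch; the function name and delay values are illustrative, not
any OpenStack API):

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(); on failure, retry with exponential backoff.

    Returns fn()'s result, or re-raises the last exception once
    all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the error
            time.sleep(base_delay * (2 ** attempt))
```

Note that this sketch only shows client-side retries; it says nothing
about the delivery, fan-out, and queuing semantics that AMQP provides and
that Jay is asking to see accounted for.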
>>
>> *2) Use API Gateway pattern *
>> 2.1) Provide one IP address with one client for the high-level API
>> 2.2) Allows a significant reduction of load on Keystone, because
>> tokens are checked only in the API gateway
>> 2.3) Simplifies communication between projects (they are now in a
>> trusted network, no need to check tokens)
>>
>>
>> Why is this a problem for OpenStack projects to deal with? If you
>> want a single IP address for all APIs that your users consume, then
>> simply deploy all the public-facing services on a single set of web
>> servers and make each service's root endpoint be a subresource on
>> the root IP/DNS name.
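
Jay's suggestion of making each service's root endpoint a subresource
under one IP/DNS name can be sketched as a longest-prefix path router
(the prefixes and service names below are illustrative only):

```python
def make_router(services):
    """Build a longest-prefix-match router over a {prefix: service} map.

    route(path) returns (service, remaining_path); it raises KeyError
    if no registered prefix matches.
    """
    def route(path):
        # Try longer prefixes first so "/compute/v2" would beat "/compute".
        for prefix in sorted(services, key=len, reverse=True):
            if path == prefix or path.startswith(prefix + "/"):
                return services[prefix], path[len(prefix):] or "/"
        raise KeyError(path)
    return route
```

In practice this is exactly what a reverse proxy (haproxy, nginx, Apache
`mod_proxy`) already does, which is Jay's point: no OpenStack-side change
is required to get a single public endpoint.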
>>
>> *3) Fix the OpenStack split *
>> 3.1) Move common functionality to separate internal services:
>> Scheduling, Logging, Monitoring, Tracing, Quotas, Reservations
>> (it would be even better if these had a more or less
>> monolithic architecture)
>>
>>
>> Yes, let's definitely go the opposite direction of microservices and
>> loosely coupled domains which is the best practices of software
>> development over the last two decades. While we're at it, let's
>> rewrite OpenStack projects in COBOL.
>>
>> 3.2) Somehow deal with the fragmentation of resources, e.g. VM,
>> Volume, and Network data, which is heavily interconnected.
>>
>>
>> How are these things connected?
>>
>> *4) Don't be afraid to break things*
>> Maybe it's time for OpenStack 2:
>>
>> * In any case, most people provide an API on top of OpenStack
>> for usage
>> * In any case, there is no standard and easy way to upgrade
>> So basically we are not losing anything even if we make
>> non-backward-compatible changes and completely rethink the
>> architecture and API.
>>
>>
>> Awesome news. I will keep this in mind when users (like GoDaddy) ask
>> Nova to never break anything ever and keep behaviour like scheduler
>> retries that represent giant technical debt.
>>
>> -jay
>>
>> I know this sounds like science fiction, but I believe the community
>> will appreciate steps in this direction...
>>
>>
>> Best regards,
>> Boris Pavlovic
>>
>> On Tue, Sep 12, 2017 at 2:33 PM, Mike Perez <thingee at gmail.com> wrote:
>>
>> Hey all,
>>
>> The session is over. I’m hanging near registration if
>> anyone wants to
>> discuss things. Shout out to John for coming by on
>> discussions with
>> simplifying dependencies. I welcome more packagers to join
>> the
>> discussion.
>>
>> https://etherpad.openstack.org/p/simplifying-os
>>
>> —
>> Mike Perez
>>
>>
>> On September 12, 2017 at 11:45:05, Mike Perez (thingee at gmail.com) wrote:
>> > Hey all,
>> >
>> > Back in a joint meeting with the TC, UC, Foundation, and the Board,
>> > it was decided that an area of OpenStack to focus on was Simplifying
>> > OpenStack. This was intentionally very broad so the community can
>> > kick-start the conversation and help tackle some broad feedback we
>> > get.
>> >
>> > Unfortunately yesterday there was a low turn out in the
>> Simplification room. A group
>> > of people from the Swift team, Kevin Fox and Swimingly
>> were nice
>> enough to start the conversation
>> > and give some feedback. You can see our initial ether
>> pad work here:
>> >
>> > https://etherpad.openstack.org/p/simplifying-os
>> >
>> > There are efforts happening every day helping with this goal, and
>> > our team has made some documented improvements that can be found in
>> > our report to the board within the etherpad. I would like to take
>> > this opportunity to step back and have in-person discussions to
>> > identify which areas of simplification are worthwhile. I'm taking a
>> > break from the room at the moment for lunch, but I encourage people
>> > at 13:30 local time to meet at the Simplification room, level B, in
>> > the Big Thompson room. Thank you!
>> >
>> > —
>> > Mike Perez
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>