[openstack-dev] [all][Kingbird][Heat][Glance] Multi-Region Orchestrator

Goutham Pratapa pratapagoutham at gmail.com
Mon Feb 12 06:44:06 UTC 2018


Hi Zane,

Sorry for the late reply; I was on leave for a couple of days.

Firstly, thanks for the clear and detailed analysis and suggestions on quotas
and resource management; it really means a lot to us :).

Secondly, these are the use cases for which Kingbird is mainly developed.

*OUR USE-CASES FOR QUOTA-MANAGEMENT:*

1. Admin must have a global view of the quotas of all tenants across all
regions.
2. Admin can periodically balance the quotas across regions (we have a
formula with which we do this balancing; a simplified sketch of the idea
follows this list).
3. Admin can update and delete quotas for tenants.
4. Admin can sync quotas for all tenants so that the quotas are updated
in all regions.
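
For illustration only (Kingbird's actual balancing formula lives in its quota
manager and may differ), here is a hypothetical Python sketch of the idea:
keep what each region already uses and split the remaining global headroom
evenly:

    def balance_quota(global_limit, usage_by_region):
        """Return a per-region limit for one tenant and one resource type."""
        regions = list(usage_by_region)
        total_used = sum(usage_by_region.values())
        headroom = max(global_limit - total_used, 0)
        share, remainder = divmod(headroom, len(regions))
        limits = {}
        for i, region in enumerate(regions):
            # Hand out any indivisible remainder one unit at a time.
            limits[region] = usage_by_region[region] + share + (1 if i < remainder else 0)
        return limits

    # Example: a tenant with a global limit of 100 cores.
    print(balance_quota(100, {'RegionOne': 30, 'RegionTwo': 10, 'RegionThree': 0}))
    # {'RegionOne': 50, 'RegionTwo': 30, 'RegionThree': 20}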

*USE-CASES FOR RESOURCE-MANAGEMENT:*
1. Resources which are required to boot up a VM in one region should be
accessible in other target regions.
     In the process, Kingbird has support for the following (a small sketch
of the keypair case follows this list):
    a) Sync/Replicate existing Nova keypairs
    b) Sync/Replicate existing Glance images
    c) Sync/Replicate existing Nova flavors (only admin can sync these).

2. A user who has a VM in one region should have the ease or possibility of
having a replica of the same VM in the target region(s).
   a) It can be a snapshot of the already booted-up VM or the same
qcow2 image.
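
This is not Kingbird's actual code, just a minimal openstacksdk sketch of the
keypair-sync idea; the cloud name "mycloud" and the region names are
placeholder assumptions (a matching clouds.yaml entry is expected):

    import openstack

    src = openstack.connect(cloud='mycloud', region_name='RegionOne')
    dst = openstack.connect(cloud='mycloud', region_name='RegionTwo')

    for kp in src.compute.keypairs():
        # Skip keypairs that already exist in the target region.
        if dst.compute.find_keypair(kp.name):
            continue
        # Nova only stores the public key, so that is all we can replicate.
        dst.compute.create_keypair(name=kp.name, public_key=kp.public_key)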

*GENERIC USE-CASES*

1. Automation scripts for Kingbird in
    - Ansible,
    - Salt,
    - Puppet.
2. Add SSL support to Kingbird.
3. Resource management in the Kingbird dashboard.
4. Kingbird in a Docker container.
5. Add Kingbird to Kolla.

On Fri, Feb 9, 2018 at 12:47 AM, Zane Bitter <zbitter at redhat.com> wrote:

> On 07/02/18 12:24, Goutham Pratapa wrote:
>
>>     Yes, as you said, it can be interpreted as a tool that can
>>     orchestrate multiple regions.
>>
>
> Actually from your additional information I'm now getting the impression
> that you are, in fact, positioning this as a partial competitor to Heat.

To some extent, yes. Until now we have focused on resource synchronization
and quota balancing for various tenants across multiple regions, but in the
coming cycle we want to enter the orchestration game.

>>     Just to be sure, does OpenStack already have a project which can
>>     replicate the resources and orchestrate?
>>
>
> OpenStack has an orchestration service - Heat - and it allows you to do
> orchestration across multiple regions by creating a nested Stack in an
> arbitrary region as a resource in a Heat Stack.[1]
>
> Heat includes the ability to create Nova keypairs[2] and even, for those
> users with sufficient privileges, flavors[3] and quotas[4][5][6]. (It used
> to be able to create Glance images as well, but this was deprecated because
> it is not feasible using the Glance v2 API.)
>
> [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::Stack
> [2] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::KeyPair
> [3] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Flavor
> [4] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Quota
> [5] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Cinder::Quota
> [6] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Neutron::Quota
>
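
(For reference, a minimal python-heatclient sketch of the nested-stack
approach described above; this is not how Kingbird works, and the
credentials, region names and template file names are placeholder
assumptions.)

    from heatclient import client as heat_client
    from heatclient.common import template_utils
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # parent.yaml would contain something like:
    #   heat_template_version: 2016-10-14
    #   resources:
    #     remote_stack:
    #       type: OS::Heat::Stack
    #       properties:
    #         context:
    #           region_name: RegionTwo        # create the child stack in another region
    #         template: {get_file: app.yaml}  # the workload definition itself

    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='demo', password='secret', project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)
    heat = heat_client.Client('1', session=sess, region_name='RegionOne')

    # get_template_contents resolves the get_file reference and returns the
    # files map that the Heat API expects alongside the parsed parent template.
    files, template = template_utils.get_template_contents(template_file='parent.yaml')
    heat.stacks.create(stack_name='multi-region-demo', template=template, files=files)
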
>>     Why? Because in the coming
>>     cycle our idea is that a user just gives a VM ID or VM name and we
>>     sync all the resources with which the VM was actually created. Of
>>     course we can't have the same network in the target region, so we may
>>     need the network-id or port-id of the target region from the user so
>>     that Kingbird can boot up the requested VM in the target region(s).
>
> So it sounds like you are starting from the premise that users will create
> stuff in an ad-hoc way, then later discover that they need to replicate
> their ad-hoc deployments to multiple regions, and you're building a tool to
> do that. Heat, on the other hand, starts from the premise that users will
> invest a little up-front effort to create a declarative definition of their
> deployment, which they can then deploy repeatably in multiple (or the
> same!) regions. Our experience is that people have shown themselves to be
> quite willing to do this, because repeatable deployments have lots of
> benefits.

Yes, that is true. But our idea starts from exactly the premise you stated
above ("users will create stuff in an ad-hoc way, then later discover that
they need to replicate their ad-hoc deployments to multiple regions"), and
we want to reduce those repeated deployments.
>
> Looking at the things you want to synchronise:
>
> * Quotas
>
Synchronize after balancing quotas across regions. (Our use case: if an
admin wants to know the global limit of a tenant across regions, they can
view, update and delete it from one region using Kingbird.)
>
> Operators can already use Heat templates to manage these if they so desire.
>
> * Flavors
>
> Some clouds allow users to create flavors, and those users can use Heat
> templates to manage them already.
>
> Operators can *not* use Heat templates to manage flavors in the same way
> that they can with quotas, because the OS::Nova::Flavor resource was
> designed with the above use-case in mind instead. (Specifically, it doesn't
> allow you to set the name.) Support has been requested for it in the past,
> however, and given the other kinds of admin-only resources we have in Heat
> (Quotas, Keystone resources) it would be consistent to modify
> OS::Nova::Flavor to allow this additional use case.
>
Yes, that is true, but we thought of handling these issues along with our
use cases.
>

> It's possible that operators could benefit from better/other tooling for
> Flavors and Quotas. In fact, the reason I've pushed back against some of
> the admin-facing stuff in Heat is that it often seems to me that Heat is an
> awkward tool for managing global-singleton or tenant-local-singleton
> administrator resources. It's definitely fine for multiple tools to
> co-exist, although a separate OpenStack service with an API seems like it
> could be overkill to me.
>
Our idea is the same: to manage administrator resources.
>
> * Keypairs
>
> This is a non-issue IMHO.
>
> * Images
>
> I agree with what I think Jay is suggesting here - not that there should
> be a single global Glance handling multiple regions (locality is important
> for images), but definitely some sort of multi-region support in Glance
> (e.g. a built-in way to automatically replicate an image to other regions)
> would be a better solution than an external service doing it. Glance is
> always looking for new contributors :)
>
We would definitely love to try that and, if possible, contribute to Glance.
>
> Though I really think the problem here is that there aren't good ways to
> automate image upload in general with the Glance v2 API; the multiregion
> part is just a for-loop. Allowing Glance to download an image from a URL
> (or even if it were limited to Swift objects) instead of having to upload
> one to it would allow us to resurrect OS::Glance::Image in Heat.
>
Kingbird does *not* download an image from a URL and then upload it to
Glance; rather, it uses the existing image and replicates it into the
other region:

https://github.com/openstack/kingbird/blob/master/kingbird/drivers/openstack/glance_v2.py#L149

Kingbird can also sync VM snapshots (yet to be committed).
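
(A minimal python-glanceclient sketch of that "reuse the existing image"
loop; it is not the linked Kingbird driver, and the auth details, region
names and image ID are placeholder assumptions.)

    from glanceclient import Client
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='demo', password='secret', project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    src = Client('2', session=sess, region_name='RegionOne')
    dst = Client('2', session=sess, region_name='RegionTwo')

    source_image_id = 'replace-with-an-existing-image-id'
    image = src.images.get(source_image_id)

    # Recreate the image record in the target region with the same key
    # attributes, then stream the existing bits from the source region.
    clone = dst.images.create(name=image.name,
                              disk_format=image.disk_format,
                              container_format=image.container_format,
                              visibility=image.visibility)
    dst.images.upload(clone.id, src.images.data(source_image_id))
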
>

> * Other user resources
>
> These are already handled, in a much more general way, by Heat.
>
>
> Honestly, it seems like a lot of wheels are being reinvented here. I think
> it would be more productive to start with a list of use cases and see
> whether the gaps can be covered by changes to existing services that they
> would consider in-scope.
>
Kingbird has many features, such as quota management and resource
management, of which multi-region orchestration is one.
>
>
> cheers,
> Zane.
>

We really thank you for all the suggestions; this definitely gives us a way
forward. :)

-- 
Cheers !!!
Goutham Pratapa