[openstack-dev] Climate Incubation Application
Dina Belova
dbelova at mirantis.com
Fri Mar 14 12:28:53 UTC 2014
Russell, first of all, thanks for your opinion and for taking part in this
discussion.
> What we need to dig in to is *why* do you feel it needs to be global?
>
> I'm trying to understand what you're saying here ... do you mean that
> since we're trying to get to where there's a global scheduler, that it
> makes sense there should be a central point for this, even if the API is
> through the existing compute/networking/storage APIs?
>
> If so, I think that makes sense. However, until we actually have
> something for scheduling, I think we should look at implementing all of
> this in the services, and perhaps share some code with a Python library.
Well, let me give you the reasons why I'm thinking about a separate
service with its own endpoints, etc.:
* as you said, we propose reservations of different resource types to be
implemented for OpenStack, and compute resources (VMs and hosts) are not
the only ones;
* there should be support for time management: checking lease statuses,
sending user notifications, etc. Even if that is implemented as a library,
it will need a separately running service in Nova, because there will be
some specific periodic tasks and so on (a minimal sketch of such a task
follows this list). Of course, that might be part of nova-scheduler, but
then things like sending notifications would look strange there, and it
would allow managing only VMs, not hosts, at least if we are speaking
about the traditional Nova scheduling process;
* and the last: the previous points might be implemented as a library and
work OK, I quite agree with you here. But in that case there would be no
centralised point of lease management, just as there is no single point
of quota management now. And while for quotas it is merely inconvenient
to manage them in huge clouds, for leases it would be simply impossible
to get one picture of what will happen to all resources in the future,
as there are many things to keep track of: compute capacity, storage
capacity, etc.
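
To make the second bullet more concrete, here is a minimal sketch of the
kind of periodic task such a service would need to run. All names in it
(Lease, notify_user and so on) are hypothetical, not actual Climate code:

import datetime
import time

CHECK_INTERVAL = 60  # seconds between two lease-status checks


class Lease(object):
    """Hypothetical lease record: resources reserved for a time window."""
    def __init__(self, lease_id, user, start, end):
        self.lease_id = lease_id
        self.user = user
        self.start = start
        self.end = end
        self.status = 'pending'


def notify_user(lease, event):
    # Stand-in for a real notification mechanism (email, RPC fanout...).
    print('notify %s: lease %s %s' % (lease.user, lease.lease_id, event))


def check_leases(leases):
    """One pass of the periodic task: start and expire leases on time."""
    now = datetime.datetime.utcnow()
    for lease in leases:
        if lease.status == 'pending' and lease.start <= now:
            lease.status = 'active'    # wake up the reserved resources
            notify_user(lease, 'started')
        elif lease.status == 'active' and lease.end <= now:
            lease.status = 'expired'   # release the reserved resources
            notify_user(lease, 'expired')


def run_forever(leases):
    while True:
        check_leases(leases)
        time.sleep(CHECK_INTERVAL)

Wherever this loop ends up living (Climate, nova-scheduler or a shared
library), something has to run it as a long-lived process, which is
exactly the point of that bullet.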
The last point seems the most important to me: the idea of centralised
resource time management looks better to me than each service running
essentially the same code for its own reservations, plus we expect that
some scheduling dependencies can arise between heterogeneous resources,
like reserving a volume together with the instance booting from it. I
quite agree that it is more comfortable for users to keep using the
services as is, and as Sylvain said, that can be implemented quite nicely
via, for example, Nova API extensions (as is done now for VM
reservations; a sketch of that flow follows this paragraph). But at the
same time all lease-related logic will be in one place, allowing cloud
administrators to manage cloud capacity usage over time from a single
point.
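
For illustration, here is roughly what that flow looks like from the user
side, assuming the 'reservation' scheduler hint Sylvain described; the
credentials and IDs are placeholders and the exact call details are
illustrative, not a fixed interface:

from novaclient.v1_1 import client

nova = client.Client('demo', 'secret', 'demo-tenant',
                     'http://keystone.example.com:5000/v2.0')

# The user keeps talking to the regular Nova API; the API extension
# spots the 'reservation' hint and uses python-climateclient behind
# the scenes to check the lease with Climate.
server = nova.servers.create(
    name='reserved-vm',
    image='IMAGE_UUID',
    flavor='FLAVOR_ID',
    scheduler_hints={'reservation': 'LEASE_UUID'})

The same thing from the CLI would be something like
'nova boot --hint reservation=LEASE_UUID ...', so nothing changes for the
user while the lease logic stays centralised.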
And I'm not even talking about the additional load on the core reviewers
of every project if this feature is implemented in each project
separately, while there is already an existing team working on Climate.
But that's not the main thing.
As I said, that's my personal opinion, and I'll be really glad to discuss
this problem and solve it in the way the community chooses, taking
different points of view and ideas into account.
Thanks
On Thu, Mar 13, 2014 at 6:44 PM, Russell Bryant <rbryant at redhat.com> wrote:
> On 03/12/2014 12:14 PM, Sylvain Bauza wrote:
> > Hi Russell,
> > Thanks for replying,
> >
> >
> > 2014-03-12 16:46 GMT+01:00 Russell Bryant <rbryant at redhat.com>:
> > The biggest concern seemed to be that we weren't sure whether Climate
> > makes sense as an independent project or not. We think it may make more
> > sense to integrate what Climate does today into Nova directly. More
> > generally, we think reservations of resources may best belong in the
> > APIs responsible for managing those resources, similar to how quota
> > management for resources lives in the resource APIs.
> >
> > There is some expectation that this type of functionality will extend
> > beyond Nova, but for that we could look at creating a shared library of
> > code to ease implementing this sort of thing in each API that needs it.
> >
> >
> >
> > That's really a good question, so maybe I could give some feedback on
> > how we deal with the existing use-cases.
> > About the possible integration with Nova, that's already something we
> > did for the virtual instances use-case, thanks to an API extension
> > responsible for checking whether a scheduler hint called 'reservation'
> > was passed, and if so, making use of the python-climateclient package
> > to send a request to Climate.
> >
> > I truly agree that users should possibly not use a separate API for
> > reserving resources, and that this should rather be the duty of the
> > project itself (Nova, Cinder or even Heat). That said, we think there
> > is a need for a global scheduler managing resources rather than
> > siloing them. That's why we still think there is a need for a Climate
> > Manager.
>
> What we need to dig in to is *why* do you feel it needs to be global?
>
> I'm trying to understand what you're saying here ... do you mean that
> since we're trying to get to where there's a global scheduler, that it
> makes sense there should be a central point for this, even if the API is
> through the existing compute/networking/storage APIs?
>
> If so, I think that makes sense. However, until we actually have
> something for scheduling, I think we should look at implementing all of
> this in the services, and perhaps share some code with a Python library.
> So, I'm thinking along the lines of ...
>
> 1) Take what Climate does today and work to integrate it into Nova,
> using as much of the existing Climate code as makes sense. Be careful
> about coupling in Nova so that we can easily split out the right code
> into a library once we're ready to work on integration in another project.
>
> 2) In parallel, continue working on decoupling nova-scheduler from the
> rest of Nova, so that we can split it out into its own project.
>
> 3) Once the scheduler is split out, re-visit what part of reservations
> functionality belongs in the new scheduling project and what parts
> should remain in each of the projects responsible for managing resources.
>
> > That said, there are different ways to plug in to the Manager; our
> > proposal is to deliver a REST API and a Python client so that there is
> > still some operator access for managing the resources if needed. The
> > other way would be to expose only an RPC interface, like the scheduler
> > does at the moment, but as the move to Pecan/WSME is already close to
> > done (reviews currently in progress), that's still a good opportunity
> > for leveraging the existing bits of code.
>
> Yes, I would want to use as much of the existing code as possible.
>
> As I said above, I just think it's premature to make this a project on
> its own, unless we're able to look at scheduling more broadly as its
> own project.
>
> --
> Russell Bryant
>
--
Best regards,
Dina Belova
Software Engineer
Mirantis Inc.