[openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

Patrick Petit patrick.petit at bull.net
Tue Aug 13 13:50:18 UTC 2013


Hi Dina,
Sounds great! Speaking on behalf of Francois, feel free to proceed with 
the points below. I don't think he would have any issues with that. We'll 
close the loop when he returns. BTW, did you get a chance to take a look 
at Haizea's design and implementation?
Thanks
Patrick
On 8/13/13 3:08 PM, Dina Belova wrote:
>
> Patrick, we are really glad we've found a way to deal with both use 
> cases.
>
>
> As for your patches that are in review or were already merged, we are 
> thinking about taking the following actions:
>
>
> 1) The Oslo code was merged, but it is a little bit of an old variant (it 
> still carries the setup and version modules, which new projects do not 
> really use anymore). So we (Mirantis) can update it as a first step.
>
> 2) The second step is to implement an easy-to-use DB layer that allows 
> different DB back ends (SQL as well as NoSQL). Here we'll also create 
> new abstractions such as leases and physical or virtual reservations (I 
> think we can realistically implement this before the end of August); see 
> the rough sketch after this list.
>
>
> 3) After that we'll have the opportunity to rework Francois' patches 
> for physical host reservation so that they become part of our new 
> common vision.
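>
> For illustration only, here is a very rough sketch of the lease /
> reservation abstractions and the back-end-agnostic DB interface we have
> in mind (class and field names are assumptions, not a final schema):
>
> class Lease(object):
>     """A time interval during which some resources are reserved."""
>     def __init__(self, name, start_date, end_date, reservations=None):
>         self.name = name
>         self.start_date = start_date    # datetime when the lease begins
>         self.end_date = end_date        # datetime when the lease ends
>         self.reservations = reservations or []
>
> class Reservation(object):
>     """One reserved resource (physical or virtual) belonging to a lease."""
>     def __init__(self, resource_type, resource_ref):
>         self.resource_type = resource_type  # e.g. 'physical:host' or 'virtual:instance'
>         self.resource_ref = resource_ref    # opaque id understood by the resource plugin
>
> class BaseDBApi(object):
>     """Interface that both SQL and NoSQL drivers would implement."""
>     def lease_create(self, lease):
>         raise NotImplementedError()
>     def lease_get(self, lease_id):
>         raise NotImplementedError()
>     def lease_list(self):
>         raise NotImplementedError()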
>
>
> Thank you.
>
>
>
> On Tue, Aug 13, 2013 at 4:23 PM, Patrick Petit
> <patrick.petit at bull.net> wrote:
>
>     Hi Nikolay,
>     Please see comments inline.
>     Thanks
>     Patrick
>
>     On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:
>>
>>     Hi, again!
>>
>>
>>     Patrick, I'll try to explain why we believe in having some base
>>     actions, like instance starting/deleting, in Climate. We are
>>     thinking about the following workflow (it would be quite
>>     comfortable and user friendly, and we already have more than one
>>     customer who really wants it):
>>
>>
>>     1) The user goes to the OpenStack dashboard and asks Heat to
>>     reserve several stacks.
>>
>>
>>     2) Heat goes to Climate and creates all the needed leases; Heat
>>     also reserves all the resources for these stacks (a sketch of such
>>     a lease-creation request follows after this list).
>>
>>
>>     3) When the time comes, the user goes to the OpenStack cloud and,
>>     we think, wants to see the stacks already working (ideally) or at
>>     least already starting. If not, the user will have to go to the
>>     Dashboard and wake up all the stacks he or she reserved. That
>>     means several actions that could be done for the user
>>     automatically, because they have to be done no matter what the
>>     stacks are for - if the user reserves them, he or she needs them.
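>>
>>     Just to illustrate step 2 (the exact API is not settled; the field
>>     names and the 'on_start' action here are only assumptions), the
>>     lease-creation request Heat sends to Climate could look roughly
>>     like this:
>>
>>     import json
>>
>>     # hypothetical body of a lease-creation call from Heat to Climate
>>     lease_request = {
>>         "name": "reserved-stack-42",
>>         "start_date": "2013-09-01 09:00",
>>         "end_date": "2013-09-01 18:00",
>>         "reservations": [
>>             {"resource_type": "virtual:stack",
>>              "resource_ref": "<heat-stack-id>",
>>              "on_start": "start"}   # default action when the lease kicks in
>>         ],
>>     }
>>     print(json.dumps(lease_request, indent=2))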
>>
>>
>>     We understand that there are situations where these actions may
>>     be done by some other system (like some hypothetical Jenkins).
>>     But if we speak about end users, this will be useful. We also
>>     understand that this default behavior should ideally live in some
>>     kind of long-term life-cycle management system (which is not
>>     Heat), but we have no such thing in OpenStack now, because the
>>     best way to implement it would be to use Convection, which is
>>     only a proposal at this point...
>>
>>
>>     That's why we think that for behavior like "the user just
>>     reserves resources and then does whatever he / she wants with
>>     them", physical leases are the better variant: the user may
>>     reserve several nodes and use them in different ways. For virtual
>>     reservations it will be better to start / delete them by default
>>     (for anything unusual, Heat may be used and modified).
>>
>     Okay. So let's bootstrap it this way then. There will be two
>     different ways the reservation service deals with reservations,
>     depending on whether they are physical or virtual. All things
>     being equal, the future will tell how things settle. We will focus
>     on the physical host reservation side of things. I think having
>     this contradictory debate helped us understand each other's use
>     cases and the requirements that the initial design has to cope
>     with. Francois, who already submitted a bunch of code for review,
>     will not return from vacation until the end of August. So things
>     on our side are a little on standby until he returns, but it might
>     help if you could take a look at his code. I suggest you start
>     with your vision and we will iterate from there. Is that okay with
>     you?
>
>
>>
>>     Do you think that this workflow is useful too, and if so, can you
>>     propose another implementation variant for it?
>>
>>
>>     Thank you.
>>
>>
>>
>>
>>     On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit
>>     <patrick.petit at bull.net> wrote:
>>
>>         On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:
>>>         Hello, Patrick!
>>>
>>>         We have several reasons to think that this possibility is
>>>         interesting for virtual resources. If we speak about
>>>         physical resources, the user may use them in different ways,
>>>         which is why it is impossible to include base actions on
>>>         them in the reservation service. But speaking about virtual
>>>         reservations, let's imagine the user wants to reserve a
>>>         virtual machine. He knows everything about it - its
>>>         parameters, its flavor and the time it should be leased for.
>>>         In this case the user really wants the reserved virtual
>>>         machine to be already working (or at least starting to
>>>         work), and it would be great to include this capability in
>>>         the reservation service.
>>>         We are thinking about base actions for virtual reservations
>>>         that will be supported by Climate, like boot/delete for an
>>>         instance, create/delete for a volume and create/delete for
>>>         stacks; the same goes for IPs, etc. More complicated
>>>         behaviour may be implemented in Heat. This will make
>>>         reservations simpler to use for end users.
>>>
>>>         Don't you think so?
>>         Well, yes and no. It really depends upon what you put behind
>>         those lease actions. The view I am trying to sustain is
>>         separation of duties, to keep the service simple, ubiquitous
>>         and non-prescriptive of a certain kind of usage pattern. In
>>         other words, keep Climate for reservation of capacity
>>         (physical or virtual), Heat for orchestration, and so forth.
>>         Consider for example the case of reservation not as a
>>         technical act but rather as a business enabler for wholesale
>>         activities. There is no need, and probably no desire, to
>>         start or stop any resource there. I do not deny that there
>>         are cases where it is desirable, but how reservations are
>>         used and composed together at the end of the day mainly
>>         depends on exogenous factors which cannot be anticipated
>>         because they are driven by the business.
>>
>>         And so, rather than coupling reservations with hard-wired
>>         resource instantiation actions, I would rather couple them
>>         with notifications that everybody can subscribe to (as
>>         opposed to the Resource Manager only), which would let users
>>         decide what to do with the life-cycle events. The "what to
>>         do" may very well be what you advocate, i.e. start a full
>>         stack of reserved and interwoven resources, or, at the other
>>         end of the spectrum, do nothing at all. This approach IMO
>>         would keep things more open.
>>>
>>>         P.S. We also remember the problem you mentioned a few
>>>         letters ago - how to guarantee that the user will have an
>>>         already working and prepared host / VM / stack / etc. by the
>>>         time the lease actually starts, not just "the lease begins
>>>         and the preparation process begins too". We are working on
>>>         it now.
>>         Yes. I think I was explicitly referring to host
>>         instantiation, also because there is no support for that in
>>         the Nova API. Climate should support some kind of
>>         "reservation kick-in heads-up" notification whereby the
>>         provider and/or some automated provisioning tool could do the
>>         heavy-lifting work of bringing physical hosts online before a
>>         host reservation lease starts. It doesn't have to be rocket
>>         science either. It's probably sufficient to make Climate fire
>>         a notification that says "Lease starting in x seconds", x
>>         being an offset against T0 that could be defined by the
>>         operator based on heuristics. A dedicated (e.g. IPMI) module
>>         of the Resource Manager for host reservations would subscribe
>>         as a listener to those events, along the lines of the sketch
>>         below.
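>>
>>         A minimal sketch of such a listener, assuming Climate emits a
>>         "lease start upcoming" event on the notification bus (the
>>         event name, payload keys and power-on helper are purely
>>         hypothetical):
>>
>>         class HostReservationListener(object):
>>             """Subscribes to Climate events on behalf of the hosts
>>             Resource Manager and powers nodes on ahead of the lease."""
>>
>>             def handle_event(self, event_type, payload):
>>                 # the event is assumed to arrive x seconds before T0,
>>                 # x being the operator-defined offset
>>                 if event_type != 'lease.event.start_lease_upcoming':
>>                     return
>>                 # payload is assumed to carry the hosts the lease needs
>>                 for host in payload.get('hosts', []):
>>                     self._power_on(host)
>>
>>             def _power_on(self, host):
>>                 # e.g. drive IPMI or a provisioning tool to bring the
>>                 # node online before the lease actually starts
>>                 print('Powering on %s ahead of lease start' % host)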
>>>
>>>
>>>         On Thu, Aug 8, 2013 at 8:18 PM, Patrick Petit
>>>         <patrick.petit at bull.net> wrote:
>>>
>>>             Hi Nikolay,
>>>
>>>             Relying on Heat for orchestration is obviously the right
>>>             thing to do. But there is still something in your design
>>>             approach that I have had difficulty comprehending since
>>>             the beginning. Why do you keep thinking that
>>>             orchestration and reservation should be treated
>>>             together? That's adding unnecessary complexity IMHO. I
>>>             just don't get it. Wouldn't it be much simpler, and
>>>             sufficient, to say that there are pools of reserved
>>>             resources you create through the reservation service?
>>>             Those pools could be of different types, i.e. host,
>>>             instance, volume, network, ..., whatever, if that's
>>>             really needed. Those pools are identified by a unique id
>>>             that you pass along when the resource is created. That's
>>>             it. You know, the AWS reservation service doesn't even
>>>             care about referencing a reservation when an instance is
>>>             created. The association between the two just happens
>>>             behind the scenes. That would work in all scenarios,
>>>             manual, automatic, whatever... So why do you care so
>>>             much about this in the first place?
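>>>
>>>             For instance - purely hypothetical, the hint key is mine
>>>             and not an existing API - "passing the pool id along"
>>>             could be as small as one extra field in the normal boot
>>>             request:
>>>
>>>             # hypothetical Nova boot body carrying the reserved pool id
>>>             boot_request = {
>>>                 "server": {
>>>                     "name": "my-instance",
>>>                     "imageRef": "<image-id>",
>>>                     "flavorRef": "<flavor-id>",
>>>                 },
>>>                 # the only reservation-specific bit: which pool to draw from
>>>                 "os:scheduler_hints": {"reservation": "<pool-id>"},
>>>             }
>>>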
>>>             Thanks,
>>>             Patrick
>>>
>>>             On 8/7/13 3:35 PM, Nikolay Starodubtsev wrote:
>>>>             Patrick, responding to your comments:
>>>>
>>>>             1) Dina mentioned "start automatically" and "start
>>>>             manually" only as examples of what these policies may
>>>>             look like. It doesn't seem to be a correct approach to
>>>>             put orchestration functionality (which belongs to Heat)
>>>>             into Climate. That's why for now we can implement the
>>>>             basics, like starting a Heat stack, and for more complex
>>>>             actions we may later utilize something like the
>>>>             Convection (Task-as-a-Service) project.
>>>>
>>>>             2) If we agree that Heat is the main consumer of
>>>>             Reservation-as-a-Service, we can agree that a lease may
>>>>             be created according to one of the following scenarios
>>>>             (but not several at once):
>>>>             - a Heat stack (with requirements on the stack's
>>>>             contents) as the resource to be reserved
>>>>             - some number of physical hosts (random ones or
>>>>             filtered based on certain characteristics)
>>>>             - some number of individual VMs OR volumes OR IPs
>>>>
>>>>             3) Heat might be the main consumer of virtual
>>>>             reservations. If not, Heat will require development
>>>>             effort in order to support:
>>>>             - reservation of a stack
>>>>             - waking up a reserved stack
>>>>             - performing all the usual orchestration work
>>>>
>>>>             We will support reservation of individual
>>>>             instances/volumes/IPs etc., but the use case of "giving
>>>>             the user an already working group of connected VMs,
>>>>             volumes and networks" seems to be the most interesting
>>>>             one. As for Heat autoscaling, reservation of the maximum
>>>>             number of instances set in the Heat template (not the
>>>>             minimum value) has to be implemented in Heat. Some open
>>>>             questions remain though, like updating the Heat stack
>>>>             when the user changes the template to allow a higher
>>>>             maximum number of running instances.
>>>>
>>>>             4) As a user, I would of course want to have everything
>>>>             already working - any configured hosts/stacks/etc. up
>>>>             and running by the time the lease starts. But in
>>>>             reality we can't predict how much time the preparation
>>>>             process will take for every single use case. So if you
>>>>             have an idea of how this should be implemented, it
>>>>             would be great if you shared your opinion.
>>>>
>>>>
>>>
>>>
>
>
>
>
>
>
> -- 
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
>
>


-- 
Patrick Petit
Cloud Computing Principal Architect, Innovative Products
Bull, Architect of an Open World TM
Tél : +33 (0)4 76 29 70 31
Mobile : +33 (0)6 85 22 06 39
http://www.bull.com
