[Openstack] OpenStack API, Reservation ID's and Num Instances ...

Sandy Walsh sandy.walsh at RACKSPACE.COM
Mon May 23 17:25:36 UTC 2011


Changing to UUIDs is a great thing to do, but I'm not sure it solves our problem. We still need to differentiate between an Instance ID and a Reservation ID.

Additionally, switching to UUID has to be a 2.0 thing, since it's going to bust all backwards compatibility. The ability to cast to int() is a general assumption in RS API clients.

With respect to "multiple single-shot requests": assume 10 schedulers pick up 10 instance requests concurrently. Their views of the world will be largely the same, so they will all attempt to provision to the same host. Versus a single request for 10 instances, where the scheduler can be smart about where it attempts to place them. And then there's the socket / API server load from 1000 single-shot requests, as mentioned elsewhere in this thread.
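To make the collision concrete, here's a toy sketch (hypothetical, not Nova code): ten schedulers working from the same stale snapshot all pick the identical "least loaded" host, while one scheduler handling a single 10-instance request can update its view as it places.

```python
# Toy illustration: concurrent single-shot requests vs. one batched request.
hosts = {"host-a": 2, "host-b": 5, "host-c": 9}  # current instance counts

def least_loaded(view):
    # Pick the host with the fewest instances in this view of the world.
    return min(view, key=view.get)

# 10 schedulers, each with the same snapshot: all pile onto host-a.
concurrent = [least_loaded(dict(hosts)) for _ in range(10)]
assert concurrent == ["host-a"] * 10

# One scheduler handling "provision 10" updates its view as it places,
# so the placements spread across hosts.
view = dict(hosts)
batched = []
for _ in range(10):
    h = least_loaded(view)
    batched.append(h)
    view[h] += 1
assert len(set(batched)) > 1  # spread over more than one host
```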

I agree that 3 & 4 may be nice-to-haves. I've simply heard explicit demand for #4 from customers, and I don't believe the delta to get there is that high.

-S

PS> You're not speaking out of turn. I need to do a better job of articulating the zones/dist-sched architecture ... it's underway :)

________________________________________
From: openstack-bounces+sandy.walsh=rackspace.com at lists.launchpad.net [openstack-bounces+sandy.walsh=rackspace.com at lists.launchpad.net] on behalf of Mark Washenberger [mark.washenberger at rackspace.com]
Sent: Monday, May 23, 2011 1:54 PM
To: openstack at lists.launchpad.net
Subject: Re: [Openstack] OpenStack API, Reservation ID's and Num Instances ...

I'm totally on board with this as a future revision of the OS api. However it sounds like we need some sort of solution for 1.1.

> 1. We can't treat the InstanceID as a ReservationID since they do two different
> things. InstanceIDs are unique per instance and ReservationIDs might span N
> instances. I don't like the idea of overloading these concepts. How is the caller
> supposed to know if they're getting back a ReservationID or an InstanceID? How do
> they ask for updates for each (one returns a single value, one returns a list?).

Rather than overloading the two, could we just make instance-id a uuid, make the create asynchronous, and pare down the amount of info returned in the server create response?
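Something like the following hedged sketch, perhaps. The field names are illustrative, not the actual OS API 1.1 schema: the create returns only a uuid, a status, and a polling link, and the client polls for the rest.

```python
# Hypothetical pared-down, asynchronous create response (field names
# are illustrative, not the real OS API 1.1 schema).
import uuid

def create_server(request):
    server_id = str(uuid.uuid4())
    # ...enqueue the build asynchronously; no blocking on the scheduler...
    return {
        "server": {
            "id": server_id,      # uuid, so no client can assume int()
            "status": "BUILD",    # client polls the self link for progress
            "links": [{"rel": "self", "href": "/servers/" + server_id}],
        }
    }

resp = create_server({"name": "web-1", "flavorRef": "1"})
assert resp["server"]["status"] == "BUILD"
assert len(resp["server"]) == 3  # id, status, links -- nothing else
```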

> 2. We need to handle "provision N instances" so the scheduler can effectively load
> balance the requests by looking at the current state of the system in a single
> view. Concurrent single-shot requests would be picked up by many different
> schedulers in many different zones and give an erratic distribution.

Are we worried about concurrent or rapid sequential requests?

Is there any way we could cut down on the erratic distribution by funneling these types of requests through a smaller set of schedulers? I'm very unfamiliar with the scheduler system, but it seems like routing choices at a higher-level scheduler could help here.
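One hedged way such funneling could work (an assumption on my part, not how Nova routes today): hash something stable like the tenant id to pick a scheduler, so a tenant's rapid-fire requests all land on the same scheduler and it keeps a consistent view for placement.

```python
# Hypothetical routing sketch: pin each tenant's requests to one
# scheduler via a stable hash. Names here are made up for illustration.
import hashlib

SCHEDULERS = ["scheduler-1", "scheduler-2", "scheduler-3"]

def route(tenant_id):
    # Stable hash of the tenant id chooses the scheduler deterministically.
    digest = hashlib.sha1(tenant_id.encode()).hexdigest()
    return SCHEDULERS[int(digest, 16) % len(SCHEDULERS)]

# Rapid sequential requests from one tenant always hit the same scheduler.
assert len({route("tenant-42") for _ in range(10)}) == 1
```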

3. and 4. sound like great features, albeit ones that could wait for a future revision of the API.

Apologies if I'm speaking out of turn and should just read up on scheduler code!


"Sandy Walsh" <sandy.walsh at rackspace.com> said:

> Cool, I think you all understand the concerns here:
>
> 1. We can't treat the InstanceID as a ReservationID since they do two different
> things. InstanceIDs are unique per instance and ReservationIDs might span N
> instances. I don't like the idea of overloading these concepts. How is the caller
> supposed to know if they're getting back a ReservationID or an InstanceID? How do
> they ask for updates for each (one returns a single value, one returns a list?).
>
> 2. We need to handle "provision N instances" so the scheduler can effectively load
> balance the requests by looking at the current state of the system in a single
> view. Concurrent single-shot requests would be picked up by many different
> schedulers in many different zones and give an erratic distribution.
>
> 3. As Soren pointed out, we may want certain semantics around failure such as "all
> or nothing"
>
> 4. Other Nova users have mentioned a desire for instance requests such as "has
> GPU, is in North America and has a blue sticker on the box". If we try to do that
> with Flavors we need to clutter the Flavor table with most-common-denominator
> fields. We can handle this now with Zone/Host Capabilities and not have to extend
> the table at all. If you look at nova/tests/scheduler/test_host_filter.py you'll
> see an example of this in action. To Soren's point about "losing the ability to
> rely on a fixed set of topics in the message queue for doing scheduling" this is
> not the case, there are no new topics introduced. Instead there are simply extra
> arguments passed into the run_instance() method of the scheduler that understands
> these more complex instance requests.
>
> That said, I was thinking of adding a POST /zone/server command to support these
> extended operations. It wouldn't affect anything currently in place and makes it
> clear that this is a zone-specific operation. Existing EC2 and core OS API
> operations are performed as usual.
>
> Likewise, we need a way to query the results of a Reservation ID request without
> busting GET /servers/detail ... perhaps GET /zones/servers could do that?
>
> The downside is that now we have two ways to create an instance that needs to be
> tested, etc.
>
> -S
>
>
>
>
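The capability-based filtering described in point #4 above boils down to matching an instance request against per-host capability dicts. This is a rough simplification in the spirit of nova/tests/scheduler/test_host_filter.py, not the actual filter code:

```python
# Rough sketch of capability-based host filtering (a simplification of
# what nova/tests/scheduler/test_host_filter.py exercises, not the real
# implementation). Host names and capability keys are made up.
hosts = {
    "host-1": {"gpu": True, "region": "north-america"},
    "host-2": {"gpu": False, "region": "europe"},
    "host-3": {"gpu": True, "region": "north-america", "sticker": "blue"},
}

def filter_hosts(hosts, required):
    """Return hosts whose capabilities satisfy every required key/value."""
    return [name for name, caps in hosts.items()
            if all(caps.get(k) == v for k, v in required.items())]

# "has GPU, is in North America and has a blue sticker on the box"
wanted = {"gpu": True, "region": "north-america", "sticker": "blue"}
assert filter_hosts(hosts, wanted) == ["host-3"]
```

No Flavor-table columns needed: an extra requirement is just another key in the request dict, matched against whatever capabilities each host reports.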



_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack at lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



