[placement][ptg] Enabling other projects to continue with placement or get started
Jay Pipes
jaypipes at gmail.com
Mon Apr 22 03:07:12 UTC 2019
My apologies for the late response, Jason. Comments inline.
On 04/10/2019 11:24 AM, Jason Anderson wrote:
> On 04/10/2019 05:47 AM, Dmitry Tantsur wrote:
> > On 4/9/19 7:20 PM, Jay Pipes wrote:
> >> On 04/09/2019 12:51 PM, Dmitry Tantsur wrote:
> >>> From ironic perspective there is no issue, but there is a critical
> >>> question to decide: when Ironic+Placement is used, which of them acts
> >>> as the final authority? If Ironic, then we need to teach Placement to
> >>> talk to its Allocation API when allocating a bare metal node. If
> >>> Placement, then we need to support Allocation API talking to
> >>> Placement. I suspect the latter is saner, but I'd like to hear more
> >>> opinions.
> >>
> >> Ironic (scheduler?) would request candidates from the placement
> >> service using the GET /allocation_candidates API. Ironic (scheduler?)
> >> would then claim the resources on a provider (a baremetal node) by
> >> calling the POST /allocations API.
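The two-call flow described above can be sketched roughly as follows. This is only an illustration of building the requests, not a definitive client: `CUSTOM_BAREMETAL`, the provider UUID, and the project/user IDs are made-up examples, and note that current Placement microversions claim via PUT /allocations/{consumer_uuid} (POST /allocations also exists for multi-consumer updates).

```python
import uuid

# Hypothetical sketch of the claim flow: ask Placement for allocation
# candidates, then claim the chosen candidate for a consumer. Only the
# request-building is shown; sending would use any HTTP client against
# a real Placement endpoint.

def candidate_query(resources):
    """Build the query string for GET /allocation_candidates."""
    joined = ",".join(f"{rc}:{amount}" for rc, amount in resources.items())
    return f"/allocation_candidates?resources={joined}"

def claim_body(allocation_request, project_id, user_id):
    """Build the claim body from one entry of the allocation_requests
    list returned by GET /allocation_candidates."""
    return {
        "allocations": allocation_request["allocations"],
        "project_id": project_id,
        "user_id": user_id,
    }

# Example with a fabricated candidate for one baremetal node.
query = candidate_query({"CUSTOM_BAREMETAL": 1})
candidate = {"allocations": {"8d745d5a-node": {"resources": {"CUSTOM_BAREMETAL": 1}}}}
consumer_uuid = str(uuid.uuid4())  # the entity that will own the claim
body = claim_body(candidate, "example-project-id", "example-user-id")
```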
> >
> > Okay, this matches my expectation.
> >
> > My concern will be with Blazar and reservations. If reservations happen
> > through Placement only, how will ironic know about them? I guess we need
> > to teach Blazar to talk to Ironic, which in turn will talk to Placement.
>
> Hmm. So, here's the problem: placement has no concept of time. [1]
>
> Placement only knows about one period of time: now. Placement doesn't
> have any concept of an allocation or an inventory existing at some
> point
> in the future or in the past.
>
> Just to play devil's advocate... what about changing/adding this? What
> if Placement did support an inventory having different states depending
> on time frame requested?
After 3+ years with the placement modeling, I've come to realize it was
a fundamental mistake not to include a temporal aspect in both the
inventories and allocations table schemas.
While I would *not* support a schema that had different "states" for an
inventory depending on the time frame requested, I *do* think that
adding claim_time and release_time columns to the allocations table, and
start_time and end_time columns to the inventories table, would allow
Placement to fulfill a simple reservation system using the same
transactional logic it currently uses.
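To make that concrete, here is a minimal in-memory sketch of what an interval-aware capacity check could look like. The column names follow the suggestion above; everything else (table shapes, integer timestamps, the conservative overlap sum) is my own assumption for illustration, not Placement's actual schema.

```python
import sqlite3

# Hypothetical temporal inventory/allocation tables: an inventory is
# valid for [start_time, end_time), an allocation holds resources for
# [claim_time, release_time).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE inventories (
    provider TEXT, resource_class TEXT, total INTEGER,
    start_time INTEGER, end_time INTEGER
);
CREATE TABLE allocations (
    provider TEXT, resource_class TEXT, used INTEGER,
    claim_time INTEGER, release_time INTEGER
);
""")

def can_reserve(provider, rc, amount, start, end):
    """True if `amount` of `rc` is free on `provider` for [start, end).

    Conservative: it sums every allocation that overlaps the window at
    all, rather than computing the true peak usage within the window.
    """
    row = db.execute("""
        SELECT i.total - COALESCE((
            SELECT SUM(a.used) FROM allocations a
            WHERE a.provider = i.provider
              AND a.resource_class = i.resource_class
              AND a.claim_time < ?      -- existing claim starts before we end
              AND a.release_time > ?    -- ... and ends after we start
        ), 0)
        FROM inventories i
        WHERE i.provider = ? AND i.resource_class = ?
          AND i.start_time <= ? AND i.end_time >= ?
    """, (end, start, provider, rc, start, end)).fetchone()
    return row is not None and row[0] >= amount

# One baremetal node, already leased for [100, 200).
db.execute("INSERT INTO inventories VALUES ('node1', 'CUSTOM_BAREMETAL', 1, 0, 1000)")
db.execute("INSERT INTO allocations VALUES ('node1', 'CUSTOM_BAREMETAL', 1, 100, 200)")
```

The same SELECT-then-check shape Placement already uses for claims would apply; the transaction simply compares against claims overlapping the requested window instead of all current claims.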
> In my mind this would enable a more ideal division of
> responsibility:
>
> * Placement manages the availability of resources and maintains the
> single source of truth for inventory at a given time.
++
> * Blazar uses Placement as its default inventory backend. Blazar's
> main role now is business logic around quota and handling
> allocation/deallocation when a lease starts/ends.
Yes on Blazar handling the release of resources when the lease ends.
No on Blazar handling the acquisition of resources when the lease starts
(that would fundamentally be accomplished by Placement if Placement had
a temporal dimension to its allocations and inventories table schemas).
No on Blazar handling quota. Quota is a giant pain in the behind,
frankly. Trust me, you want no part of it ;) No matter how many
"dimensions" of quota slicing and dicing are made available, operators
will always want to add yet another dimension. If it's not quota
"classes", then it's different quotas per region, then different quotas
per AZ, then different quotas per aggregate, and on and on.
Never mind the whole "we confused quotas with rate-limiting" and "here
is a type of quota that is not consistently measurable" problems...
Anyway, my advice would be leave quotas alone if you can :)
> o But, Blazar could optionally use a different inventory backend,
> to allow standalone use (?)
Not sure why you'd want to do this. But, as Dima remarked in another
sub-thread of this conversation, the question of "which things should
a standalone service depend on" is a religious debate (and one I no
longer have the energy to participate in).
> * Ironic uses Placement as its default inventory backend.
> o But, Ironic could optionally also manage its own inventory, to
> allow standalone use (?)
>
> To further tease out the relationships here, we should think about what
> makes the most sense for baremetal reservations done via Blazar. Should
> Blazar always go to Ironic for this, ignoring Nova entirely? Or should
> it go through Nova if Nova is being used? I believe Blazar still will
> always have to go through Nova for instance reservations at minimum.
Certainly Blazar will have to go through Nova *in its current
implementation*, since Blazar currently relies on host aggregates and
special aggregate and flavor metadata to "reserve" compute nodes.
> Keep in mind that Blazar is designed to integrate with arbitrary
> external services; currently it has integrations with Neutron (for
> provisioning Floating IPs as part of a lease), and it could support any
> number of other resources, like bandwidth on an uplink.
The flexibility for close integration with arbitrary services often
comes with a high price: complexity and potential code rot.
> Having learned more about Placement's design as a result of these
> threads, I'm excited about how it could make some things cleaner if it
> truly could handle the generic inventory management problem that
> advanced reservations pose.
If you will be in Denver, I'm happy to outline some ideas I had that
would pave the way for adding a temporal dimension to Placement's
database schema. I won't be able to implement these ideas myself, but
I'm glad to share them with you if you're interested.
Best,
-jay
> Therefore, Blazar must unfortunately keep all of the temporal state
> about reservations in its own data store. So, Ironic would actually
> have
> to talk to Blazar to create a reservation of some amount of resources
> and Blazar will need to call Placement to manage the state of resource
> inventory and allocations over time; as reservations are activated,
> Blazar will either create or swap allocation records in Placement to
> consume the Ironic resources for a tenant that made the reservation.
>
> Best,
> -jay
>
> [1] this was a mistake for which I take full responsibility.
>
>
> Cheers,
> /Jason