[openstack-dev] Next steps for Whole Host allocation / Pclouds
philip.day at hp.com
Tue Jan 21 16:30:02 UTC 2014
> -----Original Message-----
> From: Khanh-Toan Tran [mailto:khanh-toan.tran at cloudwatt.com]
> Sent: 21 January 2014 14:21
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Next steps for Whole Host allocation / Pclouds
> > Exactly - that's why I wanted to start this debate about the way
> > forward for the Pcloud Blueprint, which was heading into some kind of
> > middle ground. As per my original post, and it sounds like the three
> > of us are at least aligned, I'm proposing to split this into two
> > streams:
> > i) A new BP that introduces the equivalent of AWS dedicated instances.
> Why do you want to transform pCloud into AWS dedicated instances? As I
> see it, pCloud is for requesting physical hosts (HostFlavors as in the pcloud
> wiki) on which users can create their own instances (theoretically in unlimited
> numbers). Therefore it should be charged per physical server (HostFlavor), not
> per instance. It is completely different from AWS dedicated instances, which
> are charged per instance. IMO, pcloud resembles GoGrid Dedicated Server, not
> AWS Dedicated Instance.
> If you want to provide an AWS-dedicated-instances type of service, then it would
> not be Pcloud, nor is it a continuation of the WholeHostAllocation blueprint,
> which, IMO, is damned well designed.
Thank you ;-)
I probably didn't explain it very well, but I wasn't trying to say that dedicated instances are a complete replacement for pClouds - more that, as a simpler concept, they would provide one of the use cases that originally drove pClouds in a much simpler form.
Based on the feedback I got, the problem with the more general pClouds scope as it currently stands is that it's somewhere between a VM isolation model and fully delegated control of a specific set of hosts, and as such it doesn't really feel like a tenable place to end up.
As a simple VM isolation model (which is where I made the comparison with dedicated instances) it's more complex than it needs to be.
As a way of allowing a user to manage some set of hosts it's fine for allocation/deallocation and scheduling - and if that were the full set of operations ever going to be needed then maybe it would be fine. But as soon as you start to look at the other operations folks want in order to really deliver a "cloud within a cloud" type concept (specific scheduler config, control of placement, defining and managing flavors, etc.) I think you'd end up replicating large parts of the existing code.

An alternative is to extend the roles model within Nova somehow so that roles can be scoped to a specific aggregate or set of aggregates, but that's a pretty big change from where we are and would only ever cover Nova. So I came round to thinking that the better way to get that kind of delegated control is to actually set up separate Nova instances, each covering the hosts that you want to delegate, and sharing other services like Glance, Cinder, and Neutron - especially as the promise of TripleO is that it's going to make this much easier to do.
If there's value in just keeping pClouds as a host allocation feature, and not trying to go any further into the "delegated admin" model than the few simple features already included in the PoC then that's also useful feedback.
> It'll be just another scheduler job.
> Well, I did not say that it's not worth pursuing; I just say that
> WholeHostAllocation is worth being kept as pcloud.
> > User - Only has to specify that at boot time that the instance must
> > be on a host used exclusively by that tenant.
> > Scheduler - either finds a host which matches this constraint or it
> > doesn't. No linkage to aggregates (other than that from other filters),
> > no need
> > for the aggregate to have been pre-configured
> > Compute Manager - has to check the constraint (as with any other
> > scheduler limit) and add the info that this is a dedicated instance to
> > notification messages
> > Operator - has to manage capacity as they do for any other such
> > constraint (it is a significant capacity mgmt issue, but no worse in
> > my mind than having flavors that can consume most of a host), and
> > work out how they want to charge for such a model (flat rate
> > additional charge for first such instance, charge each time a new host
> > is used, etc).
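The scheduler side of this could be sketched as a host filter along the following lines. Note this is a minimal illustration, not the actual Nova filter API: the class and field names (DedicatedInstanceFilter, host_passes, tenant_ids) are hypothetical, though loosely modeled on how Nova scheduler filters work.

```python
# Minimal sketch of a "dedicated instance" host filter.
# All names here are illustrative assumptions, not Nova's real interface.

class DedicatedInstanceFilter:
    """Pass a host only if it is empty or already used exclusively
    by the requesting tenant."""

    def host_passes(self, host_state, request):
        if not request.get('dedicated'):
            # Normal instances are unconstrained by this filter.
            return True
        tenants_on_host = set(host_state['tenant_ids'])
        # Host qualifies only if no *other* tenant has instances on it.
        return tenants_on_host <= {request['tenant_id']}


# Usage example, with plain dicts standing in for scheduler host state:
f = DedicatedInstanceFilter()
req = {'dedicated': True, 'tenant_id': 'tenant-a'}
print(f.host_passes({'tenant_ids': []}, req))                      # empty host: True
print(f.host_passes({'tenant_ids': ['tenant-a']}, req))            # own host: True
print(f.host_passes({'tenant_ids': ['tenant-a', 'tenant-b']}, req))  # shared: False
```

No aggregate pre-configuration is involved: the filter decides per-request from the host's current occupancy, which is exactly the "either finds a host or it doesn't" behaviour described above.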
> How about using migration for releasing compute hosts for new allocation? In
> standard configuration, admin would use LoadBalancing for his computes.
> Thus if we don't have a dedicated resources pool (this comes back to
> aggregate configuration), then all hosts would be used, which leaves no host
> empty for hosting dedicated instances.
In either case the cloud operator has to do a degree of capacity management. Dedicated instances (as a simple scheduler feature) are unlikely to work with a spreading configuration. On the other hand, with pClouds the operator also has to maintain an explicit free pool of hosts, and again with a spread allocator they're unlikely to be able to find new free hosts to add to that pool (basically, spread allocation is a bad thing if you want to be able to free up hosts easily under either model ;-)
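A toy model makes the spread-vs-pack point concrete. This is not Nova's actual weigher logic, just a sketch under the assumption that "spread" always picks the least-loaded host and "pack" the most-loaded one (capacity limits omitted for brevity):

```python
# Toy placement model: why spread allocation leaves no empty hosts
# while pack allocation preserves a free pool. Illustrative only.

def place(num_instances, hosts, spread=True):
    loads = {h: 0 for h in hosts}
    for _ in range(num_instances):
        # spread: least-loaded host; pack: most-loaded host.
        pick = min if spread else max
        target = pick(loads, key=loads.get)
        loads[target] += 1
    return loads

hosts = ['h1', 'h2', 'h3', 'h4']
print(place(6, hosts, spread=True))   # every host ends up occupied
print(place(6, hosts, spread=False))  # three hosts stay completely empty
```

With spreading, all four hosts carry load after just six instances, so neither dedicated instances nor a pCloud free pool can find an empty host; with packing, three hosts remain free.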
> > I think there is clear water between this and the existing aggregate
> > based isolation. I also think this is a different use case from reservations.
> > It's
> > *mostly* like a new scheduler hint, but because it has billing impacts
> > I think it needs to be more than just that - for example the ability
> > to request a dedicated instance is something that should be controlled
> > by a specific role.
> Agreed. The billing is rather the problem here. Nova can handle this all right,
> but how does this new functionality cope with the billing model? Basically, which
> information is recorded, and where.
For dedicated instances, the fact that an instance is dedicated would need to be recorded as a property of the instance and passed through to Billing as part of the notification message (just as properties like flavor are). The scheduler also needs to know which instances are dedicated so that it can implement the required anti-affinity for any instances other than ones belonging to that tenant - so it's more than just a property for initial scheduling.
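That ongoing anti-affinity check could be sketched as follows. The field names ('dedicated', 'tenant_id' on instance records) are assumptions for illustration, not Nova's actual schema - the point is that the dedicated flag persisted on each instance is consulted for every later placement decision, not just the initial one:

```python
# Sketch of the anti-affinity side of dedicated instances: keep other
# tenants off any host that already holds someone's dedicated instances.
# Field names are illustrative assumptions.

def host_allows(host_instances, request):
    """host_instances: list of dicts describing instances already on the
    host, each with 'tenant_id' and 'dedicated' keys."""
    for inst in host_instances:
        if inst['dedicated'] and inst['tenant_id'] != request['tenant_id']:
            # Host is effectively reserved for another tenant.
            return False
    return True


# Usage example:
occupied = [{'tenant_id': 'tenant-a', 'dedicated': True}]
print(host_allows(occupied, {'tenant_id': 'tenant-b'}))  # False
print(host_allows(occupied, {'tenant_id': 'tenant-a'}))  # True
```

The same persisted flag is what would ride along in the notification payload for billing, alongside properties like flavor.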