[nova][dev] Any VMware resource pool and shares kind of feature available in openstack nova?
Matt Riedemann
mriedemos at gmail.com
Tue Mar 5 13:58:20 UTC 2019
On 3/4/2019 2:40 AM, Sanjay K wrote:
> Scenario 1: say a compute node hosts some development and
> production VMs. On that compute node, production VMs should have
> higher priority than development VMs when contending for compute
> resources (RAM, vCPUs, disk, etc.).
>
> Case 2: when both prod and dev VMs are on a compute node and a user
> starts a production VM that was in a shutdown/not-running state, the
> compute node should treat this VM's resource allocation as a
> priority and readjust/redistribute resources among the other
> existing VMs so that this VM gets at least the resources it was
> initially created with. If the required resources are not available
> on that compute node, it should migrate or launch the VM on a
> different compute node, either with the user's consent or after
> simply notifying the user that the VM was started on a different
> compute node.
>
> Case 3: when multiple users simultaneously start dev and prod VMs,
> the production VMs should always get their resources allocated
> first from the compute node; the other, lower-priority VMs should
> then get their share.
>
> In my case, the development and production VMs belong to different
> subnets. As far as I know, nova does not allocate resources to VMs
> dynamically once they are hosted on a compute node; it just uses
> filters and weights to decide which compute node an instance should
> run on. I think the above decision could be handled before the
> filtering and weighing phase kicks in, to decide how much resource
> a VM should be given.
>
> After going through the discussions/videos, I find the preemptible
> instances feature useful, but I am not yet sure whether it will
> solve my cases completely.
What you're asking for is not really how nova works. When you create a
server, the flavor defines the amount of each class of resource (VCPU,
MEMORY_MB, DISK_GB) the server is going to get and the scheduler finds a
host that can fulfill that request. Once the VMs are on a compute node,
they have the same "priority". Their resources are not added/subtracted
once they are "claimed" in the placement service, even when you stop the
VM. This is because those resources are reserved on that host for when
the user starts the VM back up.
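To make that concrete, here's a rough sketch (untested, assuming
openstacksdk with a "mycloud" entry in clouds.yaml and a server named
"prod-vm-1" -- both made-up names) showing that stopping a server
leaves its placement allocations intact:

    import openstack

    conn = openstack.connect(cloud='mycloud')
    server = conn.compute.find_server('prod-vm-1')

    # Stopping powers off the guest but does not release the
    # VCPU/MEMORY_MB/DISK_GB allocations held against the host.
    conn.compute.stop_server(server)
    conn.compute.wait_for_server(server, status='SHUTOFF')

    # Ask the placement API directly; the server UUID is the
    # consumer UUID for its own allocations.
    resp = conn.session.get(
        '/allocations/%s' % server.id,
        endpoint_filter={'service_type': 'placement'})
    print(resp.json()['allocations'])  # still non-empty after stop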
It sounds like what you're looking for is related to a couple of public
cloud work group requests [1][2] where stopping a server would be akin
to shelving it, which frees up its resources, and starting it would
unshelve the server and reschedule it (see the sketch after the links
below). The related mailing list discussion is here [3] (I couldn't
find a prettier archive for openstack-dev).
[1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679
[2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681
[3] https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg123170.html
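For reference, the shelve/unshelve flow those requests build on looks
roughly like this with openstacksdk (again a hedged, untested sketch;
the server name is made up, and how quickly the host's claim is
released depends on nova's shelved_offload_time config option):

    import openstack

    conn = openstack.connect(cloud='mycloud')
    server = conn.compute.find_server('dev-vm-1')

    # Shelve: snapshot the guest; once offloaded (immediately if
    # shelved_offload_time=0), the host's resource claim is freed.
    conn.compute.shelve_server(server)
    conn.compute.wait_for_server(server, status='SHELVED_OFFLOADED')

    # Unshelve: goes back through the scheduler, so the server may
    # land on a different compute node than before.
    conn.compute.unshelve_server(server)
    conn.compute.wait_for_server(server, status='ACTIVE')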
--
Thanks,
Matt