[Openstack-operators] [Nova] Reconciling flavors and block device mappings

Kostiantyn.Volenbovskyi at swisscom.com Kostiantyn.Volenbovskyi at swisscom.com
Fri Aug 26 07:44:44 UTC 2016


Hi, 
Option 1 (=what the patches suggest) sounds totally fine.
Option 3 > Allow block device mappings, when present, to mostly determine instance packing
sounds like option 1 + additional logic (=keyword 'mostly').
I think I'm missing something about the 'undermining the purpose of the flavor' part.
Why would the new behavior require one more parameter to limit the number of instances per host?
Won't those VMs still be under the other flavor constraints, such as CPU and RAM, and won't those be the ones controlling 'instance packing' anyway?
Does option 3 cover the case where someone relied on e.g. the flavor root disk size for an instance booted from volume - so that instance packing will change once the patches are implemented?
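To make my packing question concrete, here is a minimal sketch (not Nova's actual scheduler code; the Flavor class and requested_disk_gb() function are made up for illustration) of the disk accounting difference I understand the patches to introduce for boot-from-volume instances:

from dataclasses import dataclass


@dataclass
class Flavor:
    vcpus: int
    ram_mb: int
    root_gb: int
    ephemeral_gb: int
    swap_mb: int


def requested_disk_gb(flavor, boot_from_volume, count_root_for_volumes):
    # Local disk the scheduler would account for one instance.
    disk = flavor.ephemeral_gb + flavor.swap_mb // 1024
    if boot_from_volume and not count_root_for_volumes:
        # Proposed behavior: the root disk lives on the Cinder backend,
        # so root_gb is not counted as local disk.
        return disk
    # Current behavior: root_gb is counted even for a remote root volume.
    return disk + flavor.root_gb


m1_large = Flavor(vcpus=4, ram_mb=8192, root_gb=80, ephemeral_gb=0, swap_mb=0)
print(requested_disk_gb(m1_large, boot_from_volume=True,
                        count_root_for_volumes=True))   # 80 GB claimed today
print(requested_disk_gb(m1_large, boot_from_volume=True,
                        count_root_for_volumes=False))  # 0 GB with the patches
# In both cases vcpus/ram_mb from the flavor are still claimed on the host,
# which is why I'd expect CPU and RAM to keep constraining instance packing.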

BR, 
Konstantin

> -----Original Message-----
> From: Andrew Laski [mailto:andrew at lascii.com]
> Sent: Thursday, August 25, 2016 10:20 PM
> To: openstack-dev at lists.openstack.org
> Cc: openstack-operators at lists.openstack.org
> Subject: [Openstack-operators] [Nova] Reconciling flavors and block device
> mappings
> 
> Cross posting to gather some operator feedback.
> 
> There have been a couple of contentious patches gathering attention recently
> about how to handle the case where a block device mapping supersedes flavor
> information. Before moving forward on either of those I think we should have a
> discussion about how best to handle the general case, and how to handle any
> changes in behavior that result from that.
> 
> There are two cases presented:
> 
> 1. A user boots an instance using a Cinder volume as a root disk, however the
> flavor specifies root_gb = x where x > 0. The current behavior in Nova is that the
> scheduler is given the flavor root_gb info to take into account during scheduling.
> This may disqualify some hosts from receiving the instance even though that disk
> space is not necessary because the root disk is a remote volume.
> https://review.openstack.org/#/c/200870/
> 
> 2. A user boots an instance and uses the block device mapping parameters to
> specify a swap or ephemeral disk size that is less than what is specified on the
> flavor. This leads to the same problem as above: the scheduler is provided
> information that doesn't match the actual disk space to be consumed.
> https://review.openstack.org/#/c/352522/
> 
> Now the issue: while it's easy enough to provide proper information to the
> scheduler on what the actual disk consumption will be when using block device
> mappings, that undermines one of the purposes of flavors, which is to control
> instance packing on hosts. So the outstanding question is: to what extent should
> users have the ability to use block device mappings to bypass flavor constraints?
> 
> One other thing to note is that while a flavor constrains how much local disk is
> used, it does not constrain volume size at all. So a user can specify an
> ephemeral/swap disk <= what the flavor provides but can have an arbitrarily
> sized root disk if it's a remote volume.
> 
> Some possibilities:
> 
> Completely allow block device mappings, when present, to determine instance
> packing. This is what the patches above propose and there's a strong desire for
> this behavior from some folks. But it changes how many instances may fit on a
> host, which could be undesirable to some.
> 
> Keep the status quo. It's clear that this is undesirable based on the bug reports and
> proposed patches above.
> 
> Allow block device mappings, when present, to mostly determine instance
> packing. By that I mean that the scheduler only takes into account local disk that
> would be consumed, but we add additional configuration to Nova which limits
> the number of instances that can be placed on a host. This is a compromise
> solution but I fear that a single int value does not meet the needs of deployers
> wishing to limit instances on a host. They want it to take into account CPU
> allocations and RAM and disk, in short, a flavor :)
> 
> And of course there may be some other unconsidered solution. That's where
> you, dear reader, come in.
> 
> Thoughts?
> 
> -Andrew
> 
> 
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


