[nova][scheduler] DiskFilter return 0 when launching pre-allocated disk volume

Sean Mooney smooney at redhat.com
Tue Mar 22 12:11:27 UTC 2022


On Mon, 2022-03-21 at 15:31 -0400, Alan Davis wrote:
> All,
> 
>  I'm having a problem with launching a volume that already exists. I'm not
> finding any indication why DiskFilter is saying there's not enough disk
> space available.
what release of nova are you using?
the disk filter has been deprecated for a number of years and removed in recent releases in favor
of counting disk in placement.
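
on a placement-era release you can check what the scheduler actually sees with the osc-placement CLI plugin (illustrative commands; the provider uuid below is whatever `resource provider list` shows for your compute node):

```
# list resource providers, then inspect the DISK_GB inventory and usage
openstack resource provider list
openstack resource provider inventory list <compute-node-uuid>
openstack resource provider usage show <compute-node-uuid>
```

if DISK_GB reserved/allocation_ratio there looks wrong, that is where the capacity decision is really made, not in the legacy filter.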

> This is not a new system, it's been in use for over a year and has more
> than 20 VM's running on it already. Creating a volume (either from image or
> from install iso) is my normal way of creating instances, so this isn't
> anything I haven't done many times.

instances that are booted from a volume should have a flavor that requests 0 root GB.
if you use a flavor that requests root GB, the disk filter will still count that disk
against the host's local storage, even though the root disk actually lives on the cinder backend.
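
to illustrate the accounting (a rough python sketch, not the actual nova source), the legacy DiskFilter charges the flavor's root_gb against the hypervisor's local disk even for a boot-from-volume instance:

```python
# Hypothetical sketch of the legacy DiskFilter's pass/fail decision.
# Names and the exact limit formula are simplified for illustration.

def disk_filter_passes(free_disk_mb, total_usable_disk_gb,
                       disk_allocation_ratio,
                       root_gb, ephemeral_gb, swap_mb):
    """Return True if the host appears to have enough local disk."""
    requested_mb = 1024 * (root_gb + ephemeral_gb) + swap_mb
    # Overcommit headroom, mirroring the filter's limit calculation.
    limit_mb = total_usable_disk_gb * 1024 * disk_allocation_ratio
    used_mb = total_usable_disk_gb * 1024 - free_disk_mb
    return requested_mb <= limit_mb - used_mb

# A flavor with root_gb=80 still "requests" 80 GB of local disk, so a
# host with only 74 GB of local capacity fails even though the real
# root disk is an 80 GB cinder volume:
print(disk_filter_passes(free_disk_mb=74 * 1024, total_usable_disk_gb=74,
                         disk_allocation_ratio=1.0,
                         root_gb=80, ephemeral_gb=0, swap_mb=0))  # False

# The same request with a root_gb=0 boot-from-volume flavor passes:
print(disk_filter_passes(free_disk_mb=74 * 1024, total_usable_disk_gb=74,
                         disk_allocation_ratio=1.0,
                         root_gb=0, ephemeral_gb=0, swap_mb=0))   # True
```

this matches the error above: the 80 GB flavor cannot fit on a host whose local VG has 74 GB, regardless of free space in cinder-volumes.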
> 
> Any ideas on why or what debugging steps I can take would be appreciated.
> 
> Here's the error from nova-scheduler.log
> 
> 2022-03-21 15:20:22.021 16602 INFO nova.filters
> [req-f8909a62-723d-4128-9e68-2fe445fa861d a676762eea994d08a7ce018c3ccbb0b8
> 1ab386267c2b4c969d551ea763c906dc - default default] Filtering removed all
> hosts for the request with instance ID
> '59dfe21d-c527-4fd7-ad50-39c144af93fc'. Filter results: ['RetryFilter:
> (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)',
> 'RamFilter: (start: 1, end: 1)', 'DiskFilter: (start: 1, end: 0)']
the retry filter, ram filter and disk filter are not required in any release that has placement,
so if you are using ~pike or later you do not need them enabled. placement will have already validated ram and disk
capacity, and the retry filter is replaced by a set of alternate hosts that are selected when the vm is initially scheduled.
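
for example, on a placement-era release the scheduler filter list can drop those filters entirely. a sketch of the relevant nova.conf section (adjust the list to the filters you actually rely on):

```ini
# /etc/nova/nova.conf on the scheduler host -- illustrative only.
[filter_scheduler]
# RetryFilter, RamFilter and DiskFilter omitted: placement already
# accounts for RAM and disk, and alternate hosts replace retries.
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```

restart nova-scheduler after changing the filter list for it to take effect.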
> 
> vgs (yes, i know... LVM) shows plenty of space, as well
> 
>  VG             #PV #LV #SN Attr   VSize  VFree
>   centos_stack2    1   3   0 wz--n- 74.00g     0
>   cinder-volumes   1  51   1 wz--n- <9.10t <3.64t
> 
> openstack says:
> +--------------------------------+---------------------------------------+
> > Field                          | Value                                 |
> +--------------------------------+---------------------------------------+
> > attachments                    | []                                    |
> > availability_zone              | nova                                  |
> > bootable                       | true                                  |
> > consistencygroup_id            | None                                  |
> > created_at                     | 2022-03-21T15:44:44.000000            |
> > description                    | root disk for rh8.3 vm                |
> > encrypted                      | False                                 |
> > id                             | 3a2aab97-d49d-431c-824d-5fbdcac40fa3  |
> > migration_status               | None                                  |
> > multiattach                    | False                                 |
> > name                           | rhel83-str-root                       |
> > os-vol-host-attr:host          | stack2.xxxx@lvm#lvm                   |
> > os-vol-mig-status-attr:migstat | None                                  |
> > os-vol-mig-status-attr:name_id | None                                  |
> > os-vol-tenant-attr:tenant_id   | 1ab386267c2b4c969d551ea763c906dc      |
> > properties                     |                                       |
> > replication_status             | None                                  |
> > size                           | 80                                    |
> > snapshot_id                    | None                                  |
> > source_volid                   | None                                  |
> > status                         | reserved                              |
> > type                           | iscsi                                 |
> > updated_at                     | 2022-03-21T19:20:20.000000            |
> > user_id                        | a676762eea994d08a7ce018c3ccbb0b8      |
> +--------------------------------+---------------------------------------+
> 



