[telemetry] volume_type_id stored instead of volume_type name

Rafael Weingärtner rafaelweingartner at gmail.com
Mon Jun 17 14:44:25 UTC 2019


Just to update this on a public list:

When the event data is created, Cinder sets the volume type to the volume
type ID. The pollsters, on the other hand, always write the volume type
name into the volume_type attribute in Gnocchi, but every event then
updates the value back to the ID (it is like a race between these two
subsystems to see who updates the attribute last). The catch is that
deleted volumes are no longer polled. For them, the last update therefore
comes from the event processing side, which sets the value to the volume
type ID. This explains why only deleted volumes were seen with the volume
type set to the ID.
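
To make the race more concrete, here is a tiny, purely illustrative Python
sketch. None of these functions exist in Ceilometer or Gnocchi; they only
model the "last writer wins" behaviour described above:

    # Toy model of the two writers racing for the same Gnocchi attribute.
    resource = {"volume_type": None}  # stands in for the Gnocchi resource

    def pollster_update(resource, volume_type_name):
        # The pollster resolves and writes the human-readable name...
        resource["volume_type"] = volume_type_name

    def event_update(resource, volume_type_id):
        # ...while the event/notification path writes the raw ID from Cinder.
        resource["volume_type"] = volume_type_id

    # While the volume exists, both paths keep overwriting each other,
    # and a later poll always puts the name back:
    event_update(resource, "8bd7e1b1-3396-49bf-802c-8c31a9444895")
    pollster_update(resource, "v-ssd-std")
    print(resource["volume_type"])  # v-ssd-std

    # Once the volume is deleted it is no longer polled, so the delete
    # event is the last writer and the ID sticks:
    event_update(resource, "8bd7e1b1-3396-49bf-802c-8c31a9444895")
    print(resource["volume_type"])  # 8bd7e1b1-... (what Florian observed)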

So, to fix it, we need to decide which "standard" we want to use. At first
sight, an ID looks more natural when integrating system to system via an
API, but for humans accessing the Cinder API it is probably better to see
the volume type name instead. Because the API is publicly available, and
we never know what consumes it, I would say that changing the attribute
from volume type name to volume type ID could break consumers we are not
aware of. On the other hand, fixing the event data should not break
anything, because we know (hopefully) where those notifications are
pushed to.
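
In the meantime, anyone who needs human-readable values out of Gnocchi can
resolve the ID back to a name through the Cinder API. Below is a rough
consumer-side sketch using python-cinderclient; the endpoint and
credentials are placeholders and the helper name is only illustrative:

    # Consumer-side workaround sketch: resolve a volume type ID to its name
    # via the Cinder API. All credentials/URLs below are placeholders.
    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3
    from cinderclient import client as cinder_client

    auth = v3.Password(auth_url="http://keystone:5000/v3",
                       username="admin", password="secret",
                       project_name="admin",
                       user_domain_id="default", project_domain_id="default")
    cinder = cinder_client.Client("3", session=ks_session.Session(auth=auth))

    def volume_type_display(value):
        """Return the volume type name whether we got a name or an ID."""
        try:
            return cinder.volume_types.get(value).name
        except Exception:
            # Not an ID (or not visible to us); assume it is already a name.
            return value

    print(volume_type_display("8bd7e1b1-3396-49bf-802c-8c31a9444895"))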

We decided to move forward internally and fix the event creation code to
use the volume type name there. As soon as we finish, we will push the
change upstream and open a PR.
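
For reference, the change we have in mind is roughly of this shape. This
is a simplified, hypothetical sketch of the notification payload builder,
not the actual Cinder code; function and field names are illustrative:

    # Hypothetical sketch of the fix: publish the volume type *name* in the
    # event/usage payload instead of the raw volume_type_id.
    def build_volume_usage(volume):
        volume_type = volume.get("volume_type") or {}
        return {
            "volume_id": volume["id"],
            # before: "volume_type": volume["volume_type_id"]
            # after: fall back to the ID only if the name cannot be resolved
            "volume_type": volume_type.get("name") or volume["volume_type_id"],
            "size": volume["size"],
        }

    example = {
        "id": "b5496a42-c766-4267-9248-6149aa9dd483",
        "volume_type_id": "8bd7e1b1-3396-49bf-802c-8c31a9444895",
        "volume_type": {"name": "v-ssd-std"},
        "size": 100,
    }
    print(build_volume_usage(example)["volume_type"])  # v-ssd-std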

On Thu, Jun 6, 2019 at 5:22 AM Florian Engelmann <
florian.engelmann at everyware.ch> wrote:

> Hi,
>
> some volumes are stored with the volume_type ID instead of the
> volume_type name:
>
> openstack metric resource history --details b5496a42-c766-4267-9248-6149aa9dd483 \
>   -c id -c revision_start -c revision_end -c instance_id -c volume_type
>
> +--------------------------------------+----------------------------------+----------------------------------+--------------------------------------+--------------------------------------+
> | id                                   | revision_start                   | revision_end                     | instance_id                          | volume_type                          |
> +--------------------------------------+----------------------------------+----------------------------------+--------------------------------------+--------------------------------------+
> | b5496a42-c766-4267-9248-6149aa9dd483 | 2019-05-08T07:21:35.354474+00:00 | 2019-05-21T09:18:32.767426+00:00 | 662998da-c3d1-45c5-9120-2cff6240e3b6 | v-ssd-std                            |
> | b5496a42-c766-4267-9248-6149aa9dd483 | 2019-05-21T09:18:32.767426+00:00 | 2019-05-21T09:18:32.845700+00:00 | 662998da-c3d1-45c5-9120-2cff6240e3b6 | v-ssd-std                            |
> | b5496a42-c766-4267-9248-6149aa9dd483 | 2019-05-21T09:18:32.845700+00:00 | None                             | 662998da-c3d1-45c5-9120-2cff6240e3b6 | 8bd7e1b1-3396-49bf-802c-8c31a9444895 |
> +--------------------------------------+----------------------------------+----------------------------------+--------------------------------------+--------------------------------------+
>
>
> I was not able to find anything fishy in Ceilometer, so I guess it could
> be some event/notification with a wrong payload?
>
> Could anyone please verify this error is not unique to our (Rocky)
> environment by running:
>
> openstack metric resource list --type volume  -c id -c volume_type
>
>
> All the best,
> Florian
>


-- 
Rafael Weingärtner