Snapshot error

Kaiser Wassilij wassilij.kaiser at dhbw-mannheim.de
Mon Jan 23 16:36:23 UTC 2023


 
Hi Dmitriy Rabotyagov,

glance --version

3.6.0

Where do I find the information about which backend driver is used for glance?

Did you mean the glance-api service?







> openstack-discuss-request at lists.openstack.org wrote on 23 January 2023
> at 16:29:
>
>
> Today's Topics:
>
> 1. Re: Snapshot error (Dmitriy Rabotyagov)
> 2. Re: Snapshots disappear during saving (Karera Tony)
> 3. Re: [kolla-ansible] [cinder] Setting up multiple LVM cinder
> backends located on different servers (Alan Bishop)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 23 Jan 2023 15:56:16 +0100
> From: Dmitriy Rabotyagov <noonedeadpunk at gmail.com>
> Cc: openstack-discuss at lists.openstack.org
> Subject: Re: Snapshot error
> Message-ID:
> <CAPd_6AuideLVJebPUX4zBf-Q7GmM4zVOVXjY=dsma8+PGiEnow at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi there,
>
> Can you kindly describe what the error actually is? Also, which backend
> driver do you use for glance?
>
> The bug you mention should already be covered by the 25.2.0 release, and
> from what I can tell it was limited to the Swift backend.
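>
> For example, a rough way to check which backend is configured (a general
> sketch assuming a default glance-api.conf path; adjust it for your
> deployment, e.g. run it inside the glance container on OSA):
>
> $ grep -E 'enabled_backends|default_backend|default_store|stores' \
>       /etc/glance/glance-api.conf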
>
> On Mon, 23 Jan 2023 at 15:42, Kaiser Wassilij
> <wassilij.kaiser at dhbw-mannheim.de> wrote:
> >
> >
> >
> > Hello,
> >
> > I upgraded from Victoria to Yoga.
> >
> > DISTRIB_ID="OSA"
> > DISTRIB_RELEASE="25.2.0"
> > DISTRIB_CODENAME="Yoga"
> > DISTRIB_DESCRIPTION="OpenStack-Ansible"
> >
> > I get this error:
> >
> > infra1-glance-container-dc13a04b glance-wsgi-api[282311]: 2023-01-23
> > 13:25:06.745 282311 INFO glance.api.v2.image_data
> > [req-10359154-0be4-4da1-9e85-2d94079c17b4 b2dca74976034d5b9925bdcb03470603
> > 021ce436ab004cde851055bac66370bc - default default] Unable to create trust:
> > no such option collect_timing in group [keystone_authtoken] Use the existing
> > user token.
> >
> > 2023-01-23 14:09:28.890 282298 DEBUG glance.api.v2.images
> > [req-bf237e2c-9aca-4089-a56f-dfa245afdcc6 b2dca74976034d5b9925bdcb03470603
> > 021ce436ab004cde851055bac66370bc - default default] The 'locations' list of
> > image 2995602a-b5da-4758-a1f9-f6b083815f9b is empty _format_image
> > /openstack/venvs/glance-25.2.0/lib/python3.8/site-packages/glance/api/v2/images.py
> >
> > https://bugs.launchpad.net/glance/+bug/1916052
> >
> >
> >
> > Does anyone have the same error, and how did you solve it?
> >
> >
> > Kind regards
> >
> >
>
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 23 Jan 2023 16:56:31 +0200
> From: Karera Tony <tonykarera at gmail.com>
> To: Sofia Enriquez <senrique at redhat.com>
> Cc: openstack-discuss <openstack-discuss at lists.openstack.org>
> Subject: Re: Snapshots disappear during saving
> Message-ID:
> <CA+69TL0PLcg0XHmpj9Wd55QtatHUHony6ciWKek-q6Ca6AfW+A at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello Sofia,
>
> It is actually an instance snapshot, not a volume snapshot.
> I click on "Create Snapshot" in the instance options.
>
> Regards
>
> Tony Karera
>
>
>
>
> On Mon, Jan 23, 2023 at 3:59 PM Sofia Enriquez <senrique at redhat.com> wrote:
>
> > Hi Karera, hope this email finds you well
> >
> > We need more information in order to reproduce this issue.
> >
> > - Do you mind sharing the c-vol logs of the operation to see if there are
> > any errors? (See the note below on where to find them.)
> > - How do you create the snapshot? Do you mind sharing the steps to
> > reproduce this?
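> >
> > (If it helps, in a kolla-ansible deployment the cinder-volume log usually
> > lives on the storage/controller host, for example:)
> >
> > $ tail -f /var/log/kolla/cinder/cinder-volume.log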
> >
> > Thanks in advance,
> > Sofia
> >
> > On Mon, Jan 23, 2023 at 1:20 PM Karera Tony <tonykarera at gmail.com> wrote:
> >
> >> Dear Team,
> >>
> >> I am using OpenStack Wallaby deployed with kolla-ansible.
> >>
> >> I installed Glance with the Ceph backend and all was well.
> >> However, when I create snapshots, they disappear while they are being saved.
> >>
> >> Any idea on how to resolve this?
> >>
> >> Regards
> >>
> >> Tony Karera
> >>
> >>
> >>
> >
> > --
> >
> > Sofía Enriquez
> >
> > she/her
> >
> > Software Engineer
> >
> > Red Hat PnT <https://www.redhat.com>
> >
> > IRC: @enriquetaso
> >
> >
>
> ------------------------------
>
> Message: 3
> Date: Mon, 23 Jan 2023 07:29:12 -0800
> From: Alan Bishop <abishop at redhat.com>
> To: A Monster <amonster369 at gmail.com>
> Cc: openstack-discuss <openstack-discuss at lists.openstack.org>
> Subject: Re: [kolla-ansible] [cinder] Setting up multiple LVM cinder
> backends located on different servers
> Message-ID:
> <CADO3vb5J3K2dBgbQWzmFYq_VsEaQCVBzW-5MGRTW-LA9=BGRfg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Sat, Jan 21, 2023 at 4:39 AM A Monster <amonster369 at gmail.com> wrote:
>
> > First of all, thank you for your answer; it's exactly what I was looking
> > for.
> > What is still ambiguous for me is the name of the volume group I specified
> > in the globals.yml file before running the deployment. The default value is
> > cinder-volumes; however, after I added the second LVM backend, I kept the
> > same volume group for lvm-1 but chose another name for lvm-2. Was it
> > possible to keep the same name for both? If not, how can I specify the
> > different backends directly from the globals.yml file, if that is possible?
> >
>
> The LVM driver's volume_group option only needs to be unique among the LVM
> backends on the same controller. In other words, two controllers can each
> have an LVM backend using the same "cinder-volumes" volume group. But if a
> single controller is configured with multiple LVM backends, each of those
> backends must be configured with a unique volume_group. So, the answer to
> your question, "was it possible to keep the same name for both?", is yes.
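>
> For illustration, a minimal cinder.conf sketch of that rule (the hostnames
> and backend names here are just placeholders):
>
> # controller-A: cinder-volume with a single LVM backend
> [lvm-1]
> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
> volume_group = cinder-volumes
>
> # controller-B: also a single LVM backend; reusing the same group name is fine
> [lvm-2]
> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
> volume_group = cinder-volumes
>
> # Two LVM backends on the SAME controller, however, would each need their
> # own volume group, e.g. cinder-volumes and cinder-volumes-ssd.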
>
> I'm not familiar with kolla-ansible and its globals.yml file, so I don't
> know if that file can be leveraged to provide a different volume_group
> value to each controller. The file name suggests it contains global
> settings that would be common to every node. You'll need to find a way to
> specify the value for the lvm-2 backend (the one that doesn't use
> "cinder-volumes"). Also bear in mind that "cinder-volumes" is the default
> value [1], so you don't even need to specify that for the backend that *is*
> using that value.
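>
> (One thing that might be worth checking, though I haven't verified it
> myself: kolla-ansible reportedly supports per-host config overrides, so
> something along these lines may let you give only the lvm-2 backend a
> different volume_group on its host. The variable name and paths below are
> a hedged sketch, not tested.)
>
> # globals.yml: volume group used by the standard LVM backend
> cinder_volume_group: "cinder-volumes"
>
> # per-host override, e.g. /etc/kolla/config/cinder/<hostname>/cinder-volume.conf
> [lvm-2]
> volume_group = cinder-volumes-ssd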
>
> [1]
> https://github.com/openstack/cinder/blob/4c9b76b9373a85f8dfae28f240bb130525e777af/cinder/volume/drivers/lvm.py#L48
>
> Alan
>
> On Fri, Jan 20, 2023, 20:51 Alan Bishop <abishop at redhat.com> wrote:
> >
> >>
> >>
> >> On Wed, Jan 18, 2023 at 6:38 AM A Monster <amonster369 at gmail.com> wrote:
> >>
> >>> I have an OpenStack configuration with 3 controller nodes and multiple
> >>> compute nodes. One of the controllers has LVM storage based on HDD
> >>> drives, while another one has SSD-based storage. When I tried to
> >>> configure the two different types of storage as cinder backends, I faced
> >>> a dilemma: according to the documentation, I have to specify the two
> >>> different backends in the cinder configuration, as explained here
> >>> <https://docs.openstack.org/cinder/latest/admin/multi-backend.html>.
> >>> Since I want to separate the disk types when creating volumes, I had to
> >>> specify different backend names, but I don't know whether this
> >>> configuration should be written on both storage nodes, or whether I
> >>> should give each storage node only the configuration related to its own
> >>> type of disks.
> >>>
> >>
> >> The key factor in understanding how to configure the cinder-volume
> >> services for your use case is knowing how the volume services operate and
> >> how they interact with the other cinder services. In short, you only define
> >> backends in the cinder-volume service that "owns" that backend. If
> >> controller-X only handles lvm-X, then you only define that backend on that
> >> controller. Don't include any mention of lvm-Y if that one is handled by
> >> another controller. The other services (namely the api and schedulers)
> >> learn about the backends when each of them reports its status via cinder's
> >> internal RPC framework.
> >>
> >> This means your lvm-1 service running on one controller should only have
> >> the one lvm-1 backend (with enabled_backends=lvm-1), and NO mention at all
> >> of the lvm-3 backend on the other controller. Likewise, the other
> >> controller should only contain the lvm-3 backend, with its
> >> enabled_backends=lvm-3.
> >>
> >>
> >>> Now, I tried writing the same configuration for both nodes, but I found
> >>> out that the volume service related to server1 concerning disks in server2
> >>> is down, and the volume service in server2 concerning disks in server1 is
> >>> also down.
> >>>
> >>> $ openstack volume service list
> >>> +------------------+---------------------+------+---------+-------+----------------------------+
> >>> | Binary           | Host                | Zone | Status  | State | Updated At                 |
> >>> +------------------+---------------------+------+---------+-------+----------------------------+
> >>> | cinder-scheduler | controller-01       | nova | enabled | up    | 2023-01-18T14:27:51.000000 |
> >>> | cinder-scheduler | controller-02       | nova | enabled | up    | 2023-01-18T14:27:41.000000 |
> >>> | cinder-scheduler | controller-03       | nova | enabled | up    | 2023-01-18T14:27:50.000000 |
> >>> | cinder-volume    | controller-03@lvm-1 | nova | enabled | up    | 2023-01-18T14:27:42.000000 |
> >>> | cinder-volume    | controller-01@lvm-1 | nova | enabled | down  | 2023-01-18T14:10:00.000000 |
> >>> | cinder-volume    | controller-01@lvm-3 | nova | enabled | down  | 2023-01-18T14:09:42.000000 |
> >>> | cinder-volume    | controller-03@lvm-3 | nova | enabled | down  | 2023-01-18T12:12:19.000000 |
> >>> +------------------+---------------------+------+---------+-------+----------------------------+
> >>>
> >>>
> >> Unless you do a fresh deployment, you will need to remove the invalid
> >> services that will always be down. Those would be the ones on controller-X
> >> where the backend is actually on controller-Y. You'll use the cinder-manage
> >> command to do that. From the data you supplied, it seems the lvm-1 backend
> >> is up on controller-03, and the lvm-3 backend on that controller is down.
> >> The numbering seems backwards, but I'll stick with this example. To delete
> >> the
> >> lvm-3 backend, which is down because that backend is actually on another
> >> controller, you'd issue this command:
> >>
> >> $ cinder-manage service remove cinder-volume controller-03@lvm-3
> >>
> >> Don't worry if you accidentally delete a "good" service. The list will be
> >> refreshed each time the cinder-volume services refresh their status.
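> >>
> >> (The same pattern applies to any other entry that stays down because its
> >> backend really lives on another controller; the host@backend below is
> >> just a hypothetical example.)
> >>
> >> $ cinder-manage service remove cinder-volume controller-01@lvm-1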
> >>
> >>
> >>> This is the configuration I have written in the configuration files for
> >>> cinder-api, cinder-scheduler and cinder-volume on both servers.
> >>>
> >>> enabled_backends= lvm-1,lvm-3
> >>> [lvm-1]
> >>> volume_group = cinder-volumes
> >>> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
> >>> volume_backend_name = lvm-1
> >>> target_helper = lioadm
> >>> target_protocol = iscsi
> >>> report_discard_supported = true
> >>> [lvm-3]
> >>> volume_group=cinder-volumes-ssd
> >>> volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
> >>> volume_backend_name=lvm-3
> >>> target_helper = lioadm
> >>> target_protocol = iscsi
> >>> report_discard_supported = true
> >>>
> >>
> >> At a minimum, on each controller you need to remove all references to the
> >> backend that's actually on the other controller. The cinder-api and
> >> cinder-scheduler services don't need any backend configuration. That's
> >> because the backend sections and enabled_backends options are only relevant
> >> to the cinder-volume service, and are ignored by the other services.
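> >>
> >> For illustration, a rough sketch of how the cinder-volume configuration
> >> could end up per controller (which controller owns which backend is an
> >> assumption here; keep only the backend a given node really hosts):
> >>
> >> # cinder-volume on the controller with the HDD volume group
> >> [DEFAULT]
> >> enabled_backends = lvm-1
> >>
> >> [lvm-1]
> >> volume_group = cinder-volumes
> >> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
> >> volume_backend_name = lvm-1
> >> target_helper = lioadm
> >> target_protocol = iscsi
> >>
> >> # cinder-volume on the controller with the SSD volume group
> >> [DEFAULT]
> >> enabled_backends = lvm-3
> >>
> >> [lvm-3]
> >> volume_group = cinder-volumes-ssd
> >> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
> >> volume_backend_name = lvm-3
> >> target_helper = lioadm
> >> target_protocol = iscsi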
> >>
> >> Alan
> >>
> >>
> >
>
> ------------------------------
>
> End of openstack-discuss Digest, Vol 51, Issue 69
> *************************************************

