<!DOCTYPE html>
<html><head>
<meta charset="UTF-8">
<style type="text/css">.mceResizeHandle {position: absolute;border: 1px solid black;background: #FFF;width: 5px;height: 5px;z-index: 10000}.mceResizeHandle:hover {background: #000}img[data-mce-selected] {outline: 1px solid black}img.mceClonedResizable, table.mceClonedResizable {position: absolute;outline: 1px dashed black;opacity: .5;z-index: 10000}
</style></head><body style=""><div> </div>
<div>Hi Dmitriy Rabotyagov</div>
<div> </div>
<div>glance --version</div>
<div> </div>
<div>3.6.0</div>
<div> </div>
<div>Where do I find this information, i.e. which "backend driver" glance uses?</div>
<div> </div>
<div>Did you mean the glance-api service?</div>
<div><img src="cid:225100c7b4aa456fb61c7202b9a8c7d1@Open-Xchange" border="0" alt=""></div>
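<div> </div>
<div>For reference, a minimal sketch of where the configured glance backend can usually be read from (the /etc/glance/glance-api.conf path inside the glance container and the exact option names are assumptions; older setups use stores/default_store, newer ones enabled_backends/default_backend):</div>
<div>
<pre>
# run inside the glance container / on the glance host (path assumed)
grep -E 'enabled_backends|default_backend|^stores|default_store' /etc/glance/glance-api.conf
</pre>
</div>
<div> </div>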
<div><br>
> openstack-discuss-request@lists.openstack.org wrote on 23 January 2023 at 16:29:<br>
> <br>
> Today's Topics:<br>
> <br>
> 1. Re: Snapshot error (Dmitriy Rabotyagov)<br>
> 2. Re: Snapshots disappear during saving (Karera Tony)<br>
> 3. Re: [kolla-ansible] [cinder] Setting up multiple LVM cinder<br>
> backends located on different servers (Alan Bishop)<br>
> <br>
> ----------------------------------------------------------------------<br>
> <br>
> Message: 1<br>
> Date: Mon, 23 Jan 2023 15:56:16 +0100<br>
> From: Dmitriy Rabotyagov <noonedeadpunk@gmail.com><br>
> Cc: openstack-discuss@lists.openstack.org<br>
> Subject: Re: Snapshot error<br>
> Message-ID:<br>
> <CAPd_6AuideLVJebPUX4zBf-Q7GmM4zVOVXjY=dsma8+PGiEnow@mail.gmail.com><br>
> Content-Type: text/plain; charset="UTF-8"<br>
> <br>
> Hi there,<br>
> <br>
> Can you kindly describe what the error actually is? Also, what backend<br>
> driver do you use for glance?<br>
> <br>
> The mentioned bug should already be covered by the 25.2.0 release, and<br>
> from what I gathered it was limited to the Swift backend.<br>
> <br>
> On Mon, 23 Jan 2023 at 15:42, Kaiser Wassilij <wassilij.kaiser@dhbw-mannheim.de> wrote:<br>
> ><br>
> > Hello,<br>
> ><br>
> > I upgraded from Victoria to Yoga.<br>
> ><br>
> > DISTRIB_ID="OSA"<br>
> > DISTRIB_RELEASE="25.2.0"<br>
> > DISTRIB_CODENAME="Yoga"<br>
> > DISTRIB_DESCRIPTION="OpenStack-Ansible"<br>
> ><br>
> > I have this error:<br>
> ><br>
> > infra1-glance-container-dc13a04b glance-wsgi-api[282311]: 2023-01-23 13:25:06.745 282311 INFO glance.api.v2.image_data [req-10359154-0be4-4da1-9e85-2d94079c17b4 b2dca74976034d5b9925bdcb03470603 021ce436ab004cde851055bac66370bc - default default] Unable to create trust: no such option collect_timing in group [keystone_authtoken] Use the existing user token.<br>
> ><br>
> > 2023-01-23 14:09:28.890 282298 DEBUG glance.api.v2.images [req-bf237e2c-9aca-4089-a56f-dfa245afdcc6 b2dca74976034d5b9925bdcb03470603 021ce436ab004cde851055bac66370bc - default default] The 'locations' list of image 2995602a-b5da-4758-a1f9-f6b083815f9b is empty _format_image /openstack/venvs/glance-25.2.0/lib/python3.8/site-packages/glance/api/v2/images.py<br>
> ><br>
> > https://bugs.launchpad.net/glance/+bug/1916052<br>
> ><br>
> > Does anyone have the same error, and how did you solve it?<br>
> ><br>
> > Kind regards<br>
> <br>
> ------------------------------<br>
> <br>
> Message: 2<br>
> Date: Mon, 23 Jan 2023 16:56:31 +0200<br>
> From: Karera Tony <tonykarera@gmail.com><br>
> To: Sofia Enriquez <senrique@redhat.com><br>
> Cc: openstack-discuss <openstack-discuss@lists.openstack.org><br>
> Subject: Re: Snapshots disappear during saving<br>
> Message-ID:<br>
> <CA+69TL0PLcg0XHmpj9Wd55QtatHUHony6ciWKek-q6Ca6AfW+A@mail.gmail.com><br>
> Content-Type: text/plain; charset="utf-8"<br>
> <br>
> Hello Sofia,<br>
> <br>
> It is actually an instance snapshot, not a volume snapshot.<br>
> I click on Create Snapshot in the instance options.<br>
> <br>
> Regards<br>
> <br>
> Tony Karera<br>
> <br>
> On Mon, Jan 23, 2023 at 3:59 PM Sofia Enriquez <senrique@redhat.com> wrote:<br>
> <br>
> > Hi Karera, hope this email finds you well.<br>
> ><br>
> > We need more information in order to reproduce this issue.<br>
> ><br>
> > - Do you mind sharing the c-vol logs of the operation, to see if there are any<br>
> > errors?<br>
> > - How do you create the snapshot? Do you mind sharing the steps to<br>
> > reproduce this?<br>
> ><br>
> > Thanks in advance,<br>
> > Sofia<br>
> ><br>
> > On Mon, Jan 23, 2023 at 1:20 PM Karera Tony <tonykarera@gmail.com> wrote:<br>
> ><br>
> >> Dear Team,<br>
> >><br>
> >> I am using OpenStack Wallaby deployed using kolla-ansible.<br>
> >><br>
> >> I installed Glance with the Ceph backend and all was well.<br>
> >> However, when I create snapshots, they disappear while they are being saved.<br>
> >><br>
> >> Any idea on how to resolve this?<br>
> >><br>
> >> Regards<br>
> >><br>
> >> Tony Karera<br>
> ><br>
> > --<br>
> ><br>
> > Sofía Enriquez<br>
> ><br>
> > she/her<br>
> ><br>
> > Software Engineer<br>
> ><br>
> > Red Hat PnT <https://www.redhat.com><br>
> ><br>
> > IRC: @enriquetaso<br>
> > @RedHat <https://twitter.com/redhat> Red Hat<br>
> > <https://www.linkedin.com/company/red-hat> Red Hat<br>
> > <https://www.facebook.com/RedHatInc><br>
> > <https://www.redhat.com><br>
> ><br>
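<div> </div>
<div>[Side note on the snapshot thread above: a sketch of CLI calls one could use to see whether the instance snapshot image actually reaches glance and what status it ends up in; the instance and image names below are placeholders.]</div>
<div>
<pre>
# create an instance snapshot and watch the resulting image (names are placeholders)
openstack server image create --name snap-test my-instance
openstack image list --long | grep snap-test
openstack image show snap-test -c status -c size
</pre>
</div>
<div> </div>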
> <br>
> ------------------------------<br>
> <br>
> Message: 3<br>
> Date: Mon, 23 Jan 2023 07:29:12 -0800<br>
> From: Alan Bishop <abishop@redhat.com><br>
> To: A Monster <amonster369@gmail.com><br>
> Cc: openstack-discuss <openstack-discuss@lists.openstack.org><br>
> Subject: Re: [kolla-ansible] [cinder] Setting up multiple LVM cinder<br>
> backends located on different servers<br>
> Message-ID:<br>
> <CADO3vb5J3K2dBgbQWzmFYq_VsEaQCVBzW-5MGRTW-LA9=BGRfg@mail.gmail.com><br>
> Content-Type: text/plain; charset="utf-8"<br>
> <br>
> On Sat, Jan 21, 2023 at 4:39 AM A Monster <amonster369@gmail.com> wrote:<br>
> <br>
> > First of all, thank you for your answer, it's exactly what I was looking for.<br>
> > What is still ambiguous for me is the name of the volume group I specified<br>
> > in the globals.yml file before running the deployment. The default value is<br>
> > cinder-volumes; however, after I added the second LVM backend, I kept the<br>
> > same volume group for lvm-1 but chose another name for lvm-2. Was it<br>
> > possible to keep the same name for both? If not, how can I specify<br>
> > the different backends directly from the globals.yml file, if that is possible?<br>
> ><br>
> <br>
> The LVM driver's volume_group option is significant to each LVM backend,<br>
> but only to the LVM backends on that controller. In other words, two<br>
> controllers can each have an LVM backend using the same "cinder-volumes"<br>
> volume group. But if a controller is configured with multiple LVM backends,<br>
> each backend must be configured with a unique volume_group. So, the answer<br>
> to your question, "was it possible to keep the same nomination for both?"<br>
> is yes.<br>
> <br>
> I'm not familiar with kolla-ansible and its globals.yml file, so I don't<br>
> know if that file can be leveraged to provide a different volume_group<br>
> value to each controller. The file name suggests it contains global<br>
> settings that would be common to every node. You'll need to find a way to<br>
> specify the value for the lvm-2 backend (the one that doesn't use<br>
> "cinder-volumes"). Also bear in mind that "cinder-volumes" is the default<br>
> value [1], so you don't even need to specify that for the backend that *is*<br>
> using that value.<br>
> <br>
> [1]<br>
> https://github.com/openstack/cinder/blob/4c9b76b9373a85f8dfae28f240bb130525e777af/cinder/volume/drivers/lvm.py#L48<br>
> <br>
> Alan<br>
> <br>
> On Fri, Jan 20, 2023, 20:51 Alan Bishop <abishop@redhat.com> wrote:<br>
> ><br>
> >> On Wed, Jan 18, 2023 at 6:38 AM A Monster <amonster369@gmail.com> wrote:<br>
> >><br>
> >>> I have an OpenStack configuration with 3 controller nodes and multiple<br>
> >>> compute nodes; one of the controllers has LVM storage based on HDD<br>
> >>> drives, while another one has an SSD-based one. When I tried to configure the<br>
> >>> two different types of storage as cinder backends I faced a dilemma, since<br>
> >>> according to the documentation I have to specify the two different backends<br>
> >>> in the cinder configuration, as explained here:<br>
> >>> <https://docs.openstack.org/cinder/latest/admin/multi-backend.html><br>
> >>> Since I want to separate disk types when creating volumes, I<br>
> >>> had to specify different backend names, but I don't know whether this<br>
> >>> configuration should be written on both storage nodes, or whether I should<br>
> >>> give each storage node only the configuration related to<br>
> >>> its own type of disks.<br>
> >>><br>
> >><br>
> >> The key factor in understanding how to configure the cinder-volume<br>
> >> services for your use case is knowing how the volume services operate and<br>
> >> how they interact with the other cinder services. In short, you only define<br>
> >> backends in the cinder-volume service that "owns" that backend. If<br>
> >> controller-X only handles lvm-X, then you only define that backend on that<br>
> >> controller. Don't include any mention of lvm-Y if that one is handled by<br>
> >> another controller. The other services (namely the api and schedulers)<br>
> >> learn about the backends when each of them reports its status via cinder's<br>
> >> internal RPC framework.<br>
> >><br>
> >> This means your lvm-1 service running on one controller should only have<br>
> >> the one lvm-1 backend (with enabled_backends=lvm-1), and NO mention at all<br>
> >> of the lvm-3 backend on the other controller. Likewise, the other<br>
> >> controller should only contain the lvm-3 backend, with its<br>
> >> enabled_backends=lvm-3.<br>
> >><br>
> >>> Now, I tried writing the same configuration for both nodes, but I found<br>
> >>> out that the volume service related to server1 concerning disks in server2<br>
> >>> is down, and the volume service in server2 concerning disks in server1 is<br>
> >>> also down.<br>
> >>><br>
> >>> $ openstack volume service list<br>
> >>> +------------------+---------------------+------+---------+-------+----------------------------+<br>
> >>> | Binary | Host | Zone | Status | State | Updated At |<br>
> >>> +------------------+---------------------+------+---------+-------+----------------------------+<br>
> >>> | cinder-scheduler | controller-01 | nova | enabled | up | 2023-01-18T14:27:51.000000 |<br>
> >>> | cinder-scheduler | controller-02 | nova | enabled | up | 2023-01-18T14:27:41.000000 |<br>
> >>> | cinder-scheduler | controller-03 | nova | enabled | up | 2023-01-18T14:27:50.000000 |<br>
> >>> | cinder-volume | controller-03@lvm-1 | nova | enabled | up | 2023-01-18T14:27:42.000000 |<br>
> >>> | cinder-volume | controller-01@lvm-1 | nova | enabled | down | 2023-01-18T14:10:00.000000 |<br>
> >>> | cinder-volume | controller-01@lvm-3 | nova | enabled | down | 2023-01-18T14:09:42.000000 |<br>
> >>> | cinder-volume | controller-03@lvm-3 | nova | enabled | down | 2023-01-18T12:12:19.000000 |<br>
> >>> +------------------+---------------------+------+---------+-------+----------------------------+<br>
> >>><br>
> >> Unless you do a fresh deployment, you will need to remove the invalid<br>
> >> services that will always be down. Those would be the ones on controller-X<br>
> >> where the backend is actually on controller-Y. You'll use the cinder-manage<br>
> >> command to do that. From the data you supplied, it seems the lvm-1 backend<br>
> >> is up on controller-03, and the lvm-3 backend on that controller is down.<br>
> >> The numbering seems backwards, but I'll stick with this example. To delete the<br>
> >> lvm-3 backend, which is down because that backend is actually on another<br>
> >> controller, you'd issue this command:<br>
> >><br>
> >> $ cinder-manage service remove cinder-volume controller-03@lvm-3<br>
> >><br>
> >> Don't worry if you accidentally delete a "good" service. The list will be<br>
> >> refreshed each time the cinder-volume services refresh their status.<br>
> >><br>
> >>> This is the configuration I have written in the configuration files for<br>
> >>> cinder_api, cinder_scheduler and cinder_volume on both servers.<br>
> >>><br>
> >>> enabled_backends = lvm-1,lvm-3<br>
> >>> [lvm-1]<br>
> >>> volume_group = cinder-volumes<br>
> >>> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver<br>
> >>> volume_backend_name = lvm-1<br>
> >>> target_helper = lioadm<br>
> >>> target_protocol = iscsi<br>
> >>> report_discard_supported = true<br>
> >>> [lvm-3]<br>
> >>> volume_group = cinder-volumes-ssd<br>
> >>> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver<br>
> >>> volume_backend_name = lvm-3<br>
> >>> target_helper = lioadm<br>
> >>> target_protocol = iscsi<br>
> >>> report_discard_supported = true<br>
> >>><br>
> >><br>
> >> At a minimum, on each controller you need to remove all references to the<br>
> >> backend that's actually on the other controller. The cinder-api and<br>
> >> cinder-scheduler services don't need any backend configuration. That's<br>
> >> because the backend sections and enabled_backends options are only relevant<br>
> >> to the cinder-volume service, and are ignored by the other services.<br>
> >><br>
> >> Alan<br>
> ><br>
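<div> </div>
<div>[Side note on the cinder thread above: a minimal sketch of the per-controller split Alan describes, reusing the backend names and options quoted in the thread. Which physical controller owns which volume group, the /etc/cinder/cinder.conf path, and the rest of the file are assumptions; with kolla-ansible these sections would normally come from its generated configuration rather than hand edits.]</div>
<div>
<pre>
# cinder.conf on the node that owns the "cinder-volumes" (HDD) volume group
[DEFAULT]
enabled_backends = lvm-1

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
volume_backend_name = lvm-1
target_helper = lioadm
target_protocol = iscsi
report_discard_supported = true

# cinder.conf on the node that owns the "cinder-volumes-ssd" volume group
[DEFAULT]
enabled_backends = lvm-3

[lvm-3]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-ssd
volume_backend_name = lvm-3
target_helper = lioadm
target_protocol = iscsi
report_discard_supported = true
</pre>
</div>
<div> </div>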
> <br>
> ------------------------------<br>
> <br>
> End of openstack-discuss Digest, Vol 51, Issue 69<br>
> *************************************************</div></body></html>