<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto">Hello Can,<div><br></div><div><div data-pm-slice="1 1 []" data-en-clipboard="true" style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); -webkit-text-size-adjust: auto;">To fix an RBD (RADOS Block Device) IOPS bottleneck on the client side in OpenStack, you can try the following:</div><div style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); -webkit-text-size-adjust: auto;"><br></div><ol style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); -webkit-text-size-adjust: auto;"><li>Monitor CPU and memory usage on the client machine to confirm it has sufficient resources. Tools such as top or htop show resource usage in real time, per core, which matters when qemu and librbd threads are pinned to specific cores.</li><li>Check the network bandwidth between the client and the storage system to rule it out as the bottleneck. Use iperf to measure throughput; tcpdump can help inspect traffic, but it does not measure performance.</li><li>Review the configuration of the storage system to ensure it is optimized for the workload. This may include adjusting the number and type of disks, as well as the RAID level and chunk size.</li><li>Consider a storage system with a higher IOPS rating, which may mean upgrading to faster disks or moving to a solution with more disks or SSDs.</li><li>Try a client machine with more resources (e.g., a faster CPU and more memory) to see whether it can issue more I/O requests.</li><li>Consider a faster network path between the client and the storage system, such as a faster network card or a direct connection rather than going through a switch.</li><li>If you are using Ceph as the underlying storage system, you can try tuning the Ceph configuration to improve performance. 
This may include adjusting the number of placement groups, the object size, or the number of OSDs (object storage daemons).</li></ol><div style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); -webkit-text-size-adjust: auto;"><br></div><div style="caret-color: rgb(0, 0, 0); color: rgb(0, 0, 0); -webkit-text-size-adjust: auto;">Note that an IOPS bottleneck can also occur on the server side (i.e., within the storage system itself). In that case, you may need to adjust the configuration of the storage system or add more resources (e.g., disks or SSDs) to improve performance.</div><div><br></div><div><br></div><div>BR,</div><div>Kerem Çeliker</div><div>keremceliker.medium.com</div><div>IBM | Red Hat Champion<br><br><div dir="ltr">Sent from my iPhone</div><div dir="ltr"><br><blockquote type="cite">On 28 Dec 2022, at 08:13, openstack-discuss-request@lists.openstack.org wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr"><span>Send openstack-discuss mailing list submissions to</span><br><span>    openstack-discuss@lists.openstack.org</span><br><span></span><br><span>To subscribe or unsubscribe via the World Wide Web, visit</span><br><span>    https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss</span><br><span></span><br><span>or, via email, send a message with subject or body 'help' to</span><br><span>    openstack-discuss-request@lists.openstack.org</span><br><span></span><br><span>You can reach the person managing the list at</span><br><span>    openstack-discuss-owner@lists.openstack.org</span><br><span></span><br><span>When replying, please edit your Subject line so it is more specific</span><br><span>than "Re: Contents of openstack-discuss digest..."</span><br><span></span><br><span></span><br><span>Today's Topics:</span><br><span></span><br><span>   1. [ops][nova] RBD IOPS bottleneck on client-side (Can Özyurt)</span><br><span>   2. 
[keystone][Meeting] Reminder Keystone meeting is cancelled</span><br><span>      today (Dave Wilde)</span><br><span>   3. Nova libvirt/kvm sound device (Zakhar Kirpichenko)</span><br><span>   4. Re: [Tacker][SRBAC] Update regarding implementation of</span><br><span>      project personas in Tacker (Yasufumi Ogawa)</span><br><span>   5. Re: [Tacker][SRBAC] Update regarding implementation of</span><br><span>      project personas in Tacker (manpreet kaur)</span><br><span></span><br><span></span><br><span>----------------------------------------------------------------------</span><br><span></span><br><span>Message: 1</span><br><span>Date: Tue, 27 Dec 2022 15:33:56 +0300</span><br><span>From: Can Özyurt <acozyurt@gmail.com></span><br><span>To: OpenStack Discuss <openstack-discuss@lists.openstack.org></span><br><span>Subject: [ops][nova] RBD IOPS bottleneck on client-side</span><br><span>Message-ID:</span><br><span>    <CAMf4N71awOfNyBk4FfpM6UCjH4AZWz+QuJwUnOv+itvqvPbTjw@mail.gmail.com></span><br><span>Content-Type: text/plain; charset="UTF-8"</span><br><span></span><br><span>Hi everyone,</span><br><span></span><br><span>I hope you are all doing well. We are trying to pinpoint an IOPS</span><br><span>problem with RBD and decided to ask you for your take on it.</span><br><span></span><br><span>1 control plane</span><br><span>1 compute node</span><br><span>5 storage with 8 SSD disks each</span><br><span>Openstack Stein/Ceph Mimic deployed with kolla-ansible on ubuntu-1804</span><br><span>(kernel 5.4)</span><br><span>isolcpus 4-127 on compute</span><br><span>vcpu_pin_set 4-127 in nova.conf</span><br><span></span><br><span>image_metadatas:</span><br><span>  hw_scsi_model: virtio-scsi</span><br><span>  hw_disk_bus: scsi</span><br><span>flavor_metadatas:</span><br><span>  hw:cpu_policy: dedicated</span><br><span></span><br><span>What we have tested:</span><br><span>fio --directory=. 
--ioengine=libaio --direct=1</span><br><span>--name=benchmark_random_read_write --filename=test_rand --bs=4k</span><br><span>--iodepth=32 --size=1G --readwrite=randrw --rwmixread=50 --time_based</span><br><span>--runtime=300s --numjobs=16</span><br><span></span><br><span>1. First we run the fio test above on a guest VM, we see average 5K/5K</span><br><span>read/write IOPS consistently. What we realize is that during the test,</span><br><span>one single core on compute host is used at max, which is the first of</span><br><span>the pinned cpus of the guest. 'top -Hp $qemupid' shows that some</span><br><span>threads (notably tp_librbd) share the very same core throughout the</span><br><span>test. (also emulatorpin set = vcpupin set as expected)</span><br><span>2. We remove isolcpus and every other configuration stays the same.</span><br><span>Now fio tests show 11K/11K read/write IOPS. No bottlenecked single</span><br><span>cpu on the host, observed threads seem to visit all emulatorpins.</span><br><span>3. We bring isolcpus back and redeploy the cluster with Train/Nautilus</span><br><span>on ubuntu-1804. Observations are identical to #1.</span><br><span>4. We tried replacing vcpu_pin_set to cpu_shared_set and</span><br><span>cpu_dedicated_set to be able to pin emulator cpuset to 0-4 to no</span><br><span>avail. Multiple guests on a host can easily deplete resources and IOPS</span><br><span>drops.</span><br><span>5. Isolcpus are still in place and we deploy Ussuri with kolla-ansible</span><br><span>and Train (to limit the moving parts) with ceph-ansible both on</span><br><span>ubuntu-1804. Now we see 7K/7K read/write IOPS.</span><br><span>6. We destroy only the compute node and boot it with ubuntu-2004 with</span><br><span>isolcpus set. Add it back to the existing cluster and fio shows</span><br><span>slightly above 10K/10K read/write IOPS.</span><br><span></span><br><span></span><br><span>What we think happens:</span><br><span></span><br><span>1. 
Since isolcpus disables scheduling between given cpus, qemu process</span><br><span>and its threads are stuck at the same cpu which created the</span><br><span>bottleneck. They should be runnable on any given emulatorpin cpus.</span><br><span>2. Ussuri is more performant despite isolcpus, with the improvements</span><br><span>made over time.</span><br><span>3. Ubuntu-2004 is more performant despite isolcpus, with the</span><br><span>improvements made over time in the kernel.</span><br><span></span><br><span>Now the questions are:</span><br><span></span><br><span>1. What else are we missing here?</span><br><span>2. Are any of those assumptions false?</span><br><span>3. If all true, what can we do to solve this issue given that we</span><br><span>cannot upgrade openstack nor ceph on production overnight?</span><br><span>4. Has anyone dealt with this issue before?</span><br><span></span><br><span>We welcome any opinion and suggestions at this point as we need to</span><br><span>make sure that we are on the right path regarding the problem and</span><br><span>upgrade is not the only solution. Thanks in advance.</span><br><span></span><br><span></span><br><span></span><br><span>------------------------------</span><br><span></span><br><span>Message: 2</span><br><span>Date: Tue, 27 Dec 2022 07:00:30 -0800</span><br><span>From: Dave Wilde <dwilde@redhat.com></span><br><span>To: openstack-discuss@lists.openstack.org</span><br><span>Subject: [keystone][Meeting] Reminder Keystone meeting is cancelled</span><br><span>    today</span><br><span>Message-ID:</span><br><span>    <CAJEkJ+qYCHZayR-t9w4e=ZB846CZSrggcqdYKE85fWkj2XAo5w@mail.gmail.com></span><br><span>Content-Type: text/plain; charset="utf-8"</span><br><span></span><br><span>Just a quick reminder that there won't be the keystone weekly meeting</span><br><span>today. We'll resume our regularly scheduled programming on 03-Jan-2023.</span><br><span>Please update the agenda if you have anything you'd like to discuss. 
The</span><br><span>reviewathon is also cancelled this week, to be resumed on 06-Jan-2023.</span><br><span></span><br><span>/Dave</span><br><span>-------------- next part --------------</span><br><span>An HTML attachment was scrubbed...</span><br><span>URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20221227/6554e318/attachment-0001.htm></span><br><span></span><br><span>------------------------------</span><br><span></span><br><span>Message: 3</span><br><span>Date: Tue, 27 Dec 2022 17:40:47 +0200</span><br><span>From: Zakhar Kirpichenko <zakhar@gmail.com></span><br><span>To: openstack-discuss@lists.openstack.org</span><br><span>Subject: Nova libvirt/kvm sound device</span><br><span>Message-ID:</span><br><span>    <CAEw-OTX-3DR6YymKrp1B4-SRSi2DPSD=kr-OwHVtMB031C_FzA@mail.gmail.com></span><br><span>Content-Type: text/plain; charset="utf-8"</span><br><span></span><br><span>Hi!</span><br><span></span><br><span>I'd like to have the following configuration added to every guest on a</span><br><span>specific host managed by Nova and libvirt/kvm:</span><br><span></span><br><span>    <sound model='bla'></span><br><span>      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b'</span><br><span>function='0x0'/></span><br><span>    </sound></span><br><span></span><br><span>When I add the device manually to instance xml, it works as intended but</span><br><span>the instance configuration gets overwritten on instance stop/start or hard</span><br><span>reboot via Nova.</span><br><span></span><br><span>What is the currently supported / proper way to add a virtual sound device</span><br><span>without having to modify libvirt or Nova code? 
I would appreciate any</span><br><span>advice.</span><br><span></span><br><span>Best regards,</span><br><span>Zakhar</span><br><span>-------------- next part --------------</span><br><span>An HTML attachment was scrubbed...</span><br><span>URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20221227/90256d0f/attachment-0001.htm></span><br><span></span><br><span>------------------------------</span><br><span></span><br><span>Message: 4</span><br><span>Date: Wed, 28 Dec 2022 04:42:16 +0900</span><br><span>From: Yasufumi Ogawa <yasufum.o@gmail.com></span><br><span>To: manpreet kaur <kaurmanpreet2620@gmail.com></span><br><span>Cc: openstack-discuss <openstack-discuss@lists.openstack.org></span><br><span>Subject: Re: [Tacker][SRBAC] Update regarding implementation of</span><br><span>    project personas in Tacker</span><br><span>Message-ID: <5e4a7010-ffc7-83d9-e74b-ed6aae76c15c@gmail.com></span><br><span>Content-Type: text/plain; charset=UTF-8; format=flowed</span><br><span></span><br><span>Hi Manpreet-san,</span><br><span></span><br><span>Thanks for your notice. I've started to review and understood from </span><br><span>the suggestions on the etherpad that this change is considered </span><br><span>backward compatible. 
Although it's LGTM, I'd like to ask if Yuta has any comment </span><br><span>for the proposal because he has also proposed a policy management feature </span><br><span>in this release.</span><br><span></span><br><span>For the deadline, let us discuss again to reschedule it if we cannot </span><br><span>merge by the deadline.</span><br><span></span><br><span>Thanks,</span><br><span>Yasufumi</span><br><span></span><br><span>On 2022/12/26 14:07, manpreet kaur wrote:</span><br><blockquote type="cite"><span>Hi Ogawa san and Tacker team,</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>This mailer is regarding the SRBAC implementation happening in Tacker.</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>In the Tacker release 2023.1 virtual PTG [1], it was decided by the </span><br></blockquote><blockquote type="cite"><span>Tacker community to partially implement the project personas </span><br></blockquote><blockquote type="cite"><span>(project-reader role) in the current release. 
And in upcoming releases, </span><br></blockquote><blockquote type="cite"><span>we will implement the remaining project-member role.</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>To address the above requirement, I have prepared a specification [2] </span><br></blockquote><blockquote type="cite"><span>and pushed the same in Gerrit for community review.</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Ghanshyam san reviewed the specification and shared TC's opinion and </span><br></blockquote><blockquote type="cite"><span>suggestion to implement both the project-reader and project-member roles.</span><br></blockquote><blockquote type="cite"><span>The complete persona implementation will deprecate the 'owner' rule, </span><br></blockquote><blockquote type="cite"><span>and help in restricting any other role to accessing project-based resources.</span><br></blockquote><blockquote type="cite"><span>Additionally, intact legacy admin (current admin), works in the same way </span><br></blockquote><blockquote type="cite"><span>so that we do not break things and introduce the project personas which </span><br></blockquote><blockquote type="cite"><span>should be additional things to be available for operators to adopt.</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Current Status: Incorporated the new requirement and uploaded a new </span><br></blockquote><blockquote type="cite"><span>patch set to address the review comment.</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Note: The Tacker spec freeze date is 28th Dec 2022, there might be some </span><br></blockquote><blockquote type="cite"><span>delay in merging the specification in shared timelines.</span><br></blockquote><blockquote 
type="cite"><span>[1] https://etherpad.opendev.org/p/tacker-antelope-ptg#L186 </span><br></blockquote><blockquote type="cite"><span><https://etherpad.opendev.org/p/tacker-antelope-ptg#L186></span><br></blockquote><blockquote type="cite"><span>[2] https://review.opendev.org/c/openstack/tacker-specs/+/866956 </span><br></blockquote><blockquote type="cite"><span><https://review.opendev.org/c/openstack/tacker-specs/+/866956></span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Thanks & Regards,</span><br></blockquote><blockquote type="cite"><span>Manpreet Kaur</span><br></blockquote><span></span><br><span></span><br><span></span><br><span>------------------------------</span><br><span></span><br><span>Message: 5</span><br><span>Date: Wed, 28 Dec 2022 10:35:32 +0530</span><br><span>From: manpreet kaur <kaurmanpreet2620@gmail.com></span><br><span>To: Yasufumi Ogawa <yasufum.o@gmail.com></span><br><span>Cc: openstack-discuss <openstack-discuss@lists.openstack.org></span><br><span>Subject: Re: [Tacker][SRBAC] Update regarding implementation of</span><br><span>    project personas in Tacker</span><br><span>Message-ID:</span><br><span>    <CAFQfZj9uwyZWGfST14S8hiKH_pNjn_B1GvwaXEmfa3LRzXhJbA@mail.gmail.com></span><br><span>Content-Type: text/plain; charset="utf-8"</span><br><span></span><br><span>Hi Ogawa san,</span><br><span></span><br><span>Thanks for accepting the new RBAC proposal, please find the latest</span><br><span>patch-set 7 [1] as the final version.</span><br><span>Would try to merge the specification within the proposed timelines.</span><br><span></span><br><span>@Ghanshyam san,</span><br><span>Thanks for adding clarity to the proposed changes and for a quick review.</span><br><span></span><br><span>[1] https://review.opendev.org/c/openstack/tacker-specs/+/866956</span><br><span></span><br><span>Best Regards,</span><br><span>Manpreet Kaur</span><br><span></span><br><span>On Wed, Dec 28, 2022 at 1:12 AM 
Yasufumi Ogawa <yasufum.o@gmail.com> wrote:</span><br><span></span><br><span>-------------- next part --------------</span><br><span>An HTML 
attachment was scrubbed...</span><br><span>URL: <https://lists.openstack.org/pipermail/openstack-discuss/attachments/20221228/cd9d61c5/attachment.htm></span><br><span></span><br><span>------------------------------</span><br><span></span><br><span>Subject: Digest Footer</span><br><span></span><br><span>_______________________________________________</span><br><span>openstack-discuss mailing list</span><br><span>openstack-discuss@lists.openstack.org</span><br><span></span><br><span></span><br><span>------------------------------</span><br><span></span><br><span>End of openstack-discuss Digest, Vol 50, Issue 63</span><br><span>*************************************************</span><br></div></blockquote></div></div></body></html>