Hey Saverio,

We currently implement it by setting images_type=lvm under [libvirt] in nova.conf on the hypervisors that have the LVM volume group on top of the RAID 0, and then providing different flavors (e1.* versus the default m1.* flavors) that launch instances on a host aggregate made up of the LVM-hosting hypervisors. I suspect this is similar to what you use; a rough sketch of the wiring is at the end of this reply, just above your quoted mail.

The advantage is that it was very simple to implement and it guarantees that the volume is on the same hypervisor as the instance. The disadvantages are probably things you've also experienced:

- no quota management, because Nova considers it local storage (Warren Wang and I have complained about this in separate postings to this ML)
- no way to create additional volumes on the LVM after instance launch, because the volumes aren't managed by Cinder

Our users like it because they've figured out these LVM volumes are exempt from quota management, and because it's fast; our most active hypervisors on any given cluster are invariably the LVM ones. Users have also gotten lucky so far in that not a single RAID 0 has failed in the six months since we began deploying this solution, so there's probably a gap between perceived and actual expected reliability.

I have begun thinking about ways of improving this system so as to bring these volumes under the control of Cinder, but I have not come up with anything that I think would actually work. We discarded iSCSI because of the administrative overhead (who really wants to manage iSCSI?) and because it would negate the automatic forced locality: the whole point of the design was to provide the fastest block storage possible, and if iSCSI traffic goes over the storage network and competes with Ceph traffic, you add network latency, Ceph performance degrades, and nobody's happy. I could possibly add cinder-volume to all the LVM hypervisors and register each one as its own Cinder AZ, but I'm not sure Nova would create the volume in the right AZ when scheduling an instance, and it would also break the fourth wall by letting users know which hypervisor is hosting their instance. (A very rough, untested sketch of that idea is at the bottom of this mail, below your quoted message.)
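Here is the rough shape of the nova.conf/flavor wiring I described above. This is a sketch rather than a paste from our config: the volume group, aggregate, host, and extra-spec names are illustrative, and it assumes AggregateInstanceExtraSpecsFilter is enabled in the scheduler.

    # nova.conf on the LVM-backed hypervisors
    [libvirt]
    images_type = lvm
    images_volume_group = nova_vg    # VG created on top of the RAID 0

    # Host aggregate + flavor wiring (illustrative names)
    nova aggregate-create lvm-local
    nova aggregate-set-metadata lvm-local storagetype=lvm
    nova aggregate-add-host lvm-local compute-lvm-01
    nova flavor-key e1.medium set aggregate_instance_extra_specs:storagetype=lvm

That's the basic shape: the e1.* flavors carry the extra spec, and the scheduler only matches them against hosts in the aggregate.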
From: zioproto@gmail.com
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes

> > In our environments, we offer two types of storage. Tenants can either use
> > Ceph/RBD and trade speed/latency for reliability and protection against
> > physical disk failures, or they can launch instances that are realized as
> > LVs on an LVM VG that we create on top of a RAID 0 spanning all but the OS
> > disk on the hypervisor. This lets the users elect to go all-in on speed and
> [..CUT..]
>
> Hello Ned,
>
> how do you implement this? What is the user experience of having two
> types of storage like?
>
> We generally have Ceph/RBD as the storage backend; however, we have a use
> case where we need LVM because latency is important.
>
> To cope with our use case we have different flavors: by setting a flavor
> key on a specific flavor you can force the VM to be scheduled to a
> specific host aggregate. We then have one host aggregate for hypervisors
> supporting the LVM storage and another host aggregate for hypervisors
> running the default Ceph/RBD backend.
>
> However, let's say the user just creates a Cinder volume in Horizon.
> In this case the volume is created on Ceph/RBD. Is there a solution to
> support multiple storage backends at the same time and let the user
> decide in Horizon which one to use?
>
> Thanks.
>
> Saverio
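For what it's worth, here is roughly what I imagine the per-hypervisor cinder-volume idea above would look like on the Cinder side. This is an untested sketch, not something we run: the backend name, VG name, and AZ naming are made up, and the open questions I mentioned (whether Nova would pick the matching AZ, and the fact that, as far as I know, the stock LVM driver would still front the VG with an iSCSI target) are exactly what has kept me from trying it.

    # cinder.conf on each LVM hypervisor (untested sketch; names are made up)
    [DEFAULT]
    enabled_backends = lvm_local
    storage_availability_zone = lvm-compute-01   # one AZ per hypervisor

    [lvm_local]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = nova_vg                       # the VG on the RAID 0
    volume_backend_name = LVM_LOCAL

Getting Nova to create boot volumes in the matching AZ (cross_az_attach and friends) is the part I haven't convinced myself actually works.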