<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">RDO PackStack<div class=""><br class=""></div><div class=""><a href="https://www.rdoproject.org/install/packstack/" class="">https://www.rdoproject.org/install/packstack/</a></div><div class=""><br class=""></div><div class=""><div><br class=""><blockquote type="cite" class=""><div class="">On Mar 20, 2018, at 9:35 PM, <a href="mailto:remo@italy1.com" class="">remo@italy1.com</a> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class="">How did you install OpenStack? <br class=""><br class="">Sent from my iPhone X <br class=""><br class=""><blockquote type="cite" class="">On 20 Mar 2018, at 18:29, Father Vlasie <<a href="mailto:fv@spots.school" class="">fv@spots.school</a>> wrote:<br class=""><br class="">[root@plato ~]# pcs status<br class="">-bash: pcs: command not found<br class=""><br class=""><br class=""><blockquote type="cite" class="">On Mar 20, 2018, at 6:28 PM, Remo Mattei <<a href="mailto:Remo@italy1.com" class="">Remo@italy1.com</a>> wrote:<br class=""><br class="">Looks like your pacemaker is not running; check that out! <br class=""><br class="">sudo pcs status <br class=""><br class=""><blockquote type="cite" class="">On Mar 20, 2018, at 6:24 PM, Father Vlasie <<a href="mailto:fv@spots.school" class="">fv@spots.school</a>> wrote:<br class=""><br class="">Your help is much appreciated! 
Thank you.<br class=""><br class="">The cinder service is running on the controller node, and it is using a disk partition, not the loopback device; I changed the default configuration during install with PackStack.<br class=""><br class="">[root@plato ~]# pvs<br class="">PV VG Fmt Attr PSize PFree <br class="">/dev/vda3 centos lvm2 a-- 1022.80g 4.00m<br class="">/dev/vdb1 cinder-volumes lvm2 a-- <10.00t <511.85g<br class=""><br class="">[root@plato ~]# lvchange -a y volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5<br class="">Volume group "volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5" not found<br class="">Cannot process volume group volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5<br class=""><br class="">[root@plato ~]# lvchange -a y cinder-volumes<br class="">Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.<br class="">Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.<br class="">Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.<br class="">Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.<br class="">Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.<br class="">Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.<br class="">Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.<br class=""><br class=""><br class=""><br class=""><br class=""><blockquote type="cite" class="">On Mar 20, 2018, at 6:05 PM, Vagner Farias <<a href="mailto:vfarias@redhat.com" class="">vfarias@redhat.com</a>> wrote:<br class=""><br class="">Will "lvchange -a y lvname" activate it?<br class=""><br class="">If not, considering that you're using Pike on Centos, there's a chance you may be using the cinder-volumes backed by a loopback file. 
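A note on the "transaction_id is 0, while expected 72" messages above: they usually mean the thin pool's on-disk metadata no longer matches what LVM expects, and activation will keep failing until the metadata is repaired. One possible recovery sequence (only a sketch, assuming the pool is cinder-volumes/cinder-volumes-pool as the lvdisplay output shows; a metadata repair is not risk-free, so image /dev/vdb1 first if the data matters) would be:

```shell
# Sketch only: attempt a thin-pool metadata repair on the pool named above.
# lvconvert --repair rebuilds the pool metadata into a fresh metadata LV.
# Take a full image/backup of the PV (/dev/vdb1 here) before running this.
vgchange -an cinder-volumes                             # deactivate the volume group
lvconvert --repair cinder-volumes/cinder-volumes-pool   # rebuild the thin-pool metadata
vgchange -ay cinder-volumes                             # retry activation
lvs -a cinder-volumes                                   # check that the LVs now show as active
```

If the repair succeeds, the old metadata is kept in a spare LV so it can still be inspected with thin_dump afterwards.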
I guess both packstack & tripleo will configure this by default if you don't change the configuration. At least tripleo won't configure this loopback device to be activated automatically on boot. An option would be to include lines like the following in /etc/rc.d/rc.local:<br class=""><br class="">losetup /dev/loop0 /var/lib/cinder/cinder-volumes<br class="">vgscan<br class=""><br class="">Last but not least, if this is actually the case, I wouldn't recommend using loopback devices for the LVM iSCSI driver. In fact, if you can use any other driver capable of delivering HA, it'd be better (unless this is some POC or an environment without tight SLAs). <br class=""><br class="">Vagner Farias<br class=""><br class=""><br class="">On Tue, 20 Mar 2018, 21:24, Father Vlasie <<a href="mailto:fv@spots.school" class="">fv@spots.school</a>> wrote:<br class="">Here is the output of lvdisplay:<br class=""><br class="">[root@plato ~]# lvdisplay<br class="">--- Logical volume ---<br class="">LV Name cinder-volumes-pool<br class="">VG Name cinder-volumes<br class="">LV UUID PEkGKb-fhAc-CJD2-uDDA-k911-SIX9-1uyvFo<br class="">LV Write Access read/write<br class="">LV Creation host, time plato, 2018-02-01 13:33:51 -0800<br class="">LV Pool metadata cinder-volumes-pool_tmeta<br class="">LV Pool data cinder-volumes-pool_tdata<br class="">LV Status NOT available<br class="">LV Size 9.50 TiB<br class="">Current LE 2490368<br class="">Segments 1<br class="">Allocation inherit<br class="">Read ahead sectors auto<br class=""><br class="">--- Logical volume ---<br class="">LV Path /dev/cinder-volumes/volume-8f4a5fff-749f-47fe-976f-6157f58a4d9e<br class="">LV Name volume-8f4a5fff-749f-47fe-976f-6157f58a4d9e<br class="">VG Name cinder-volumes<br class="">LV UUID C2o7UD-uqFp-3L3r-F0Ys-etjp-QBJr-idBhb0<br class="">LV Write Access read/write<br class="">LV Creation host, time plato, 2018-02-02 10:18:41 -0800<br class="">LV Pool name cinder-volumes-pool<br class="">LV Status NOT available<br 
class="">LV Size 1.00 GiB<br class="">Current LE 256<br class="">Segments 1<br class="">Allocation inherit<br class="">Read ahead sectors auto<br class=""><br class="">--- Logical volume ---<br class="">LV Path /dev/cinder-volumes/volume-6ad82e98-c8e2-4837-bffd-079cf76afbe3<br class="">LV Name volume-6ad82e98-c8e2-4837-bffd-079cf76afbe3<br class="">VG Name cinder-volumes<br class="">LV UUID qisf80-j4XV-PpFy-f7yt-ZpJS-99v0-m03Ql4<br class="">LV Write Access read/write<br class="">LV Creation host, time plato, 2018-02-02 10:26:46 -0800<br class="">LV Pool name cinder-volumes-pool<br class="">LV Status NOT available<br class="">LV Size 1.00 GiB<br class="">Current LE 256<br class="">Segments 1<br class="">Allocation inherit<br class="">Read ahead sectors auto<br class=""><br class="">--- Logical volume ---<br class="">LV Path /dev/cinder-volumes/volume-ee107488-2559-4116-aa7b-0da02fd5f693<br class="">LV Name volume-ee107488-2559-4116-aa7b-0da02fd5f693<br class="">VG Name cinder-volumes<br class="">LV UUID FS9Y2o-HYe2-HK03-yM0Z-P7GO-kAzD-cOYNTb<br class="">LV Write Access read/write<br class="">LV Creation host, time plato.spots.onsite, 2018-02-12 10:28:57 -0800<br class="">LV Pool name cinder-volumes-pool<br class="">LV Status NOT available<br class="">LV Size 40.00 GiB<br class="">Current LE 10240<br class="">Segments 1<br class="">Allocation inherit<br class="">Read ahead sectors auto<br class=""><br class="">--- Logical volume ---<br class="">LV Path /dev/cinder-volumes/volume-d6f0260d-21b5-43e7-afe5-84e0502fa734<br class="">LV Name volume-d6f0260d-21b5-43e7-afe5-84e0502fa734<br class="">VG Name cinder-volumes<br class="">LV UUID b6pX01-mOEH-3j3K-32NJ-OHsz-UMQe-y10vSM<br class="">LV Write Access read/write<br class="">LV Creation host, time plato.spots.onsite, 2018-02-14 14:24:41 -0800<br class="">LV Pool name cinder-volumes-pool<br class="">LV Status NOT available<br class="">LV Size 40.00 GiB<br class="">Current LE 10240<br class="">Segments 1<br 
class="">Allocation inherit<br class="">Read ahead sectors auto<br class=""><br class="">--- Logical volume ---<br class="">LV Path /dev/cinder-volumes/volume-a7bd0bc8-8cbc-4053-bdc2-2eb9bfb0f147<br class="">LV Name volume-a7bd0bc8-8cbc-4053-bdc2-2eb9bfb0f147<br class="">VG Name cinder-volumes<br class="">LV UUID T07JAE-3CNU-CpwN-BUEr-aAJG-VxP5-1qFYZz<br class="">LV Write Access read/write<br class="">LV Creation host, time plato.spots.onsite, 2018-03-12 10:33:24 -0700<br class="">LV Pool name cinder-volumes-pool<br class="">LV Status NOT available<br class="">LV Size 4.00 GiB<br class="">Current LE 1024<br class="">Segments 1<br class="">Allocation inherit<br class="">Read ahead sectors auto<br class=""><br class="">--- Logical volume ---<br class="">LV Path /dev/cinder-volumes/volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5<br class="">LV Name volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5<br class="">VG Name cinder-volumes<br class="">LV UUID IB0q1n-NnkR-tx5w-BbBu-LamG-jCbQ-mYXWyC<br class="">LV Write Access read/write<br class="">LV Creation host, time plato.spots.onsite, 2018-03-14 09:52:14 -0700<br class="">LV Pool name cinder-volumes-pool<br class="">LV Status NOT available<br class="">LV Size 40.00 GiB<br class="">Current LE 10240<br class="">Segments 1<br class="">Allocation inherit<br class="">Read ahead sectors auto<br class=""><br class="">--- Logical volume ---<br class="">LV Path /dev/centos/root<br class="">LV Name root<br class="">VG Name centos<br class="">LV UUID nawE4n-dOHs-VsNH-f9hL-te05-mvGC-WoFQzv<br class="">LV Write Access read/write<br class="">LV Creation host, time localhost, 2018-01-22 09:50:38 -0800<br class="">LV Status available<br class=""># open 1<br class="">LV Size 50.00 GiB<br class="">Current LE 12800<br class="">Segments 1<br class="">Allocation inherit<br class="">Read ahead sectors auto<br class="">- currently set to 8192<br class="">Block device 253:0<br class=""><br class="">--- Logical volume ---<br class="">LV Path 
/dev/centos/swap<br class="">LV Name swap<br class="">VG Name centos<br class="">LV UUID Vvlni4-nwTl-ORwW-Gg8b-5y4h-kXJ5-T67cKU<br class="">LV Write Access read/write<br class="">LV Creation host, time localhost, 2018-01-22 09:50:38 -0800<br class="">LV Status available<br class=""># open 2<br class="">LV Size 8.12 GiB<br class="">Current LE 2080<br class="">Segments 1<br class="">Allocation inherit<br class="">Read ahead sectors auto<br class="">- currently set to 8192<br class="">Block device 253:1<br class=""><br class="">--- Logical volume ---<br class="">LV Path /dev/centos/home<br class="">LV Name home<br class="">VG Name centos<br class="">LV UUID lCXJ7v-jeOC-DFKI-unXa-HUKx-9DXp-nmzSMg<br class="">LV Write Access read/write<br class="">LV Creation host, time localhost, 2018-01-22 09:50:39 -0800<br class="">LV Status available<br class=""># open 1<br class="">LV Size 964.67 GiB<br class="">Current LE 246956<br class="">Segments 1<br class="">Allocation inherit<br class="">Read ahead sectors auto<br class="">- currently set to 8192<br class="">Block device 253:2<br class=""><br class=""><br class=""><blockquote type="cite" class="">On Mar 20, 2018, at 4:51 PM, Remo Mattei <<a href="mailto:Remo@Italy1.com" class="">Remo@Italy1.com</a>> wrote:<br class=""><br class="">I think you need to provide a bit of additional info. Did you look at the logs? What version of os are you running? 
Etc.<br class=""><br class="">Sent from my iPhone<br class=""><br class=""><blockquote type="cite" class="">On 20 Mar 2018, at 16:15, Father Vlasie <<a href="mailto:fv@spots.school" class="">fv@spots.school</a>> wrote:<br class=""><br class="">Hello everyone,<br class=""><br class="">I am in need of help with my Cinder volumes, which have all become unavailable.<br class=""><br class="">Is there anyone who would be willing to log in to my system and have a look?<br class=""><br class="">My cinder volumes are listed as "NOT available" and my attempts to mount them have been in vain. I have tried: vgchange -a y<br class=""><br class="">with result showing as: 0 logical volume(s) in volume group "cinder-volumes" now active<br class=""><br class="">I am a bit desperate because some of the data is critical and, I am ashamed to say, I do not have a backup.<br class=""><br class="">Any help or suggestions would be very much appreciated.<br class=""><br class="">FV<br class="">_______________________________________________<br class="">Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" class="">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br class="">Post to : <a href="mailto:openstack@lists.openstack.org" class="">openstack@lists.openstack.org</a><br class="">Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" class="">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br class=""></blockquote><br class=""></blockquote><br class=""><br class=""></blockquote><br class=""></blockquote><br class=""></blockquote><br class=""></blockquote></div></div></blockquote></div><br class=""></div></body></html>