[Openstack] [Cinder] Help needed recovering cinder volumes
Father Vlasie
fv at spots.school
Thu Mar 22 00:28:44 UTC 2018
[root@plato ~]# lvdisplay
--- Logical volume ---
LV Name cinder-volumes-pool
VG Name cinder-volumes
LV UUID PEkGKb-fhAc-CJD2-uDDA-k911-SIX9-1uyvFo
LV Write Access read/write
LV Creation host, time plato, 2018-02-01 13:33:51 -0800
LV Pool metadata cinder-volumes-pool_tmeta
LV Pool data cinder-volumes-pool_tdata
LV Status NOT available
LV Size 9.50 TiB
Current LE 2490368
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/cinder-volumes/volume-8f4a5fff-749f-47fe-976f-6157f58a4d9e
LV Name volume-8f4a5fff-749f-47fe-976f-6157f58a4d9e
VG Name cinder-volumes
LV UUID C2o7UD-uqFp-3L3r-F0Ys-etjp-QBJr-idBhb0
LV Write Access read/write
LV Creation host, time plato, 2018-02-02 10:18:41 -0800
LV Pool name cinder-volumes-pool
LV Status NOT available
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/cinder-volumes/volume-6ad82e98-c8e2-4837-bffd-079cf76afbe3
LV Name volume-6ad82e98-c8e2-4837-bffd-079cf76afbe3
VG Name cinder-volumes
LV UUID qisf80-j4XV-PpFy-f7yt-ZpJS-99v0-m03Ql4
LV Write Access read/write
LV Creation host, time plato, 2018-02-02 10:26:46 -0800
LV Pool name cinder-volumes-pool
LV Status NOT available
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/cinder-volumes/volume-ee107488-2559-4116-aa7b-0da02fd5f693
LV Name volume-ee107488-2559-4116-aa7b-0da02fd5f693
VG Name cinder-volumes
LV UUID FS9Y2o-HYe2-HK03-yM0Z-P7GO-kAzD-cOYNTb
LV Write Access read/write
LV Creation host, time plato.spots.onsite, 2018-02-12 10:28:57 -0800
LV Pool name cinder-volumes-pool
LV Status NOT available
LV Size 40.00 GiB
Current LE 10240
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/cinder-volumes/volume-d6f0260d-21b5-43e7-afe5-84e0502fa734
LV Name volume-d6f0260d-21b5-43e7-afe5-84e0502fa734
VG Name cinder-volumes
LV UUID b6pX01-mOEH-3j3K-32NJ-OHsz-UMQe-y10vSM
LV Write Access read/write
LV Creation host, time plato.spots.onsite, 2018-02-14 14:24:41 -0800
LV Pool name cinder-volumes-pool
LV Status NOT available
LV Size 40.00 GiB
Current LE 10240
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/cinder-volumes/volume-a7bd0bc8-8cbc-4053-bdc2-2eb9bfb0f147
LV Name volume-a7bd0bc8-8cbc-4053-bdc2-2eb9bfb0f147
VG Name cinder-volumes
LV UUID T07JAE-3CNU-CpwN-BUEr-aAJG-VxP5-1qFYZz
LV Write Access read/write
LV Creation host, time plato.spots.onsite, 2018-03-12 10:33:24 -0700
LV Pool name cinder-volumes-pool
LV Status NOT available
LV Size 4.00 GiB
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/cinder-volumes/volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
LV Name volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
VG Name cinder-volumes
LV UUID IB0q1n-NnkR-tx5w-BbBu-LamG-jCbQ-mYXWyC
LV Write Access read/write
LV Creation host, time plato.spots.onsite, 2018-03-14 09:52:14 -0700
LV Pool name cinder-volumes-pool
LV Status NOT available
LV Size 40.00 GiB
Current LE 10240
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID nawE4n-dOHs-VsNH-f9hL-te05-mvGC-WoFQzv
LV Write Access read/write
LV Creation host, time localhost, 2018-01-22 09:50:38 -0800
LV Status available
# open 1
LV Size 50.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID Vvlni4-nwTl-ORwW-Gg8b-5y4h-kXJ5-T67cKU
LV Write Access read/write
LV Creation host, time localhost, 2018-01-22 09:50:38 -0800
LV Status available
# open 2
LV Size 8.12 GiB
Current LE 2080
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
--- Logical volume ---
LV Path /dev/centos/home
LV Name home
VG Name centos
LV UUID lCXJ7v-jeOC-DFKI-unXa-HUKx-9DXp-nmzSMg
LV Write Access read/write
LV Creation host, time localhost, 2018-01-22 09:50:39 -0800
LV Status available
# open 1
LV Size 964.67 GiB
Current LE 246956
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2
> On Mar 21, 2018, at 5:25 PM, remo at italy1.com wrote:
>
> Can you do an lvdisplay?
>
> Sent from my iPhone X
>
> On Mar 21, 2018, at 5:23 PM, Father Vlasie <fv at spots.school> wrote:
>
>> About 12TB altogether.
>>
>>> On Mar 21, 2018, at 5:21 PM, remo at italy1.com wrote:
>>>
>>> How much space do you have?
>>>
>>> Sent from my iPhone X
>>>
>>> On Mar 21, 2018, at 5:10 PM, Father Vlasie <fv at spots.school> wrote:
>>>
>>>> Yes, I agree, it does seem to be an LVM issue rather than a Cinder one. I will pursue that course.
>>>>
>>>> Thank you all for your help, it is fantastic having a support mailing list like this!
>>>>
>>>> FV
>>>>
>>>>> On Mar 21, 2018, at 4:45 AM, Vagner Farias <vfarias at redhat.com> wrote:
>>>>>
>>>>> It seems your LVM thin pool metadata is corrupt. I'm not familiar with this issue and can't guide you on how to fix it. Although this could have been caused by Cinder, it's an LVM issue, and if you don't get more answers here you may try a Linux-related forum.
>>>>>
>>>>> A quick search for "lvm2 thinpool metadata mismatch" turns up several possible causes and solution paths.
>>>>>
>>>>> I hope that helps.
>>>>>
>>>>> Vagner Farias
>>>>>
>>>>>
>>>>> On Tue, Mar 20, 2018 at 10:29 PM, Father Vlasie <fv at spots.school> wrote:
>>>>> Your help is much appreciated! Thank you.
>>>>>
>>>>> The cinder service is running on the controller node, and it is using a disk partition, not the loopback device; I changed the default configuration during the install with PackStack.
>>>>>
>>>>> [root@plato ~]# pvs
>>>>>   PV         VG             Fmt  Attr PSize    PFree
>>>>>   /dev/vda3  centos         lvm2 a--  1022.80g    4.00m
>>>>>   /dev/vdb1  cinder-volumes lvm2 a--   <10.00t <511.85g
>>>>>
>>>>> [root@plato ~]# lvchange -a y volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
>>>>> Volume group "volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5" not found
>>>>> Cannot process volume group volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
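The "Volume group ... not found" error above is a syntax problem rather than further corruption: `lvchange` parses a bare name as a VG name, so the logical volume has to be given in `vgname/lvname` form. A minimal sketch:

```shell
# lvchange takes vgname/lvname; a bare LV name is interpreted as a VG name.
lvchange -a y cinder-volumes/volume-29fa3b6d-1cbf-40db-82bb-1756c6fac9a5
```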
>>>>>
>>>>> [root@plato ~]# lvchange -a y cinder-volumes
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
>>>>> Thin pool cinder--volumes-cinder--volumes--pool-tpool (253:5) transaction_id is 0, while expected 72.
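For a transaction_id mismatch like the one above, the commonly suggested path is LVM's built-in thin-pool repair, which rebuilds the pool's metadata and swaps in the fixed copy. A hedged sketch only, not a guaranteed fix: it assumes free extents in the `cinder-volumes` VG for the repaired metadata LV, and the VG metadata should be backed up first.

```shell
# Sketch: back up VG metadata, repair the thin pool, then retry activation.
vgcfgbackup cinder-volumes                             # saves VG metadata under /etc/lvm/backup
lvconvert --repair cinder-volumes/cinder-volumes-pool  # rebuild the pool's metadata
vgchange -a y cinder-volumes                           # re-attempt activation of all LVs
lvs -a cinder-volumes                                  # verify pool and volume status
```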
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> > On Mar 20, 2018, at 6:05 PM, Vagner Farias <vfarias at redhat.com> wrote:
>>>>> >
>>>>> > Will "lvchange -a y lvname" activate it?
>>>>> >
>>>>> > If not, considering that you're using Pike on CentOS, there's a chance your cinder-volumes VG is backed by a loopback file. I guess both packstack & tripleo will configure this by default if you don't change the configuration. At least tripleo won't configure this loopback device to be activated automatically on boot. An option would be to include lines like the following in /etc/rc.d/rc.local:
>>>>> >
>>>>> > losetup /dev/loop0 /var/lib/cinder/cinder-volumes
>>>>> > vgscan
>>>>> >
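If the loopback setup does apply, the VG also needs to be activated once the PV is visible again; the rc.local lines above can be extended as a sketch (same device and file paths as assumed in the thread):

```shell
# Assumed paths from the thread: /dev/loop0 backing /var/lib/cinder/cinder-volumes.
losetup /dev/loop0 /var/lib/cinder/cinder-volumes
vgscan                        # rescan so LVM finds the PV on the loop device
vgchange -a y cinder-volumes  # activate the VG's logical volumes
```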
>>>>> > Last but not least, if this is actually the case, I wouldn't recommend using loopback devices with the LVM iSCSI driver. In fact, if you can use any other driver capable of delivering HA, it'd be better (unless this is some POC or an environment without tight SLAs).
>>>>> >
>>>>> > Vagner Farias
>>>>> >
>>>>> >
>>>>> > Em ter, 20 de mar de 2018 21:24, Father Vlasie <fv at spots.school <mailto:fv at spots.school>> escreveu:
>>>>> > Here is the output of lvdisplay:
>>>>> >
>>>>> >
>>>>> >
>>>>> > > On Mar 20, 2018, at 4:51 PM, Remo Mattei <Remo at Italy1.com> wrote:
>>>>> > >
>>>>> > > I think you need to provide a bit of additional info. Did you look at the logs? What OS version are you running? Etc.
>>>>> > >
>>>>> > > Sent from iPhone
>>>>> > >
>>>>> > >> On Mar 20, 2018, at 4:15 PM, Father Vlasie <fv at spots.school> wrote:
>>>>> > >>
>>>>> > >> Hello everyone,
>>>>> > >>
>>>>> > >> I am in need of help with my Cinder volumes, which have all become unavailable.
>>>>> > >>
>>>>> > >> Is there anyone who would be willing to log in to my system and have a look?
>>>>> > >>
>>>>> > >> My cinder volumes are listed as "NOT available" and my attempts to mount them have been in vain. I have tried: vgchange -a y
>>>>> > >>
>>>>> > >> with result showing as: 0 logical volume(s) in volume group "cinder-volumes" now active
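When `vgchange -a y` reports 0 active volumes, `lvs -a` is a useful first check: it lists the hidden thin-pool component LVs (`_tmeta`, `_tdata`) along with their attribute flags, which helps distinguish a merely inactive pool from one whose metadata LVM refuses to load. A sketch:

```shell
# List all LVs in the VG, including hidden pool components, with attributes.
lvs -a -o lv_name,vg_name,lv_attr,lv_size cinder-volumes
```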
>>>>> > >>
>>>>> > >> I am a bit desperate because some of the data is critical and, I am ashamed to say, I do not have a backup.
>>>>> > >>
>>>>> > >> Any help or suggestions would be very much appreciated.
>>>>> > >>
>>>>> > >> FV
>>>>> > >> _______________________________________________
>>>>> > >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>> > >> Post to : openstack at lists.openstack.org
>>>>> > >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>> > >
>>>>> >
>>>>> >
>>>>>
>>>>
>>