[Openstack] Adding new Hard disk to Compute Node
Jay See
jayachander.it at gmail.com
Thu Aug 9 18:34:17 UTC 2018
Hi Bernd Bausch,
Thanks for your help.
As you said, I am not completely familiar with all the underlying
concepts, but I am trying to learn. Thanks for pointing me in the right
direction.
Now I have achieved what I wanted: I followed your second suggestion,
after some more reading into LVM (as I am not yet completely familiar
with Linux).
Regarding your other suggestion about learning more Linux concepts, I
need to work on that as well, though not at the moment.
Thanks.
Jay.
On Thu, Aug 9, 2018 at 2:37 AM, Bernd Bausch <berndbausch at gmail.com> wrote:
> Your node uses logical volume *h020--vg-root* as its root filesystem.
> This logical volume has a size of 370GB:
>
> # lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
> NAME FSTYPE SIZE MOUNTPOINT LABEL
> (...)
> └─sdk5 LVM2_member 371.5G
> ├─h020--vg-root ext4 370.6G /
> └─h020--vg-swap_1 swap 976M [SWAP]
>
> Now you created another physical volume, */dev/sdb1*, and added it to
> volume group *h020-vg*. This increases the size of the *volume group*,
> but not the size of the *logical volume*.
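>
> A quick way to see the difference on the node:
>
> # vgs h020-vg   (VG size and free space; free space grows after vgextend)
> # lvs h020-vg   (per-LV sizes; unchanged until you run lvextend)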
>
> If you want to provide more space to instances' ephemeral storage, you
> could:
>
> - increase the size of the root volume *h020--vg-root* using the
> *lvextend* command, then grow the filesystem on it. I believe that
> this requires a reboot, since it's the root filesystem.
>
> or
>
> - create another logical volume, e.g.
>       lvcreate -L 1000G -n lv-instances h020-vg
> for a 1000GB logical volume, and mount it under */var/lib/nova/instances*:
>       mount /dev/h020-vg/lv-instances /var/lib/nova/instances
> (before mounting, create a filesystem on *lv-instances* and transfer
> the data from */var/lib/nova/instances* to the new filesystem; also,
> don't forget to persist the mount by adding it to */etc/fstab*)
>
> The second option is by far the better one, in my opinion, as you should
> separate operating system files from OpenStack data. A sketch of both
> options follows.
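>
> Something like this should work (an untested sketch; the sizes are
> placeholders, so adjust them to your node):
>
> # Option 1: grow the root LV and its ext4 filesystem in one step
> # (-r runs resize2fs after extending the LV)
> lvextend -r -L +1T /dev/h020-vg/root
>
> # Option 2: a dedicated LV for /var/lib/nova/instances
> service nova-compute stop
> lvcreate -L 1000G -n lv-instances h020-vg
> mkfs.ext4 /dev/h020-vg/lv-instances
> mount /dev/h020-vg/lv-instances /mnt
> cp -a /var/lib/nova/instances/. /mnt/    # copy data, preserving ownership
> umount /mnt
> echo '/dev/h020-vg/lv-instances /var/lib/nova/instances ext4 defaults 0 2' >> /etc/fstab
> mount /var/lib/nova/instances
> service nova-compute start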
>
> You say that you are new to OpenStack. That's fine, but you seem to be
> lacking the fundamentals of Linux system management as well. You can't
> learn OpenStack without a certain level of Linux skills. At least learn
> about LVM (it's not that hard) and filesystems. You will also need to have
> networking fundamentals and Linux networking tools under your belt.
>
> Good luck!
>
> Bernd Bausch
>
>
> On 8/9/2018 2:30 AM, Jay See wrote:
>
> Hi Eugen,
>
> Thanks for your suggestions. I went back and read more about adding the
> new HD to the VG, and I think it was successful (logs are at the end of
> this mail).
>
> Followed this link -
> https://www.howtoforge.com/logical-volume-manager-how-can-i-extend-a-volume-group
>
> But the nova-compute log still shows the wrong phys_disk size, and even
> Horizon is not updated with the new HD added to the compute node.
>
> 2018-08-08 19:22:56.671 3335 INFO nova.compute.resource_tracker
> [req-14a2b7e2-7703-4a75-9014-180eb26876ff - - - - -] Final resource view:
> name=h020 phys_ram=515767MB used_ram=512MB phys_disk=364GB used_disk=0GB
> total_vcpus=40 used_vcpus=0 pci_stats=[]
>
> I understand now that they are not supposed to be mounted under
> /var/lib/nova/instances, so I have removed those mounts.
>
> Thanks
> Jay.
>
>
> root at h020:~# vgdisplay
> --- Volume group ---
> VG Name h020-vg
> System ID
> Format lvm2
> Metadata Areas 1
> Metadata Sequence No 3
> VG Access read/write
> VG Status resizable
> MAX LV 0
> Cur LV 2
> Open LV 2
> Max PV 0
> Cur PV 1
> Act PV 1
> VG Size 371.52 GiB
> PE Size 4.00 MiB
> Total PE 95109
> Alloc PE / Size 95105 / 371.50 GiB
> Free PE / Size 4 / 16.00 MiB
> VG UUID 4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U
>
> root at h020:~# pvcreate /dev/sdb1
> Physical volume "/dev/sdb1" successfully created
> root at h020:~# pvdisplay
> --- Physical volume ---
> PV Name /dev/sdk5
> VG Name h020-vg
> PV Size 371.52 GiB / not usable 2.00 MiB
> Allocatable yes
> PE Size 4.00 MiB
> Total PE 95109
> Free PE 4
> Allocated PE 95105
> PV UUID BjGeac-TRkC-0gi8-GKX8-2Ivc-7awz-DTK2nR
>
> "/dev/sdb1" is a new physical volume of "5.46 TiB"
> --- NEW Physical volume ---
> PV Name /dev/sdb1
> VG Name
> PV Size 5.46 TiB
> Allocatable NO
> PE Size 0
> Total PE 0
> Free PE 0
> Allocated PE 0
> PV UUID CPp369-3MwJ-ic3I-Keh1-dJJY-Gcrc-CpC443
>
> root at h020:~# vgextend /dev/h020-vg /dev/sdb1
> Volume group "h020-vg" successfully extended
> root at h020:~# vgdisplay
> --- Volume group ---
> VG Name h020-vg
> System ID
> Format lvm2
> Metadata Areas 2
> Metadata Sequence No 4
> VG Access read/write
> VG Status resizable
> MAX LV 0
> Cur LV 2
> Open LV 2
> Max PV 0
> Cur PV 2
> Act PV 2
> VG Size 5.82 TiB
> PE Size 4.00 MiB
> Total PE 1525900
> Alloc PE / Size 95105 / 371.50 GiB
> Free PE / Size 1430795 / 5.46 TiB
> VG UUID 4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U
>
> root at h020:~# service nova-compute restart
> root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
> NAME FSTYPE SIZE MOUNTPOINT LABEL
> sda 5.5T
> ├─sda1 vfat 500M ESP
> ├─sda2 vfat 100M DIAGS
> └─sda3 vfat 2G OS
> sdb 5.5T
> └─sdb1 LVM2_member 5.5T
> sdk 372G
> ├─sdk1 ext2 487M /boot
> ├─sdk2 1K
> └─sdk5 LVM2_member 371.5G
> ├─h020--vg-root ext4 370.6G /
> └─h020--vg-swap_1 swap 976M [SWAP]
> root at h020:~# pvscan
> PV /dev/sdk5 VG h020-vg lvm2 [371.52 GiB / 16.00 MiB free]
> PV /dev/sdb1 VG h020-vg lvm2 [5.46 TiB / 5.46 TiB free]
> Total: 2 [5.82 TiB] / in use: 2 [5.82 TiB] / in no VG: 0 [0 ]
> root at h020:~# vgs
> VG #PV #LV #SN Attr VSize VFree
> h020-vg 2 2 0 wz--n- 5.82t 5.46t
> root at h020:~# vi /var/log/nova/nova-compute.log
> root at h020:~#
>
>
> On Wed, Aug 8, 2018 at 3:36 PM, Eugen Block <eblock at nde.ag> wrote:
>
>> Okay, I'm really not sure if I understand your setup correctly.
>>
>>> The server does not add them automatically; I tried to mount them. I
>>> tried the way discussed in the page with /dev/sdb only. The other hard
>>> disks I have mounted myself. Yes, I can see them in the lsblk output
>>> below.
>>>
>>
>> What do you mean by "tried with /dev/sdb"? I assume this is a fresh
>> setup and Cinder didn't work yet, am I right?
>> The new disks won't be added automatically to your cinder configuration,
>> if that's what you expected. You'll have to create new physical volumes and
>> then extend the existing VG to use new disks.
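>>
>> A minimal sketch of those two steps (assuming the VG is named
>> cinder-volumes, as in the install guide, and the new disk is /dev/sdb1;
>> adjust the names to your setup):
>>
>> pvcreate /dev/sdb1                  # initialize the partition as an LVM PV
>> vgextend cinder-volumes /dev/sdb1   # add the PV to the existing VG
>> vgs cinder-volumes                  # verify the VG now shows the free space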
>>
>>> In the nova-compute logs I can only see the main hard disk counted in
>>> the total phys_disk; it was supposed to show at least 5.8 TB more
>>> phys_disk available if only /dev/sdb is added, as per my understanding.
>>> (Maybe I am thinking about it the wrong way; I want to increase my
>>> compute node's disk space to launch more VMs.)
>>>
>>
>> If you plan to use cinder volumes as disks for your instances, you don't
>> need much space in /var/lib/nova/instances but more space available for
>> cinder, so you'll need to grow the VG.
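>>
>> For example (a rough sketch, assuming the cinder-volumes thin pool from
>> your lsblk output and a hypothetical new partition /dev/sdX1):
>>
>> pvcreate /dev/sdX1
>> vgextend cinder-volumes /dev/sdX1
>> lvextend -L +5T cinder-volumes/cinder-volumes-pool   # grow the thin pool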
>>
>> Regards
>>
>>
>> Quoting Jay See <jayachander.it at gmail.com>:
>>
>>> Hi,
>>>
>>> Thanks for the quick response.
>>>
>>> - what do you mean by "disks are not added"? Does the server recognize
>>> them? Do you see them in the output of "lsblk"?
>>> The server does not add them automatically; I tried to mount them. I
>>> tried the way discussed in the page with /dev/sdb only. The other hard
>>> disks I have mounted myself. Yes, I can see them in the lsblk output
>>> below:
>>> root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
>>> NAME FSTYPE SIZE MOUNTPOINT LABEL
>>> sda 5.5T
>>> ├─sda1 vfat 500M ESP
>>> ├─sda2 vfat 100M DIAGS
>>> └─sda3 vfat 2G OS
>>> sdb 5.5T
>>> └─sdb1 5.5T
>>>   ├─cinder--volumes-cinder--volumes--pool_tmeta 84M
>>>   │ └─cinder--volumes-cinder--volumes--pool 5.2T
>>>   └─cinder--volumes-cinder--volumes--pool_tdata 5.2T
>>>     └─cinder--volumes-cinder--volumes--pool 5.2T
>>> sdc 5.5T
>>> └─sdc1 xfs 5.5T
>>> sdd 5.5T
>>> └─sdd1 xfs 5.5T /var/lib/nova/instances/sdd1
>>> sde 5.5T
>>> └─sde1 xfs 5.5T /var/lib/nova/instances/sde1
>>> sdf 5.5T
>>> └─sdf1 xfs 5.5T /var/lib/nova/instances/sdf1
>>> sdg 5.5T
>>> └─sdg1 xfs 5.5T /var/lib/nova/instances/sdg1
>>> sdh 5.5T
>>> └─sdh1 xfs 5.5T /var/lib/nova/instances/sdh1
>>> sdi 5.5T
>>> └─sdi1 xfs 5.5T /var/lib/nova/instances/sdi1
>>> sdj 5.5T
>>> └─sdj1 xfs 5.5T /var/lib/nova/instances/sdj1
>>> sdk 372G
>>> ├─sdk1 ext2 487M /boot
>>> ├─sdk2 1K
>>> └─sdk5 LVM2_member 371.5G
>>>   ├─h020--vg-root ext4 370.6G /
>>>   └─h020--vg-swap_1 swap 976M [SWAP]
>>>
>>> - Do you already have existing physical volumes for cinder (assuming you
>>> deployed cinder with lvm as in the provided link)?
>>> Yes, I have tried with one of the HDs (/dev/sdb).
>>>
>>> - If the system recognizes the new disks and you deployed cinder with lvm
>>> you can create a new physical volume and extend your existing volume
>>> group
>>> to have more space for cinder. Is this a failing step or something else?
>>> The system does not recognize the disks automatically; I have manually
>>> mounted them or added them to cinder.
>>>
>>> In the nova-compute logs I can only see the main hard disk counted in
>>> the total phys_disk; it was supposed to show at least 5.8 TB more
>>> phys_disk available if only /dev/sdb is added, as per my understanding.
>>> (Maybe I am thinking about it the wrong way; I want to increase my
>>> compute node's disk space to launch more VMs.)
>>>
>>> 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker
>>> [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] Final resource view:
>>> name=h020 phys_ram=515767MB used_ram=512MB phys_disk=364GB used_disk=0GB
>>> total_vcpus=40 used_vcpus=0 pci_stats=[]
>>>
>>> - Please describe more precisely what exactly you tried and what exactly
>>> fails.
>>> As explained in the previous point, I want to increase the phys_disk
>>> size to use the compute node more efficiently. So, to add all the HDs
>>> to the compute node, I am installing cinder on it.
>>>
>>> I might be doing something wrong.
>>>
>>> Thanks and Regards,
>>> Jayachander.
>>>
>>> On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block <eblock at nde.ag> wrote:
>>>
>>> Hi,
>>>>
>>>> there are a couple of questions rising up:
>>>>
>>>> - what do you mean by "disks are not added"? Does the server recognize
>>>> them? Do you see them in the output of "lsblk"?
>>>> - Do you already have existing physical volumes for cinder (assuming you
>>>> deployed cinder with lvm as in the provided link)?
>>>> - If the system recognizes the new disks and you deployed cinder with
>>>> lvm
>>>> you can create a new physical volume and extend your existing volume
>>>> group
>>>> to have more space for cinder. Is this a failing step or something else?
>>>> - Please describe more precisely what exactly you tried and what exactly
>>>> fails.
>>>>
>>>> The failing neutron-l3-agent shouldn't have anything to do with your
>>>> disk layout, so it's probably something else.
>>>>
>>>> Regards,
>>>> Eugen
>>>>
>>>>
>>>> Quoting Jay See <jayachander.it at gmail.com>:
>>>>
>>>>> Hi,
>>>>
>>>>>
>>>>> I am installing OpenStack Queens on Ubuntu Server.
>>>>>
>>>>> My server has extra hard disks apart from the main hard disk where the
>>>>> OS (Ubuntu) is running.
>>>>>
>>>>> (https://docs.openstack.org/cinder/queens/install/cinder-storage-install-ubuntu.html)
>>>>> As suggested in the cinder guide (above link), I have been trying to
>>>>> add the new hard disk, but the other hard disks are not getting added.
>>>>>
>>>>> Can anyone tell me what I am missing to add these hard disks?
>>>>>
>>>>> Other info: neutron-l3-agent on the controller is not running. Is it
>>>>> related to this issue? I am thinking it is not.
>>>>>
>>>>> I am new to OpenStack.
>>>>>
>>>>> ~ Jayachander.
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
--
P SAVE PAPER – Please do not print this e-mail unless absolutely
necessary.