[Openstack] Adding new Hard disk to Compute Node

Jay See jayachander.it at gmail.com
Wed Aug 8 17:30:38 UTC 2018


Hai Eugen,

Thanks for your suggestions. I went back and read up on adding the new HD to
the VG, and I think it was successful. (Logs are at the end of the mail.)

I followed this link:
https://www.howtoforge.com/logical-volume-manager-how-can-i-extend-a-volume-group

But the nova-compute log still shows the wrong phys_disk size, and Horizon
also doesn't reflect the new HD added to the compute node.

2018-08-08 19:22:56.671 3335 INFO nova.compute.resource_tracker
[req-14a2b7e2-7703-4a75-9014-180eb26876ff - - - - -] Final resource view:
name=h020 phys_ram=515767MB used_ram=512MB *phys_disk=364GB *used_disk=0GB
total_vcpus=40 used_vcpus=0 pci_stats=[]

I understood they are not supposed to be mounted under /var/lib/nova/instances,
so I have removed those mounts now.
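
As far as I can tell, phys_disk is taken from the filesystem that backs nova's
instances_path (/var/lib/nova/instances by default), so free space that only
exists in the VG won't show up there. If the extra space is meant for nova
ephemeral disks rather than cinder volumes, a rough, untested sketch would be
to grow the root LV and its filesystem (assuming /var/lib/nova/instances lives
on h020--vg-root, as lsblk suggests), for example:

root at h020:~# lvextend -l +100%FREE /dev/h020-vg/root    # hand the free PEs to the root LV
root at h020:~# resize2fs /dev/h020-vg/root                # grow the ext4 filesystem online
root at h020:~# service nova-compute restart               # let the resource tracker re-read the disk size

If the space is meant for cinder instead (as Eugen suggests), the VG free
space should be left alone and the LV not extended.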

Thanks
Jay.


root at h020:~# vgdisplay
  --- Volume group ---
  *VG Name               h020-vg*
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               371.52 GiB
  PE Size               4.00 MiB
  Total PE              95109
*  Alloc PE / Size       95105 / 371.50 GiB*
*  Free  PE / Size       4 / 16.00 MiB*
  VG UUID               4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U

root at h020:~# pvcreate */dev/sdb1*
  Physical volume "/dev/sdb1" successfully created
root at h020:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdk5
  VG Name               h020-vg
  PV Size               371.52 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95109
  Free PE               4
  Allocated PE          95105
  PV UUID               BjGeac-TRkC-0gi8-GKX8-2Ivc-7awz-DTK2nR

  "/dev/sdb1" is a new physical volume of "5.46 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               5.46 TiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               CPp369-3MwJ-ic3I-Keh1-dJJY-Gcrc-CpC443

root at h020:~# vgextend /dev/h020-vg /dev/sdb1
  Volume group "h020-vg" successfully extended
root at h020:~# vgdisplay
  --- Volume group ---
  VG Name               h020-vg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               5.82 TiB
  PE Size               4.00 MiB
  Total PE              1525900
*  Alloc PE / Size       95105 / 371.50 GiB*
*  Free  PE / Size       1430795 / 5.46 TiB*
  VG UUID               4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U

root at h020:~# service nova-compute restart
root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME                FSTYPE        SIZE MOUNTPOINT LABEL
sda                               5.5T
├─sda1              vfat          500M            ESP
├─sda2              vfat          100M            DIAGS
└─sda3              vfat            2G            OS
sdb                               5.5T
└─sdb1              LVM2_member   5.5T
sdk                               372G
├─sdk1              ext2          487M /boot
├─sdk2                              1K
└─sdk5              LVM2_member 371.5G
  ├─h020--vg-root   ext4        370.6G /
  └─h020--vg-swap_1 swap          976M [SWAP]
root at h020:~# pvscan
  PV /dev/sdk5   VG h020-vg         lvm2 [371.52 GiB / 16.00 MiB free]
  PV /dev/sdb1   VG h020-vg         lvm2 [5.46 TiB / 5.46 TiB free]
  Total: 2 [5.82 TiB] / in use: 2 [5.82 TiB] / in no VG: 0 [0   ]
root at h020:~# vgs
  VG      #PV #LV #SN Attr   VSize VFree
  h020-vg   2   2   0 wz--n- 5.82t 5.46t
root at h020:~# vi /var/log/nova/nova-compute.log
root at h020:~#
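
One thing worth double-checking (an assumption on my side): the new PV went
into h020-vg, the root VG, while the guide linked earlier configures cinder's
LVM backend against its own dedicated volume group (volume_group =
cinder-volumes in cinder.conf). If cinder-volume is pointed at a different VG,
the extra 5.46 TiB in h020-vg won't be visible to cinder either. A quick way
to check which VG cinder is actually configured for and whether that VG has
the free space:

root at h020:~# grep -E '^(enabled_backends|volume_group)' /etc/cinder/cinder.conf    # backend and VG cinder uses
root at h020:~# vgs                                                                    # confirm that VG shows the expected VFree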


On Wed, Aug 8, 2018 at 3:36 PM, Eugen Block <eblock at nde.ag> wrote:

> Okay, I'm really not sure if I understand your setup correctly.
>
>> The server does not add them automatically; I tried to mount them. I tried
>> the way discussed on the page with /dev/sdb only. The other hard disks I
>> have mounted myself. Yes, I can see them in the lsblk output below:
>>
>
> What do you mean by "tried with /dev/sdb"? I assume this is a fresh
> setup and Cinder didn't work yet, am I right?
> The new disks won't be added automatically to your cinder configuration,
> if that's what you expected. You'll have to create new physical volumes and
> then extend the existing VG to use new disks.
>
>> In the nova-compute logs I can only see the main hard disk counted in the
>> total phys_disk; it was supposed to show more phys_disk available, at least
>> 5.8 TB, if only /dev/sdb is added, as per my understanding. (Maybe I am
>> thinking about it the wrong way; I want to increase my compute node disk
>> size to launch more VMs.)
>>
>
> If you plan to use cinder volumes as disks for your instances, you don't
> need much space in /var/lib/nova/instances but more space available for
> cinder, so you'll need to grow the VG.
>
> Regards
>
>
> Zitat von Jay See <jayachander.it at gmail.com>:
>
> Hai,
>>
>> Thanks for a quick response.
>>
>> - what do you mean by "disks are not added"? Does the server recognize
>> them? Do you see them in the output of "lsblk"?
>> The server does not add them automatically; I tried to mount them. I tried
>> the way discussed on the page with /dev/sdb only. The other hard disks I
>> have mounted myself. Yes, I can see them in the lsblk output below:
>> root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
>> NAME                                          FSTYPE        SIZE MOUNTPOINT                   LABEL
>> sda                                                         5.5T
>> ├─sda1                                        vfat          500M                              ESP
>> ├─sda2                                        vfat          100M                              DIAGS
>> └─sda3                                        vfat            2G                              OS
>> sdb                                                         5.5T
>> ├─sdb1                                                      5.5T
>> ├─cinder--volumes-cinder--volumes--pool_tmeta                84M
>> │ └─cinder--volumes-cinder--volumes--pool                   5.2T
>> └─cinder--volumes-cinder--volumes--pool_tdata               5.2T
>>   └─cinder--volumes-cinder--volumes--pool                   5.2T
>> sdc                                                         5.5T
>> └─sdc1                                        xfs           5.5T
>> sdd                                                         5.5T
>> └─sdd1                                        xfs           5.5T /var/lib/nova/instances/sdd1
>> sde                                                         5.5T
>> └─sde1                                        xfs           5.5T /var/lib/nova/instances/sde1
>> sdf                                                         5.5T
>> └─sdf1                                        xfs           5.5T /var/lib/nova/instances/sdf1
>> sdg                                                         5.5T
>> └─sdg1                                        xfs           5.5T /var/lib/nova/instances/sdg1
>> sdh                                                         5.5T
>> └─sdh1                                        xfs           5.5T /var/lib/nova/instances/sdh1
>> sdi                                                         5.5T
>> └─sdi1                                        xfs           5.5T /var/lib/nova/instances/sdi1
>> sdj                                                         5.5T
>> └─sdj1                                        xfs           5.5T /var/lib/nova/instances/sdj1
>> sdk                                                         372G
>> ├─sdk1                                        ext2          487M /boot
>> ├─sdk2                                                        1K
>> └─sdk5                                        LVM2_member 371.5G
>>   ├─h020--vg-root                             ext4        370.6G /
>>   └─h020--vg-swap_1                           swap          976M [SWAP]
>>
>> - Do you already have existing physical volumes for cinder (assuming you
>> deployed cinder with lvm as in the provided link)?
>> Yes, I have tried it with one of the HDs (/dev/sdb).
>>
>> - If the system recognizes the new disks and you deployed cinder with lvm
>> you can create a new physical volume and extend your existing volume group
>> to have more space for cinder. Is this a failing step or something else?
>> The system does not recognize the disks automatically; I have manually
>> mounted them or added them to cinder.
>>
>> In the nova-compute logs I can only see the main hard disk counted in the
>> total phys_disk; it was supposed to show more phys_disk available, at least
>> 5.8 TB, if only /dev/sdb is added, as per my understanding. (Maybe I am
>> thinking about it the wrong way; I want to increase my compute node disk
>> size to launch more VMs.)
>>
>> 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker
>> [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] Final resource view:
>> name=h020 phys_ram=515767MB used_ram=512MB *phys_disk=364GB* used_disk=0GB
>> total_vcpus=40 used_vcpus=0 pci_stats=[]
>>
>> - Please describe more precisely what exactly you tried and what exactly
>> fails.
>> As explained in the previous point, I want to increase the phys_disk size
>> to use the compute node more efficiently. So, to add the HDs to the compute
>> node, I am installing cinder on the compute node to add all the HDs.
>>
>> I might be doing something wrong.
>>
>> Thanks and Regards,
>> Jayachander.
>>
>> On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block <eblock at nde.ag> wrote:
>>
>> Hi,
>>>
>>> there are a couple of questions rising up:
>>>
>>> - what do you mean by "disks are not added"? Does the server recognize
>>> them? Do you see them in the output of "lsblk"?
>>> - Do you already have existing physical volumes for cinder (assuming you
>>> deployed cinder with lvm as in the provided link)?
>>> - If the system recognizes the new disks and you deployed cinder with lvm
>>> you can create a new physical volume and extend your existing volume
>>> group
>>> to have more space for cinder. Is this a failing step or something else?
>>> - Please describe more precisely what exactly you tried and what exactly
>>> fails.
>>>
>>> The failing neutron-l3-agent shouldn't have anything to do with your disk
>>> layout, so it's probably something else.
>>>
>>> Regards,
>>> Eugen
>>>
>>>
>>> Zitat von Jay See <jayachander.it at gmail.com>:
>>>
>>> Hai,
>>>
>>>>
>>>> I am installing Openstack Queens on Ubuntu Server.
>>>>
>>>> My server has extra hard disk(s) apart from the main hard disk where the
>>>> OS (Ubuntu) is running.
>>>>
>>>> (https://docs.openstack.org/cinder/queens/install/cinder-storage-install-ubuntu.html)
>>>> As suggested in the cinder guide (above link), I have been trying to add
>>>> the new hard disk, but the other hard disks are not getting added.
>>>>
>>>> Can anyone tell me what I am missing to add these hard disks?
>>>>
>>>> Other info: neutron-l3-agent on the controller is not running. Is it
>>>> related to this issue? I am thinking it is not related.
>>>>
>>>> I am new to Openstack.
>>>>
>>>> ~ Jayachander.
>>>> --
>>>> P  *SAVE PAPER – Please do not print this e-mail unless absolutely
>>>> necessary.*
>>>>
>>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to     : openstack at lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>>
>>
>> --
>> P  *SAVE PAPER – Please do not print this e-mail unless absolutely
>> necessary.*
>>
>
>
>
>


-- 
P  *SAVE PAPER – Please do not print this e-mail unless absolutely
necessary.*