[Openstack] Adding new Hard disk to Compute Node

Eugen Block eblock at nde.ag
Wed Aug 8 13:36:16 UTC 2018


Okay, I'm really not sure if I understand your setup correctly.

> The server does not add them automatically, I tried to mount them. I tried
> the way discussed on that page with /dev/sdb only. The other hard disks I
> mounted myself. Yes, I can see them in the lsblk output below:

What do you mean by "tried with /dev/sdb"? I assume this is a fresh
setup and Cinder didn't work yet, am I right?
The new disks won't be added automatically to your cinder
configuration, if that's what you expected. You'll have to create new
physical volumes and then extend the existing VG to use the new disks.
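
For example, a rough sketch (assuming the VG is called "cinder-volumes",
as the lsblk output below suggests, and that a new, still unused disk
shows up as /dev/sdX; replace that with the real device name):

   pvcreate /dev/sdX                  # initialize the new disk as an LVM physical volume
   vgextend cinder-volumes /dev/sdX   # add it to the existing volume group
   vgs cinder-volumes                 # check the free space now available in the VG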

> In the nova-compute logs I can only see the main hard disk in the complete
> phys_disk; it was supposed to show more phys_disk available, at least 5.8 TB,
> if only /dev/sdb is added, as I understand it. (Maybe I am thinking about it
> the wrong way; I want to increase my compute node's disk size to launch
> more VMs.)

If you plan to use cinder volumes as disks for your instances, you
don't need much space in /var/lib/nova/instances, but rather more space
available for cinder, so you'll need to grow the VG.
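
Just as an illustration (a sketch, not specific to your deployment; the
image, flavor and network names are placeholders): once the VG has free
space again, you can put the root disk of an instance on a cinder volume
instead of the local /var/lib/nova/instances, e.g.:

   # create a bootable 50G volume from an image
   openstack volume create --image <image> --size 50 boot-vol
   # boot the instance from that volume instead of a local nova disk
   openstack server create --volume boot-vol --flavor <flavor> --network <network> test-vm

Then the phys_disk value reported by nova-compute matters much less,
because the instance disks live in the cinder VG.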

Regards


Quoting Jay See <jayachander.it at gmail.com>:

> Hi,
>
> Thanks for a quick response.
>
> - what do you mean by "disks are not added"? Does the server recognize
> them? Do you see them in the output of "lsblk"?
> The server does not add them automatically, I tried to mount them. I tried
> the way discussed on that page with /dev/sdb only. The other hard disks I
> mounted myself. Yes, I can see them in the lsblk output below:
> root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
> NAME                                          FSTYPE        SIZE MOUNTPOINT                   LABEL
> sda                                                         5.5T
> ├─sda1                                        vfat          500M                              ESP
> ├─sda2                                        vfat          100M                              DIAGS
> └─sda3                                        vfat            2G                              OS
> sdb                                                         5.5T
> ├─sdb1                                                      5.5T
> ├─cinder--volumes-cinder--volumes--pool_tmeta                84M
> │ └─cinder--volumes-cinder--volumes--pool                   5.2T
> └─cinder--volumes-cinder--volumes--pool_tdata               5.2T
>   └─cinder--volumes-cinder--volumes--pool                   5.2T
> sdc                                                         5.5T
> └─sdc1                                        xfs           5.5T
> sdd                                                         5.5T
> └─sdd1                                        xfs           5.5T /var/lib/nova/instances/sdd1
> sde                                                         5.5T
> └─sde1                                        xfs           5.5T /var/lib/nova/instances/sde1
> sdf                                                         5.5T
> └─sdf1                                        xfs           5.5T /var/lib/nova/instances/sdf1
> sdg                                                         5.5T
> └─sdg1                                        xfs           5.5T /var/lib/nova/instances/sdg1
> sdh                                                         5.5T
> └─sdh1                                        xfs           5.5T /var/lib/nova/instances/sdh1
> sdi                                                         5.5T
> └─sdi1                                        xfs           5.5T /var/lib/nova/instances/sdi1
> sdj                                                         5.5T
> └─sdj1                                        xfs           5.5T /var/lib/nova/instances/sdj1
> sdk                                                         372G
> ├─sdk1                                        ext2          487M /boot
> ├─sdk2                                                        1K
> └─sdk5                                        LVM2_member 371.5G
>   ├─h020--vg-root                             ext4        370.6G /
>   └─h020--vg-swap_1                           swap          976M [SWAP]
>
> - Do you already have existing physical volumes for cinder (assuming you
> deployed cinder with lvm as in the provided link)?
> Yes, I have tried it with one of the hard disks (/dev/sdb).
>
> - If the system recognizes the new disks and you deployed cinder with lvm
> you can create a new physical volume and extend your existing volume group
> to have more space for cinder. Is this a failing step or something else?
> The system does not recognize the disks automatically; I have manually
> mounted them or added them to cinder.
>
> In the nova-compute logs I can only see the main hard disk in the complete
> phys_disk; it was supposed to show more phys_disk available, at least 5.8 TB,
> if only /dev/sdb is added, as I understand it. (Maybe I am thinking about it
> the wrong way; I want to increase my compute node's disk size to launch
> more VMs.)
>
> 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker
> [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] Final resource view:
> name=h020 phys_ram=515767MB used_ram=512MB *phys_disk=364GB* used_disk=0GB
> total_vcpus=40 used_vcpus=0 pci_stats=[]
>
> - Please describe more precisely what exactly you tried and what exactly
> fails.
> As explained in the previous point, I want to increase the phys_disk size
> to use the compute node more efficiently. So, to add the hard disks to the
> compute node, I am installing cinder on the compute node to add all of them.
>
> I might be doing something wrong.
>
> Thanks and Regards,
> Jayachander.
>
> On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block <eblock at nde.ag> wrote:
>
>> Hi,
>>
>> there are a couple of questions rising up:
>>
>> - what do you mean by "disks are not added"? Does the server recognize
>> them? Do you see them in the output of "lsblk"?
>> - Do you already have existing physical volumes for cinder (assuming you
>> deployed cinder with lvm as in the provided link)?
>> - If the system recognizes the new disks and you deployed cinder with lvm
>> you can create a new physical volume and extend your existing volume group
>> to have more space for cinder. Is this a failing step or something else?
>> - Please describe more precisely what exactly you tried and what exactly
>> fails.
>>
>> The failing neutron-l3-agent shouldn't have anything to do with your disk
>> layout, so it's probably something else.
>>
>> Regards,
>> Eugen
>>
>>
>> Quoting Jay See <jayachander.it at gmail.com>:
>>
>>> Hi,
>>>
>>> I am installing Openstack Queens on Ubuntu Server.
>>>
>>> My server has extra hard disks apart from the main hard disk where the
>>> OS (Ubuntu) is running.
>>>
>>> (https://docs.openstack.org/cinder/queens/install/cinder-storage-install-ubuntu.html)
>>> As suggested in the cinder install guide (above link), I have been trying
>>> to add the new hard disk, but the other hard disks are not getting added.
>>>
>>> Can anyone tell me what I am missing to add these hard disks?
>>>
>>> Other info: neutron-l3-agent on the controller is not running, is it
>>> related to this issue? I think it is not related.
>>>
>>> I am new to Openstack.
>>>
>>> ~ Jayachander.
>>> --
>>> P  *SAVE PAPER – Please do not print this e-mail unless absolutely
>>> necessary.*
>>>
>>
>>
>>
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
>
>
> --
> P  *SAVE PAPER – Please do not print this e-mail unless absolutely
> necessary.*





