<div dir="ltr">Hai Eugen,<div><br></div><div>Thanks for your suggestions and I went back to find more about adding the new HD to VG. I think it was successful. (Logs are at the end of the mail)</div><div><br></div><div>Followed this link -Â <a href="https://www.howtoforge.com/logical-volume-manager-how-can-i-extend-a-volume-group">https://www.howtoforge.com/logical-volume-manager-how-can-i-extend-a-volume-group</a></div><div><br></div><div>
<div style="font-size:small;text-decoration-style:initial;text-decoration-color:initial">But still on the nova-compute logs it still shows wrong <span style="font-size:small;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">phys_disk size. Even in the horizon it doesn't get updated with the new HD added to compute node.</span></div><div style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><br></div><div style="font-size:small;text-decoration-style:initial;text-decoration-color:initial">2018-08-08 19:22:56.671 3335 INFO nova.compute.resource_tracker [req-14a2b7e2-7703-4a75-9014-180eb26876ff - - - - -] Final resource view: name=h020 phys_ram=515767MB used_ram=512MB <b>phys_disk=364GB<span> </span></b>used_disk=0GB total_vcpus=40 used_vcpus=0 pci_stats=[]<br></div><div style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><br></div><div style="font-size:small;text-decoration-style:initial;text-decoration-color:initial">I understood they are not supposed to be mounted on<span style="font-size:12.8px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline"><span> </span>/var/lib/nova/instances so removed them now.</span></div><br class="gmail-Apple-interchange-newline">
<div style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><span style="font-size:12.8px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">Thanks</span></div><div style="font-size:small;text-decoration-style:initial;text-decoration-color:initial"><span style="font-size:12.8px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">Jay.</span></div>

root@h020:~# vgdisplay
  --- Volume group ---
  VG Name               h020-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               371.52 GiB
  PE Size               4.00 MiB
  Total PE              95109
  Alloc PE / Size       95105 / 371.50 GiB
  Free  PE / Size       4 / 16.00 MiB
  VG UUID               4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U

root@h020:~# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
root@h020:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdk5
  VG Name               h020-vg
  PV Size               371.52 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              95109
  Free PE               4
  Allocated PE          95105
  PV UUID               BjGeac-TRkC-0gi8-GKX8-2Ivc-7awz-DTK2nR

  "/dev/sdb1" is a new physical volume of "5.46 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               5.46 TiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               CPp369-3MwJ-ic3I-Keh1-dJJY-Gcrc-CpC443

root@h020:~# vgextend /dev/h020-vg /dev/sdb1
  Volume group "h020-vg" successfully extended
root@h020:~# vgdisplay
  --- Volume group ---
  VG Name               h020-vg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               5.82 TiB
  PE Size               4.00 MiB
  Total PE              1525900
  Alloc PE / Size       95105 / 371.50 GiB
  Free  PE / Size       1430795 / 5.46 TiB
  VG UUID               4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U

root@h020:~# service nova-compute restart
root@h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME                  FSTYPE        SIZE MOUNTPOINT LABEL
sda                                 5.5T
├─sda1                vfat          500M            ESP
├─sda2                vfat          100M            DIAGS
└─sda3                vfat            2G            OS
sdb                                 5.5T
└─sdb1                LVM2_member   5.5T
sdk                                 372G
├─sdk1                ext2          487M /boot
├─sdk2                                1K
└─sdk5                LVM2_member 371.5G
  ├─h020--vg-root     ext4        370.6G /
  └─h020--vg-swap_1   swap          976M [SWAP]
root@h020:~# pvscan
  PV /dev/sdk5   VG h020-vg   lvm2 [371.52 GiB / 16.00 MiB free]
  PV /dev/sdb1   VG h020-vg   lvm2 [5.46 TiB / 5.46 TiB free]
  Total: 2 [5.82 TiB] / in use: 2 [5.82 TiB] / in no VG: 0 [0   ]
root@h020:~# vgs
  VG      #PV #LV #SN Attr   VSize VFree
  h020-vg   2   2   0 wz--n- 5.82t 5.46t
<div style="font-size:small;text-decoration-style:initial;text-decoration-color:initial">root@h020:~# vi /var/log/nova/nova-compute.log</div>root@h020:~#Â </div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Aug 8, 2018 at 3:36 PM, Eugen Block <span dir="ltr"><<a href="mailto:eblock@nde.ag" target="_blank">eblock@nde.ag</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Okay, I'm really not sure if I understand your setup correctly.<span class=""><br>
>
>> Server does not add them automatically; I tried to mount them. I tried
>> the way they discussed on the page with /dev/sdb only. The other hard
>> disks I mounted myself. Yes, I can see them in the lsblk output below.
>
> What do you mean by "tried with /dev/sdb"? I assume this is a fresh setup and Cinder didn't work yet, am I right?
> The new disks won't be added automatically to your cinder configuration, if that's what you expected. You'll have to create new physical volumes and then extend the existing VG to use the new disks.
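>
> Roughly like this, for example (just a sketch; assuming the VG from the
> install guide is named "cinder-volumes" and the new disk has a single
> partition /dev/sdb1):
>
>   pvcreate /dev/sdb1                  # register the partition as a PV
>   vgextend cinder-volumes /dev/sdb1   # add the PV to the existing VG
>   vgs                                 # verify the added free space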
>
>> In the nova-compute logs I can only see the main hard disk counted in
>> the total phys_disk; it was supposed to show more phys_disk available,
>> at least 5.8 TB, if only /dev/sdb is added, as per my understanding.
>> (Maybe I am thinking about it the wrong way; I want to increase my
>> compute node's disk size to launch more VMs.)
>
> If you plan to use cinder volumes as disks for your instances, you don't need much space in /var/lib/nova/instances but more space available for cinder, so you'll need to grow the VG.
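>
> For example (illustrative names only, assuming the openstack CLI is set
> up):
>
>   openstack volume create --size 100 vol1    # carve a volume out of the VG
>   openstack server create --flavor m1.small \
>     --volume vol1 --network mynet vm1        # boot an instance from it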
>
> Regards
>
> Quoting Jay See <jayachander.it@gmail.com>:
>
>> Hi,
>>
>> Thanks for the quick response.
>>
>> - What do you mean by "disks are not added"? Does the server recognize
>> them? Do you see them in the output of "lsblk"?
>> Server does not add them automatically; I tried to mount them. I tried
>> the way they discussed on the page with /dev/sdb only. The other hard
>> disks I mounted myself. Yes, I can see them in the lsblk output below:
>> root@h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
>> NAME                                           FSTYPE        SIZE MOUNTPOINT                   LABEL
>> sda                                                          5.5T
>> ├─sda1                                         vfat          500M                              ESP
>> ├─sda2                                         vfat          100M                              DIAGS
>> └─sda3                                         vfat            2G                              OS
>> sdb                                                          5.5T
>> ├─sdb1                                                       5.5T
>> ├─cinder--volumes-cinder--volumes--pool_tmeta                 84M
>> │ └─cinder--volumes-cinder--volumes--pool                    5.2T
>> └─cinder--volumes-cinder--volumes--pool_tdata                5.2T
>>   └─cinder--volumes-cinder--volumes--pool                    5.2T
>> sdc                                                          5.5T
>> └─sdc1                                         xfs           5.5T
>> sdd                                                          5.5T
>> └─sdd1                                         xfs           5.5T /var/lib/nova/instances/sdd1
>> sde                                                          5.5T
>> └─sde1                                         xfs           5.5T /var/lib/nova/instances/sde1
>> sdf                                                          5.5T
>> └─sdf1                                         xfs           5.5T /var/lib/nova/instances/sdf1
>> sdg                                                          5.5T
>> └─sdg1                                         xfs           5.5T /var/lib/nova/instances/sdg1
>> sdh                                                          5.5T
>> └─sdh1                                         xfs           5.5T /var/lib/nova/instances/sdh1
>> sdi                                                          5.5T
>> └─sdi1                                         xfs           5.5T /var/lib/nova/instances/sdi1
>> sdj                                                          5.5T
>> └─sdj1                                         xfs           5.5T /var/lib/nova/instances/sdj1
>> sdk                                                          372G
>> ├─sdk1                                         ext2          487M /boot
>> ├─sdk2                                                         1K
>> └─sdk5                                         LVM2_member 371.5G
>>   ├─h020--vg-root                              ext4        370.6G /
>>   └─h020--vg-swap_1                            swap          976M [SWAP]
>>
>> - Do you already have existing physical volumes for cinder (assuming you
>> deployed cinder with lvm as in the provided link)?
>> Yes, I have tried with one of the HDs (/dev/sdb).
>>
>> - If the system recognizes the new disks and you deployed cinder with
>> lvm, you can create a new physical volume and extend your existing
>> volume group to have more space for cinder. Is this a failing step or
>> something else?
>> The system does not recognize the disks automatically; I have manually
>> mounted them or added them to cinder.
>>
>> In the nova-compute logs I can only see the main hard disk counted in
>> the total phys_disk; it was supposed to show more phys_disk available,
>> at least 5.8 TB, if only /dev/sdb is added, as per my understanding.
>> (Maybe I am thinking about it the wrong way; I want to increase my
>> compute node's disk size to launch more VMs.)
>>
>> 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker
>> [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] Final resource view:
>> name=h020 phys_ram=515767MB used_ram=512MB *phys_disk=364GB* used_disk=0GB
>> total_vcpus=40 used_vcpus=0 pci_stats=[]
>>
>> - Please describe more precisely what exactly you tried and what exactly
>> fails.
>> As explained in the previous point, I want to increase the phys_disk size
>> to use the compute node more efficiently. So, to add the HDs to the
>> compute node, I am installing cinder on the compute node to add all the
>> HDs.
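>>
>> For reference, the LVM backend section of cinder.conf from the Queens
>> install guide looks roughly like this ("cinder-volumes" is the VG name
>> the guide assumes):
>>
>>   [DEFAULT]
>>   enabled_backends = lvm
>>
>>   [lvm]
>>   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
>>   volume_group = cinder-volumes
>>   iscsi_protocol = iscsi
>>   iscsi_helper = tgtadm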
>>
>> I might be doing something wrong.
>>
>> Thanks and Regards,
>> Jayachander.
>>
>> On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block <eblock@nde.ag> wrote:
>>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
<br>
there are a couple of questions rising up:<br>
<br>
- what do you mean by "disks are not added"? Does the server recognize<br>
them? Do you see them in the output of "lsblk"?<br>
- Do you already have existing physical volumes for cinder (assuming you<br>
deployed cinder with lvm as in the provided link)?<br>
- If the system recognizes the new disks and you deployed cinder with lvm<br>
you can create a new physical volume and extend your existing volume group<br>
to have more space for cinder. Is this a failing step or someting else?<br>
- Please describe more precisely what exactly you tried and what exactly<br>
fails.<br>
>>>
>>> The failing neutron-l3-agent shouldn't have anything to do with your
>>> disk layout, so it's probably something else.
>>>
>>> Regards,
>>> Eugen
>>>
>>> Quoting Jay See <jayachander.it@gmail.com>:
>>>
>>>> Hi,
>>>>
>>>> I am installing OpenStack Queens on Ubuntu Server.
>>>>
>>>> My server has extra hard disk(s) apart from the main hard disk where
>>>> the OS (Ubuntu) is running.
>>>>
>>>> (https://docs.openstack.org/cinder/queens/install/cinder-storage-install-ubuntu.html)
>>>>
>>>> As suggested in the cinder guide (above link), I have been trying to
>>>> add the new hard disk, but the other hard disks are not getting added.
>>>>
>>>> Can anyone tell me what I am missing to add these hard disks?
>>>>
>>>> Other info: the neutron-l3-agent on the controller is not running. Is
>>>> it related to this issue? I am thinking it is not related.
>>>>
>>>> I am new to OpenStack.
>>>>
>>>> ~ Jayachander.

-- 
SAVE PAPER – Please do not print this e-mail unless absolutely necessary.