[openstack-dev] [nova][cinder] Extending attached disks

Walter A. Boring IV walter.boring at hp.com
Fri Aug 21 19:06:22 UTC 2015


This isn't as simple as making calls to virsh after an attached volume 
is extended on the cinder backend, especially when multipath is involved.
You need the host system to understand that the volume has changed size 
first, or virsh will never see the new size.

For iSCSI/FC volumes you need to issue a rescan on the bus (iSCSI 
session, FC fabric), and when multipath is involved it gets quite a bit 
more complex.

This leads to one of the sticking points with doing this at all: when 
cinder extends the volume, it needs to tell nova that it has happened, 
and nova (or something on the compute node) will have to issue the 
correct commands, in the correct sequence, for it all to work.
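
Just to make the shape of that concrete, here is a rough sketch of what 
such a notification could look like using nova's existing 
os-server-external-events API. The 'volume-extended' event name is 
purely hypothetical (nothing handles it today), and the host, tenant 
and token below are placeholders; this is only to illustrate the flow:

  # hypothetical: cinder (or an operator) tells nova the backend extend
  # happened, so the compute node can run the host-side rescan
  curl -X POST http://nova-api:8774/v2/$TENANT_ID/os-server-external-events \
       -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
       -d '{"events": [{"name": "volume-extended",
                        "tag": "<volume-id>",
                        "server_uuid": "<instance-uuid>"}]}'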

You'll also have to consider multi-attached volumes, which adds yet 
another wrinkle.

A good quick source for some of the commands and procedures that are 
needed can be found here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/online-logical-units.html
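
For the multipath case, the gist of what that guide walks through is 
roughly this (the map name is a placeholder, and older device-mapper 
versions need extra dmsetup steps that I'm leaving out):

  # after rescanning every underlying sdX path as above,
  # tell multipathd to grow the dm map to the new size
  multipathd -k"resize map mpathN"
  multipath -ll mpathN   # verify the new size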


You can see that volumes with multipath require a lot of hand holding 
to be done correctly.  It's non-trivial.  I see this as being very error 
prone, and any failure in the multipath process could lead to big 
problems :(

Walt
> Hi everyone,
>
> Apologies for the duplicate send; it looks like my mail client doesn't create very clean HTML messages. Here is the message in plain-text. I'll make sure to send to the list in plain-text from now on.
>
> In my current pre-production deployment we were looking for a method to live-extend volumes attached to an instance. This was one of the requirements for deployment. I've worked with libvirt hypervisors before, so it didn't take long to find a workable solution. However, I'm not sure how transferable this will be across deployment models. Our deployment model uses libvirt for nova and ceph for backend storage, which obviously means libvirt is using rbd to connect to volumes.
>
> Currently the method I use is:
>
> - Force cinder to run an extend operation.
> - Tell Libvirt that the attached disk has been extended.
>
> It would be worth discussing whether this can be ported upstream such that the API can handle the legwork, rather than this current manual method.
>
> Detailed instructions.
> You will need: the volume-id of the volume you want to resize, and the hypervisor_hostname and instance_name of the instance the volume is attached to.
>
> Example: extending volume f9fa66ab-b29a-40f6-b4f4-e9c64a155738 attached to instance-00000012 on node-6 to 100GB
>
> $ cinder reset-state --state available f9fa66ab-b29a-40f6-b4f4-e9c64a155738
> $ cinder extend f9fa66ab-b29a-40f6-b4f4-e9c64a155738 100
> $ cinder reset-state --state in-use f9fa66ab-b29a-40f6-b4f4-e9c64a155738
>
> $ ssh node-6
> node-6$ virsh qemu-monitor-command instance-00000012 --hmp "info block" | grep f9fa66ab-b29a-40f6-b4f4-e9c64a155738
> drive-virtio-disk1: removable=0 io-status=ok file=rbd:volumes-slow/volume-f9fa66ab-b29a-40f6-b4f4-e9c64a155738:id=cinder:key=<keyhere>==:auth_supported=cephx\\;none:mon_host=10.1.226.64\\:6789\\;10.1.226.65\\:6789\\;10.1.226.66\\:6789 ro=0 drv=raw encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
>
> This will get you the disk-id, which in this case is drive-virtio-disk1.
>
> node-6$ virsh qemu-monitor-command instance-00000012 --hmp "block_resize drive-virtio-disk1 100G"
>
> Finally, you need to perform a drive rescan on the actual instance and then resize the partition and extend the file-system. This will be OS specific.
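>
> For a typical Linux guest that last step looks roughly like this (device and partition names are examples only; with virtio-blk the kernel usually notices the new capacity on its own after the block_resize):
>
> instance$ sudo growpart /dev/vdb 1      # only if the volume is partitioned
> instance$ sudo resize2fs /dev/vdb1      # for ext3/ext4
> instance$ sudo xfs_growfs /mount/point  # for xfs instead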
>
> I've tested this a few times and it seems very reliable.
>
> Taylor Bertie
> Enterprise Support Infrastructure Engineer
>
> Mobile +64 27 952 3949
> Phone +64 4 462 5030
> Email taylor.bertie at solnet.co.nz
>
> Solnet Solutions Limited
> Level 12, Solnet House
> 70 The Terrace, Wellington 6011
> PO Box 397, Wellington 6140
>
> www.solnet.co.nz
>



