[Openstack-operators] Cinder attach volume
Delatte, Craig
craig.delatte at twcable.com
Fri Jul 22 14:03:48 UTC 2016
Without knowing your environment, it is hard to say specifically how it should look, but to give an example: here I have a ceph_ssd backend defined, containing all the information I would need to connect to and use the backend.
[ceph_ssd]
volume_backend_name=ceph_ssd
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=volumes
rbd_pool=ssd_volumes
rbd_max_clone_depth=2
rbd_flatten_volume_from_snapshot=True
rbd_secret_uuid=<redacted>
backend_host=<redacted>
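One detail the message does not show: a named backend section like this only takes effect if it is also listed in enabled_backends under [DEFAULT]. A minimal sketch (this [DEFAULT] fragment is an assumption, not copied from the original config):

```ini
[DEFAULT]
# Tell cinder-volume which backend sections to load; names must
# match the section headers (here, the [ceph_ssd] section above).
enabled_backends = ceph_ssd
```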
After you have this in place, you will see the volume service go active in your cinder-volume.log. At that point you can define your volume type and then add the extra spec tying that volume type to the backend, something like this:
volume_backend_name=ceph_ssd
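The type-and-extra-spec step can be sketched with the cinder CLI; the type name "ceph-ssd" here is illustrative, and the extra spec value must match volume_backend_name in cinder.conf exactly:

```shell
# Create a volume type and tie it to the backend defined in cinder.conf.
# "ceph-ssd" is a hypothetical type name; the value of volume_backend_name
# must match the backend's volume_backend_name setting (ceph_ssd above).
cinder type-create ceph-ssd
cinder type-key ceph-ssd set volume_backend_name=ceph_ssd

# Verify the extra spec was applied.
cinder extra-specs-list
```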
Also, you can always use `cinder service-list` to check the status of the backend you defined.
In my lab I would see this:
| cinder-volume | <redacted>@ceph_ssd | nova | enabled | up | 2016-07-22T14:02:59.000000 | - |
Craig DeLatte
Cloud Engineering & Operations – OpenStack DevOps
Charter
704-731-3356
610-306-4816
From: Alexandra Kisin
Date: Wednesday, July 20, 2016 at 9:47 AM
To: Time Warner Cable
Cc: "openstack-operators at lists.openstack.org"
Subject: Re: [Openstack-operators] Cinder attach volume
Thank you for the prompt response.
How should the [SATA] section look in the /etc/cinder/cinder.conf file?
Regards,
Alexandra Kisin
Servers & Network group, IBM R&D Labs in Israel
Unix & Virtualization Team
________________________________
Phone: +972-48296172 | Mobile: +972-54-6976172 | Fax: +972-4-8296111
From: "Delatte, Craig" <craig.delatte at twcable.com>
To: Alexandra Kisin/Haifa/IBM at IBMIL, "openstack-operators at lists.openstack.org"
Date: 20/07/2016 04:13 PM
Subject: Re: [Openstack-operators] Cinder attach volume
________________________________
Diagnosing the volume type error will require a copy of your cinder.conf, or you can just verify that you have a [SATA] section in it.
Also, to my knowledge the volume type won't change how libvirt presents the volume to the instance. I am not sure it can be changed; at least I have never had to explore doing what you are trying to do.
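One avenue worth checking, not confirmed anywhere in this thread: the device naming is decided by the bus libvirt attaches the disk to, not by the Cinder volume type, so setting disk-bus properties on the Glance image the instance boots from may get volumes presented as /dev/sdX. A hedged sketch; "my-image" is a placeholder name:

```shell
# Hedged sketch: hw_disk_bus and hw_scsi_model are standard Glance image
# properties that tell Nova/libvirt to attach disks on a SCSI bus, so
# devices show up as /dev/sdX instead of the virtio /dev/vdX.
# "my-image" is a hypothetical image name.
glance image-update my-image \
    --property hw_disk_bus=scsi \
    --property hw_scsi_model=virtio-scsi
```

Instances already booted from the image would need to be rebuilt or relaunched for the new bus to take effect.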
Craig DeLatte
From: Alexandra Kisin
Date: Wednesday, July 20, 2016 at 9:02 AM
To: "openstack-operators at lists.openstack.org"
Subject: [Openstack-operators] Cinder attach volume
Resent-From: Time Warner Cable
Hello.
I'm working on a Liberty OpenStack environment using ibm.storwize_svc.StorwizeSVCDriver as the volume_driver.
When I create a new volume and attach it to a VM, by default it appears as /dev/vdX on the instance, using the virtio driver, and everything works fine that way.
But I need it to be /dev/sdX for my application's needs.
I tried to create a new volume type by running:
cinder type-create SATA
cinder type-key SATA set volume_backend_name=sata
But when I start the volume create process, it fails: the new volume goes to error state, and the error in cinder-scheduler.log is:
ERROR cinder.scheduler.flows.create_volume Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid host was found. No weighed hosts available
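That "No valid host" error means the scheduler found no backend advertising a volume_backend_name matching the extra spec (the match is exact, including case). So cinder.conf needs a backend section reporting volume_backend_name=sata and listed in enabled_backends. A hedged sketch only; the section contents here are illustrative, reusing the driver named in this thread, not taken from any real config:

```ini
[DEFAULT]
# The [SATA] section is only loaded if named here.
enabled_backends = SATA

[SATA]
# Must exactly match the extra spec set on the SATA volume type.
volume_backend_name = sata
volume_driver = ibm.storwize_svc.StorwizeSVCDriver
```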
Please advise how a volume can be attached as a /dev/sdX device and not /dev/vdX.
A solution that requires stopping the VM first and only then attaching the volume can also be taken into account.
Thank you.
Regards,
Alexandra Kisin
Servers & Network group, IBM R&D Labs in Israel
Unix & Virtualization Team