Multiple backends for OpenStack
Hi All,
Good Day.
From the documentation we understand that we can configure multiple backends for Cinder (Ceph, an external storage device, NFS, etc.). Is there any way to choose the backend while launching an instance? Say instance1's backend volume should come from an external storage device such as EMC, while instance2 launches with a backend volume from Ceph. Can this be achieved using Cinder's availability zone implementation, or some other way?
I have gone through the link below (section "Configure Block Storage scheduler multi back end"): https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.ht...
Suggestions please.
Regards, Deepa K R
On Wed, 2020-10-14 at 18:38 +0530, Deepa KR wrote:
> From the documentation we understand that we can configure multiple backends for Cinder (Ceph, an external storage device, NFS, etc.). Is there any way to choose the backend while launching an instance? Say instance1's backend should be an external storage device such as EMC, while instance2 launches with a backend volume from Ceph.

Kind of. You can use volume types to do this indirectly. End users should generally not be aware whether it is Ceph or an EMC SAN, unless you name the volume types "ceph" and "emc", but that is an operator choice. You can map a volume type to a specific backend via the config file.

> Can this be achieved using Cinder's availability zone implementation, or some other way?
> I have gone through the link below (section "Configure Block Storage scheduler multi back end"):
> https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.ht...

Well, that basically covers what you have to do. Your cinder config for lvm, emc, and ceph might look like this:

    [DEFAULT]
    enabled_backends = lvm,emc,ceph

    [lvm]
    volume_group = cinder-volume-1
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_backend_name = lvm

    [emc]
    use_multipath_for_image_xfer = true
    volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
    volume_backend_name = emcfc

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1

Or you might have three different config files, one for each. You then create three different volume types and associate each with a backend (note that the extra spec must match the volume_backend_name set in the config, e.g. emcfc for the emc backend):

    openstack --os-username admin --os-tenant-name admin volume type create lvm
    openstack --os-username admin --os-tenant-name admin volume type create emc
    openstack --os-username admin --os-tenant-name admin volume type create ceph

    openstack --os-username admin --os-tenant-name admin volume type set lvm --property volume_backend_name=lvm
    openstack --os-username admin --os-tenant-name admin volume type set emc --property volume_backend_name=emcfc
    openstack --os-username admin --os-tenant-name admin volume type set ceph --property volume_backend_name=ceph

Then you can create volumes with those types:

    openstack volume create --size 1 --type lvm my_lvm_volume
    openstack volume create --size 1 --type ceph my_ceph_volume

and boot a server with them:

    openstack server create --volume my_lvm_volume --volume my_ceph_volume ...

If you have cross-AZ attach disabled in nova and you want every type/backend to be accessible in every AZ, you need to deploy an instance of the cinder volume driver for each of the types in each AZ.

I don't know if that answers your question or not, but volume types are the way to request a specific backend at the API level.
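The type-to-backend mapping Sean describes can be illustrated with a short, self-contained Python sketch. This is illustrative logic only, not cinder's actual scheduler code; the backend host strings are made up, while the type and backend names mirror the example config:

```python
# Illustrative only: mimics how a volume type's volume_backend_name extra
# spec is matched against the name each enabled backend reports to the
# scheduler. Host strings are invented for the example.

# Capabilities each enabled backend would report.
backends = {
    "lvm@cinder-1":  {"volume_backend_name": "lvm"},
    "emc@cinder-1":  {"volume_backend_name": "emcfc"},
    "ceph@cinder-1": {"volume_backend_name": "ceph"},
}

# Volume types with their extra specs, as created via the CLI.
volume_types = {
    "lvm":  {"volume_backend_name": "lvm"},
    "emc":  {"volume_backend_name": "emcfc"},
    "ceph": {"volume_backend_name": "ceph"},
}

def pick_backend(type_name: str) -> str:
    """Return the first backend whose reported name matches the type's extra spec."""
    wanted = volume_types[type_name]["volume_backend_name"]
    for host, caps in backends.items():
        if caps["volume_backend_name"] == wanted:
            return host
    raise LookupError(f"no backend reports volume_backend_name={wanted!r}")

print(pick_backend("ceph"))  # ceph@cinder-1
```

A volume created with type "ceph" therefore always lands on the Ceph backend, without the end user ever naming the backend directly.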
Hello Sean,

Thanks very much for the detailed response. I am still unclear on the points below.

> if you have cross-AZ attach disabled in nova and you want all types/backends to be accessible in all AZs, you need to deploy an instance of the cinder volume driver for each of the types in each AZ.

"deploy an instance of the cinder volume driver for each of the types in each AZ" <<<< How can I achieve this?

> i don't know if that answers your question or not, but volume types are the way to request a specific backend at the API level.

<<< I see the volume type option while creating a volume, but when I try to launch an instance I don't see that parameter in Horizon; maybe it exists on the command line (not sure though). Is this something that needs to be handled at the AZ level?

On Wed, Oct 14, 2020 at 7:29 PM Sean Mooney <smooney@redhat.com> wrote:
--
Regards, Deepa K R | DevOps Team Lead
USA | UAE | INDIA | AUSTRALIA
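On the per-AZ question asked above, one way the "cinder volume driver per type per AZ" deployment could be expressed is per backend section in cinder.conf. This is a sketch only, assuming cinder's backend_availability_zone option (which overrides the [DEFAULT] storage_availability_zone for a single backend section); the section, backend, and AZ names here are hypothetical:

```ini
# Sketch: give each backend its own AZ so volumes of each type are
# schedulable where cross-AZ attach rules require them to be.
# All names below are placeholders; adjust to your deployment.
[DEFAULT]
enabled_backends = ceph-az1,emc-az2

[ceph-az1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
# Overrides storage_availability_zone for this backend only.
backend_availability_zone = az1

[emc-az2]
volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
volume_backend_name = emcfc
backend_availability_zone = az2
```

To expose every backend in every AZ you would repeat each backend section once per AZ (e.g. a ceph-az2 section with backend_availability_zone = az2), each running under its own cinder-volume service.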
Hey Deepa,

We have done this in the past with an old HP MSA appliance and Ceph. Deploying this with cross-AZ attach disabled will work fine, as long as the compute nodes have access to the storage backend and the necessary config is done.

To create a volume type, go to the admin section in the dashboard (or use the CLI) and create a new volume type. Then add extra specs with volume_backend_name = the name you gave it in the cinder config. After that you can select where you want to create the volume from the drop-down, or via the CLI. Bear in mind you need to create a volume type for each backend if you make more than one available.

Hope this helps.
//florian
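Florian's point that the extra spec must match the name in the cinder config is easy to get wrong. A small, self-contained Python sketch (illustrative only, not an OpenStack tool) that cross-checks planned extra specs against a cinder.conf fragment:

```python
import configparser

# Illustrative helper, not an OpenStack tool: verify that every volume
# type's planned volume_backend_name extra spec matches a backend name
# actually declared in cinder.conf.
CINDER_CONF = """
[DEFAULT]
enabled_backends = lvm,emc,ceph

[lvm]
volume_backend_name = lvm

[emc]
volume_backend_name = emcfc

[ceph]
volume_backend_name = ceph
"""

def declared_backend_names(conf_text: str) -> set[str]:
    """Collect the backend names declared by the enabled backend sections."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    enabled = [s.strip() for s in cp["DEFAULT"]["enabled_backends"].split(",")]
    # Fall back to the section name when volume_backend_name is unset.
    return {cp[s].get("volume_backend_name", s) for s in enabled}

# Extra specs you plan to set on each volume type.
planned_specs = {"lvm": "lvm", "emc": "emc", "ceph": "ceph"}

names = declared_backend_names(CINDER_CONF)
mismatches = {t: n for t, n in planned_specs.items() if n not in names}
print(mismatches)  # {'emc': 'emc'} -- the emc type should point at 'emcfc'
```

A mismatch like the one flagged here is exactly the case where the scheduler finds no valid host and volume creation ends up in the error state.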
Hi Deepa,

Backend selection is controlled by the scheduler. The end user can choose among different volume types configured by the administrator, and those types tell the scheduler what characteristics are needed for the selected storage. The Volume Type section in the multi-backend docs briefly describes some of this: https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.ht...

These volume types can be configured with extra specs that explicitly declare a specific backend name to use for creating volumes of that type, or they can contain extra specs that just define other properties (such as stating that the protocol needs to be iSCSI), and the scheduler will use those to decide where to place the volume.

Some description of defining availability zones in extra specs (which I'm seeing could really use some updates) can be found here: https://docs.openstack.org/cinder/latest/admin/blockstorage-availability-zon...

From the command line, you can also explicitly state which availability zone you want the volume created in. See bullets 2 and 3 here: https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html#create-...

Good luck!
Sean
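The capability-style scheduling Sean describes, where extra specs constrain properties instead of naming a backend outright, can be sketched in a few lines of Python. This is illustrative logic only, not cinder's actual CapabilitiesFilter, and the backend data is invented for the example:

```python
# Illustrative only: filter backends on reported capabilities, the way
# capability-style extra specs do, rather than naming a backend directly.
# Host strings and capability values are made up for this example.
backends = {
    "emc@fc":    {"storage_protocol": "FC",    "availability_zone": "az1"},
    "lvm@iscsi": {"storage_protocol": "iSCSI", "availability_zone": "az1"},
    "ceph@rbd":  {"storage_protocol": "ceph",  "availability_zone": "az2"},
}

def matching_backends(required: dict) -> list:
    """Backends whose reported capabilities satisfy every required spec."""
    return sorted(
        host for host, caps in backends.items()
        if all(caps.get(k) == v for k, v in required.items())
    )

# A type that only demands the iSCSI protocol, leaving placement to the scheduler:
print(matching_backends({"storage_protocol": "iSCSI"}))  # ['lvm@iscsi']
# Adding an AZ constraint narrows the candidate set further:
print(matching_backends({"storage_protocol": "iSCSI",
                         "availability_zone": "az2"}))   # []
```

The empty second result shows why, with cross-AZ attach disabled, a backend (or a copy of it) has to exist in each AZ where its volumes must be attachable.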
participants (4)
- Deepa KR
- Florian Rommel
- Sean McGinnis
- Sean Mooney