[Openstack] [Cinder] command to list volume-group detected/managed by cinder

Marco CONSONNI mcocmo62 at gmail.com
Wed Dec 5 07:45:47 UTC 2012

Hello Ahmed,

it seems like we are facing the same problems!

I was able to install cinder and you can find the instructions here.

The instructions still contain some errors and/or miss some information
that you can find by reading these bugs, which are still open:

- https://bugs.launchpad.net/openstack-manuals/+bug/1078057
- https://bugs.launchpad.net/openstack-manuals/+bug/1078353

This installation uses tgt, and that is the only one I have experimented with.

To my knowledge, there's no cinder command for listing or showing the
volume group cinder is currently using. In my opinion, this makes sense:
the volume group is meaningful only when you use tgt as the virtualization
technology backing your virtual volumes, and that is an implementation
detail.
If, in the future, you use a different technology, the volume group
concept may disappear entirely.
In other words, tgt (the enabling technology) needs a volume group to
work, but the purpose of cinder is to provide volumes to users
independently of how they are implemented.
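Since cinder itself does not expose the volume group, one option is to read it from the configuration file and then inspect it with the LVM tools directly. The commands below are a sketch, assuming the default Folsom config path /etc/cinder/cinder.conf and the default group name cinder-volumes:

```shell
# Which volume group is cinder configured to use?
grep '^volume_group' /etc/cinder/cinder.conf

# Show details (size, free space, physical volumes) for that group:
sudo vgdisplay cinder-volumes

# List the logical volumes cinder has created inside it:
sudo lvs cinder-volumes
```

Note that this inspects the LVM layer underneath cinder, not cinder's own database, so it works only on the host where the volume group lives.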

Anyway, if you want a complete list of cinder commands, just type:

$ cinder help

In case you want to know more about a single command (the line above gives
the whole list), type:

$ cinder help <command>

For example,

$ cinder help show

gives information on the syntax and the semantics of the command "cinder show".

BTW: these instructions are also valid for other CLIs such as nova,
keystone and quantum.

Adding a new volume group in addition to cinder-volumes?

To be honest, I don't know; it depends on how tgt works.

On the other hand, if your goal is to add storage to an infrastructure
that is already in place, then you probably need to add physical volumes
to the cinder-volumes volume group.
I'm not an expert in this field, but I think you should investigate the
LVM volume group commands such as vgscan and vgextend (I presume the
latter is the one you need for adding storage to an existing volume
group), etc...
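The vgextend approach above can be sketched as follows; /dev/sdc is a placeholder for whatever unused block device you are adding, and pvcreate is destructive, so double-check the device name first:

```shell
# Initialize the new disk as an LVM physical volume:
sudo pvcreate /dev/sdc

# Add it to the existing cinder-volumes volume group:
sudo vgextend cinder-volumes /dev/sdc

# Verify the new total and free size of the group:
sudo vgdisplay cinder-volumes
```

No cinder restart should be needed for this, since cinder simply allocates logical volumes out of whatever free space the group reports.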

Caveat: read this bug report very carefully.


On Wed, Dec 5, 2012 at 12:02 AM, Ahmed Al-Mehdi <ahmedalmehdi at gmail.com>wrote:

> Hello,
> In my setup, cinder.conf is set as follows:
> root at novato:/etc/cinder# cat cinder.conf
> rootwrap_config=/etc/cinder/rootwrap.conf
> sql_connection = mysql://cinderUser:cinderPass@
> api_paste_confg = /etc/cinder/api-paste.ini
> iscsi_helper=ietadm
> volume_name_template = volume-%s
> volume_group = cinder-volumes
> verbose = True
> auth_strategy = keystone
> #osapi_volume_listen_port=5900
> root at novato:/etc/cinder# grep -nsir cinder  /etc/iet/*
> root at novato:/etc/cinder# grep -nsir cinder  /etc/tgt/*
> /etc/tgt/conf.d/cinder_tgt.conf:1:include /var/lib/cinder/volumes/*
> I have a few cinder related questions:
> - Which iSCSI target is officially supported / tested in Folsom release -
> IET (http://sourceforge.net/projects/iscsitarget/files/)  or tgt (
> http://sourceforge.net/projects/iscsitarget/files/).  If both are
> supported, what is the appropriate value for  "volume_driver" and
> "iscsi_helper" in cinder.conf  for either of them.  Any docs explaining
> this?
> - Will Cinder create an internal ID (representation) for volume_group -
> "cinder-volumes"?
> - What cinder cli command can I issue to get info on the volume_group -
> "cinder-volumes"?
> - Is there a way I can add an additional volume-group, e.g
> "cinder-volumes2"?
> I read through the doc at
> http://docs.openstack.org/folsom/openstack-compute/admin/content/ch_volumes.html,
> but did not find answers to the above.
> Thank you very much in advance.
> Regards,
> Ahmed.
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
