[Openstack] Cinder question - deploying cinder-volume in 2nd server
goncalo
goncalo at lip.pt
Sat Mar 22 11:24:53 UTC 2014
Dear All...
I'm facing some problems while deploying a 2nd cinder-volume on a
server different from the one where cinder-scheduler is deployed (if
this is not the right forum to ask, please redirect me to the proper
support mailing list, TIA).
Volumes are created perfectly fine by the 1st cinder-volume instance
(where cinder-scheduler and cinder-api are also running), but on my
2nd cinder-volume the volumes stay in the 'creating' state forever.
The setup is as follows:
- I'm using Havana
- cloud03 is the server where cinder-scheduler, cinder-api and
cinder-volume are installed
- cloud04 is the server where only cinder-volume will be installed
- cloud03 and cloud04 are configured so that all connections are
accepted in the machines' firewalls (see the note right after this
list)
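For the record, both hosts effectively run with an accept-everything
policy, something like the commands below, so qpid (5672/tcp), MySQL
(3306/tcp) and iSCSI (3260/tcp) traffic between them is not filtered:
# iptables -P INPUT ACCEPT
# iptables -F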
Let me guide you through my configuration steps:
1) I've created a PV and a VG on cloud04:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/loop0
  VG Name               cinder-volumes
  PV Size               10.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               2559
  Allocated PE          0
  PV UUID               RqQa2d-Fw3u-EdR4-Yr6Y-oZVv-fHVT-IkKxLT
# vgdisplay
  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               10.00 GiB
  PE Size               4.00 MiB
  Total PE              2559
  Alloc PE / Size       0 / 0
  Free PE / Size        2559 / 10.00 GiB
  VG UUID               Uk5grr-qIWs-uQT2-1gRS-fRIa-4bd7-Ik66or
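(For reference, the VG sits on a loopback device, hence /dev/loop0
above. I created it roughly like this; the backing file path is an
example, not necessarily the exact one I used:
# dd if=/dev/zero of=/var/lib/cinder/cinder-volumes.img bs=1M count=10240
# losetup /dev/loop0 /var/lib/cinder/cinder-volumes.img
# pvcreate /dev/loop0
# vgcreate cinder-volumes /dev/loop0
)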
2) I've installed and configured cinder-volume on cloud04. The
relevant settings are:
# cat /etc/cinder/cinder.conf | grep -v "#"
[DEFAULT]
api_paste_config=api-paste.ini
state_path=/var/lib/cinder
rootwrap_config=/etc/cinder/rootwrap.conf
auth_strategy=keystone
volume_name_template=volume-%s
verbose=true
use_stderr=true
rpc_backend=cinder.openstack.common.rpc.impl_qpid
qpid_hostname=<cloud03 ip>
iscsi_ip_address=<cloud04 ip>
volume_clear=none
volume_group=cinder-volumes
sql_connection=mysql://cinder:xxxxxxxxxxxxxx@<cloud03 ip>/cinder
qpid_reconnect_timeout=0
qpid_reconnect_limit=0
qpid_reconnect=True
qpid_reconnect_interval_max=0
qpid_reconnect_interval_min=0
sql_idle_timeout=3600
qpid_reconnect_interval=0
notification_driver=cinder.openstack.common.notifier.rpc_notifier
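(Side note: since qpid_hostname and sql_connection both point at
cloud03, I also made sure the broker and the database are reachable
from cloud04 with generic tools, along these lines:
# nc -zv <cloud03 ip> 5672
# mysql -h <cloud03 ip> -u cinder -p cinder
Nothing cinder-specific, just to confirm the ports are open.)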
# grep include /etc/tgt/targets.conf
include /etc/cinder/volumes/*
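For completeness, restarting the services on cloud04 boils down to the
following (service names as on my install; adjust per distribution):
# service openstack-cinder-volume restart
# service tgtd restart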
3) After restarting cinder-volume and tgtd, cinder seems to
acknowledge the new node:
# cinder-manage host list
host                zone
cloud03.my.domain   nova
cloud04.my.domain   nova
# cinder-manage service list
Binary             Host      Zone   Status    State   Updated At
cinder-volume      cloud03   nova   enabled   :-)     2014-03-22 10:41:12
cinder-scheduler   cloud03   nova   enabled   :-)     2014-03-22 10:41:05
cinder-backup      cloud03   nova   enabled   :-)     2014-03-22 10:41:05
cinder-volume      cloud04   nova   enabled   :-)     2014-03-22 10:41:03
4) However, when I try to create a new volume (in a situation where
all volume space on the working cloud03 machine is already used, so
the scheduler has to pick cloud04), it never leaves the 'creating'
status:
# cinder create 6
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-03-22T11:17:50.918263 |
| display_description | None |
| display_name | None |
| id | 7ab9f0c2-63d6-499f-be18-c6b5984fece2 |
| metadata | {} |
| size | 6 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
[root@cloud04 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 5eb182b3-fa19-42da-bb08-df3c54df3c2a | available |      15      |  15  |     None    |  false   |             |
| 7ab9f0c2-63d6-499f-be18-c6b5984fece2 |  creating |     None     |  6   |     None    |  false   |             |
| a2423edb-3147-4082-98ae-23dde68fa19a | available |      1       |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@cloud04 ~(keystone_admin)]# cinder show 7ab9f0c2-63d6-499f-be18-c6b5984fece2
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|           created_at           |      2014-03-22T11:17:50.000000      |
|      display_description       |                 None                 |
|          display_name          |                 None                 |
|               id               | 7ab9f0c2-63d6-499f-be18-c6b5984fece2 |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |        cloud04.ncg.ingrid.pt         |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   5fa11a8e9a9b40ea887f6d31d299ba7c   |
|              size              |                  6                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+
Note that os-vol-host-attr:host above is already set to
cloud04.ncg.ingrid.pt, so the scheduler did route the request to the
new node. Looking at the volume log on cloud04, I see the periodic
messages announcing that the cinder-volumes VG is available for the
creation of new volumes:
2014-03-22 11:17:48.502 13634 DEBUG qpid.messaging [-] RACK[21c9290]:
Message({'oslo.message': '{"_context_roles": ["admin"],
"_context_request_id": "req-4641459f-92a2-4bbf-bd6b-206f60fc80ca",
"_context_quota_class": null, "_context_service_catalog": [],
"_context_tenant": null, "args": {"service_name": "volume", "host":
"cloud04.ncg.ingrid.pt", "capabilities": {"QoS_support": false,
"location_info":
"LVMVolumeDriver:cloud04.ncg.ingrid.pt:cinder-volumes:default:0",
"volume_backend_name": "LVM_iSCSI", "free_capacity_gb": 10.0,
"driver_version": "2.0.0", "total_capacity_gb": 10.0,
"reserved_percentage": 0, "vendor_name": "Open Source",
"storage_protocol": "iSCSI"}}, "_unique_id":
"2ae40e9c8ab74a33bf3287d219167707", "_context_timestamp":
"2014-03-22T11:17:48.492982", "_context_user_id": null,
"_context_project_name": null, "_context_read_deleted": "no",
"_context_auth_token": null, "namespace": null, "_context_is_admin":
true, "version": "1.0", "_context_project_id": null, "_context_user":
null, "method": "update_service_capabilities",
"_context_remote_address": null}', 'oslo.version': '2.0'}) msg_acked
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py:1269
but I do not see any message regarding the specific volume creation
request I just issued.
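(In case it helps, this is how I am searching for it, using the new
volume's id from the output above:
# grep 7ab9f0c2 /var/log/cinder/volume.log
and it returns nothing.)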
So, any help is appreciated at this point.
TIA
Cheers
Goncalo