Hello, I have a question about OpenStack. I'd appreciate it if you could take a quick look. A new volume created with the `cinder create` command stays stuck in the `creating` state. Where should I be looking for the problem? I haven't been able to solve it. If you know a fix or an approach, I'd be grateful for an answer. Below are the logs, configuration, and status I collected on the servers.

● [OpenStack setup]
: 3-node configuration (Controller node, Network node, Compute node — which is also the storage node)
: Ubuntu 14.04 with the Icehouse release installed on each node, running in VirtualBox

● (On Controller node)

# cinder create --display-name myVolume 1
# cinder list
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
|                  ID                  |  Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
| 4891a2e7-0cb2-4caf-9528-6612d8cf5f5a | creating |   myVolume   |  1   |     None    |  false   |             |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+

mysql> select status from volumes;
+----------+
| status   |
+----------+
| creating |
+----------+

mysql> select id from volumes;
+--------------------------------------+
| id                                   |
+--------------------------------------+
| 4891a2e7-0cb2-4caf-9528-6612d8cf5f5a |
+--------------------------------------+

# netstat -an | grep 8776
tcp        0      0 0.0.0.0:8776       0.0.0.0:*           LISTEN

# netstat -an | grep 5672
.....
tcp6       0      0 :::5672            :::*                LISTEN
tcp6       0      0 10.10.15.11:5672   10.10.15.31:33271   ESTABLISHED
.....
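Since every node has to reach the message broker and the API endpoint on the controller, one quick sanity check is whether a plain TCP connection to those ports succeeds from each node. A minimal sketch using only the Python standard library; the IPs and ports mirror the topology above and are assumptions to adjust for your environment:

```python
import socket

def can_connect(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical targets based on the setup described above.
    for host, port in [("10.10.15.11", 5672),    # RabbitMQ on the controller
                       ("10.10.15.11", 8776)]:   # cinder-api
        state = "reachable" if can_connect(host, port) else "unreachable"
        print(host, port, state)
```

Run it from the compute (storage) node as well as the controller; an `unreachable` result would point at firewalling or VirtualBox networking rather than Cinder itself.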
# service cinder-api status
cinder-api start/running, process 3733
# service cinder-scheduler status
cinder-scheduler start/running, process 3755

<Logs> (generated while the command was run)

<cinder-api.log>
2016-01-20 15:24:29.584 3739 INFO eventlet.wsgi.server [-] (3739) accepted ('10.10.15.11', 51845)
2016-01-20 15:24:29.588 3739 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.10.15.11
2016-01-20 15:24:29.874 3739 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.10.15.11
2016-01-20 15:24:30.029 3739 INFO cinder.api.openstack.wsgi [req-c7c065b5-3e4c-40c8-b011-24a1afbf6e50 5f8b76cf5986402fa6affdb4c9e2fc44 56ee36bd79724517bf115df7f3202f1d - - -] POST http://10.10.15.11:8776/v1/56ee36bd79724517bf115df7f3202f1d/volumes
2016-01-20 15:24:30.032 3739 AUDIT cinder.api.v1.volumes [req-c7c065b5-3e4c-40c8-b011-24a1afbf6e50 5f8b76cf5986402fa6affdb4c9e2fc44 56ee36bd79724517bf115df7f3202f1d - - -] Create volume of 1 GB
2016-01-20 15:24:30.212 3739 AUDIT cinder.api.v1.volumes [req-c7c065b5-3e4c-40c8-b011-24a1afbf6e50 5f8b76cf5986402fa6affdb4c9e2fc44 56ee36bd79724517bf115df7f3202f1d - - -] vol={'migration_status': None, 'availability_zone': 'nova', 'terminated_at': None, 'reservations': ['8339d8b8-70a5-4226-94a5-ad7c4f16d3f6', '6795bec3-0875-49cb-b538-c1e0f3b097c1'], 'updated_at': None, 'provider_geometry': None, 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': '4891a2e7-0cb2-4caf-9528-6612d8cf5f5a', 'size': 1, 'user_id': u'5f8b76cf5986402fa6affdb4c9e2fc44', 'attach_time': None, 'attached_host': None, 'display_description': None, 'volume_admin_metadata': [], 'encryption_key_id': None, 'project_id': u'56ee36bd79724517bf115df7f3202f1d', 'launched_at': None, 'scheduled_at': None, 'status': 'creating', 'volume_type_id': None, 'deleted': False, 'provider_location': None, 'host': None, 'source_volid': None, 'provider_auth': None, 'display_name': u'myVolume', 'instance_uuid': None, 'bootable': False, 'created_at': datetime.datetime(2016, 1, 20, 6, 24, 30, 108934), 'attach_status': 'detached', 'volume_type': None, '_name_id': None, 'volume_metadata': [], 'metadata': {}}
2016-01-20 15:24:30.217 3739 INFO cinder.api.openstack.wsgi [req-c7c065b5-3e4c-40c8-b011-24a1afbf6e50 5f8b76cf5986402fa6affdb4c9e2fc44 56ee36bd79724517bf115df7f3202f1d - - -] http://10.10.15.11:8776/v1/56ee36bd79724517bf115df7f3202f1d/volumes returned with HTTP 200
2016-01-20 15:24:30.226 3739 INFO eventlet.wsgi.server [req-c7c065b5-3e4c-40c8-b011-24a1afbf6e50 5f8b76cf5986402fa6affdb4c9e2fc44 56ee36bd79724517bf115df7f3202f1d - - -] 10.10.15.11 - - [20/Jan/2016 15:24:30] "POST /v1/56ee36bd79724517bf115df7f3202f1d/volumes HTTP/1.1" 200 602 0.639812

<cinder-scheduler.log>
(The same WARNING line keeps repeating; the "2016-01-20 15:24:30.205" entry below was logged exactly once, at the moment the command ran.)

2016-01-20 15:23:12.097 3755 WARNING cinder.context [-] Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'}
2016-01-20 15:24:12.105 3755 WARNING cinder.context [-] Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'}
2016-01-20 15:24:30.205 3755 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'5f8b76cf5986402fa6affdb4c9e2fc44', 'tenant': u'56ee36bd79724517bf115df7f3202f1d', 'user_identity': u'5f8b76cf5986402fa6affdb4c9e2fc44 56ee36bd79724517bf115df7f3202f1d - - -'}
2016-01-20 15:25:12.133 3755 WARNING cinder.context [-] Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'}

# cinder.conf <controller node>
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = 10.10.15.11
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = rabbitpass
control_exchange = cinder
notification_driver = cinder.openstack.common.notifier.rpc_notifier

[database]
connection = mysql://cinder:cinderdbpass@10.10.15.11/cinder

[keystone_authtoken]
auth_uri = http://10.10.15.11:5000
auth_host = 10.10.15.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = cinderpass

---------------------------------------------------------------------------------------------------------------

● (On Compute node — also the storage node)

# vgs
  VG             #PV #LV #SN Attr   VSize VFree
  cinder-volumes   2   0   0 wz--n- 1.99g 1.99g

# pvs
  PV         VG             Fmt  Attr PSize    PFree
  /dev/sdc   cinder-volumes lvm2 a--  1020.00m 1020.00m
  /dev/sdd   cinder-volumes lvm2 a--  1020.00m 1020.00m

# service cinder-volume status
cinder-volume start/running, process 1755
# service tgt status
tgt start/running, process 1783

# /etc/lvm/lvm.conf
...
filter = [ "a/sda1/", "a/sdc/", "a/sdd/", "r/.*/"]
....

# netstat -an | grep 5672
tcp        0      0 10.10.15.31:33263   10.10.15.11:5672    ESTABLISHED

# cinder-volume.log (only the lines below repeat, regardless of command execution)
2016-01-20 15:12:12.112 2422 INFO cinder.volume.manager [-] Updating volume status
2016-01-20 15:13:12.139 2422 INFO cinder.volume.manager [-] Updating volume status
2016-01-20 15:14:12.107 2422 INFO cinder.volume.manager [-] Updating volume status
2016-01-20 15:15:12.147 2422 INFO cinder.volume.manager [-] Updating volume status
2016-01-20 15:16:12.121 2422 INFO cinder.volume.manager [-] Updating volume status

# cinder.conf <compute node>
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
#
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = 10.10.15.11
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = rabbitpass
#
my_ip = 10.10.15.31
glance_host = 10.10.15.11
#
[keystone_authtoken]
auth_uri = http://10.10.15.11:5000
auth_host = 10.10.15.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = cinderpass
#
[database]
connection = mysql://cinder:cinderdbpass@10.10.15.11/cinder

# ls -al /var/lib/cinder/volumes
total 8
drwxr-xr-x 2 cinder cinder 4096 Sep  5 05:43 .
drwxr-xr-x 3 cinder cinder 4096 Jan 19 21:13 ..

# ls -al /var/lock/cinder
total 0
drwxr-xr-x 2 cinder root  40 Jan 19 21:52 .
drwxrwxrwt 7 root   root 140 Jan 19 21:53 ..

# cat /admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=adminpass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.10.15.11:35357/v2.0
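One thing worth checking when a request reaches cinder-scheduler but apparently never arrives at cinder-volume is whether the AMQP settings in the two cinder.conf files agree. A small, hypothetical helper to diff the relevant `[DEFAULT]` keys between the two files; the key list is my assumption of what matters for RPC routing, and the file contents are passed in as strings:

```python
import configparser

# AMQP/RPC keys that should normally agree between the controller's
# and the storage node's cinder.conf (assumed list, not exhaustive).
RPC_KEYS = ("rpc_backend", "rabbit_host", "rabbit_port",
            "rabbit_userid", "rabbit_password", "control_exchange")

def rpc_settings(conf_text):
    """Extract the AMQP-related keys from a cinder.conf's [DEFAULT] section."""
    cp = configparser.ConfigParser(strict=False)
    cp.read_string(conf_text)
    # Missing keys come back as None so they show up as a difference.
    return {k: cp["DEFAULT"].get(k) for k in RPC_KEYS}

def diff_settings(a, b):
    """Return the set of keys whose values differ (or are missing) between two configs."""
    return {k for k in RPC_KEYS if a[k] != b[k]}
```

For example, feeding in the two configs above would flag `control_exchange`, which is set on the controller but not on the storage node; whether that actually matters here I can't say for certain, but it is the kind of asymmetry this check surfaces.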
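Since cinder-volume.log shows only the periodic heartbeat, it can also help to confirm programmatically that no create request ever reached the storage node. A rough sketch; the matched substrings are assumptions drawn from the excerpt above and may need adjusting for your release's exact log messages:

```python
def summarize_volume_log(lines):
    """Count periodic status heartbeats vs. any create-related activity.

    Returns (heartbeat_count, list_of_create_related_lines). The substrings
    matched here are guesses based on the pasted log excerpt.
    """
    heartbeats = sum("Updating volume status" in ln for ln in lines)
    creates = [ln for ln in lines if "create_volume" in ln.lower()]
    return heartbeats, creates
```

If the create count stays at zero while heartbeats accumulate, the RPC message is being lost between scheduler and volume service, which points back at the AMQP configuration or connectivity rather than LVM/tgt on the storage node.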