[Openstack] [CINDER] Cinder Volume Creation

Byron Briggs byron.briggs at ait.com
Wed Dec 4 19:43:50 UTC 2013


When running

cinder create --display_name test 10

from the compute01NovaCompute.dmz-pod2 node listed below (it also runs xenapi
for a XenServer), I get the error below in my cinder-volume.log on the control
nodes (since they run the schedulers).

(control01, 02, and 03 are an HA cluster sitting behind haproxy/keepalived,
running all the communication and the schedulers - all of that is working
well.)

There is nothing else to go off of other than "ERROR" on the volume status.
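
For reference, this is how I am looking at the failed volume from the client
side (I believe "show" accepts either the display name or the UUID):

cinder list          # the new volume just sits in status "error"
cinder show test     # shows nothing beyond that error status, as noted above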

 

2013-12-04 12:55:32  WARNING [cinder.scheduler.host_manager] service is down
or disabled.

2013-12-04 12:55:32    ERROR [cinder.scheduler.manager] Failed to
schedule_create_volume: No valid host was found.  
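
That "service is down or disabled" warning makes me think the scheduler never
sees a live cinder-volume entry. A rough check I can run against the cinder
database (credentials taken from the sql_connection line in my configs below;
service_down_time should default to 60 seconds on Grizzly, if I'm reading it
right):

mysql -h 192.168.220.40 -u cinder -pcinder_pass cinder \
    -e 'SELECT host, `binary`, disabled, updated_at FROM services;'

# If the cinder-volume row is missing, the service never registered; if
# updated_at is stale or the node clocks disagree, the scheduler will report
# it as down.
date -u        # compare on the control nodes and on compute01NovaCompute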

In case you can't see the Excel table below:

192.168.220.40 is the proxy address, distributed across the three control
nodes.

 

control01.dmz-pod2 (192.168.220.41) -> cinder-api, cinder-scheduler
control02.dmz-pod2 (192.168.220.42) -> cinder-api, cinder-scheduler
control03.dmz-pod2 (192.168.220.43) -> cinder-api, cinder-scheduler
compute01NovaCompute.dmz-pod2 (192.168.220.101) -> cinder-volume


Server Name                      cinder-api   cinder-scheduler   cinder-volume
control01.dmz-pod2               YES          YES                NO
control02.dmz-pod2               YES          YES                NO
control03.dmz-pod2               YES          YES                NO
compute01NovaCompute.dmz-pod2    NO           NO                 NO

 

All services are running with no log errors.

 

Here are my configs:

compute01NovaCompute.dmz-pod2

/etc/cinder/api-paste.ini

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = %SERVICE_TENANT_NAME%
service_port = 5000
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
signing_dir = /var/lib/cinder

 

/etc/cinder/cinder.conf

[DEFAULT]
iscsi_ip_address=192.168.220.101
rabbit_ha_queues=True
rabbit_hosts=control01:5672,control02:5672,control03:5672
rabbit_userid=openstack_rabbit_user
rabbit_password=openstack_rabbit_password
sql_connection = mysql://cinder:cinder_pass@192.168.220.40/cinder
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes

 

pvscan
File descriptor 3 (/usr/share/bash-completion/completions) leaked on pvscan invocation. Parent PID 10778: -bash
  PV /dev/xvdb   VG cinder-volumes   lvm2 [100.00 GiB / 100.00 GiB free]
  Total: 1 [100.00 GiB] / in use: 1 [100.00 GiB] / in no VG: 0 [0   ]
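
On the volume node itself, these are the quick sanity checks I know of for the
LVM/tgt pieces referenced in the config above (a rough sketch; the tgt service
name and the log file path may differ by distro):

vgs cinder-volumes        # the VG named by volume_group in cinder.conf
service tgt status        # iscsi_helper = tgtadm needs the tgt daemon running
tail -n 50 /var/log/cinder/cinder-volume.log   # look for AMQP/connection errors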

control01,2,3

/etc/cinder/api-paste.ini

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.168.220.40
service_port = 5000
auth_host = 192.168.220.40
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = cinder
admin_password = keystone_admin
signing_dir = /var/lib/cinder

 

/etc/cinder/cinder.conf

[DEFAULT]
sql_idle_timeout=30
rabbit_ha_queues=True
rabbit_hosts=control01:5672,control02:5672,control03:5672
rabbit_userid=openstack_rabbit_user
rabbit_password=openstack_rabbit_password
sql_connection = mysql://cinder:cinder_pass@192.168.220.40/cinder
osapi_volume_listen = 192.168.220.41
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = nova-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
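
Since the scheduler only knows about cinder-volume through its periodic
reports, one more thing I can check from a control node is whether the volume
service ever connected to RabbitMQ at all (a sketch, assuming rabbitmq-server
runs locally on the control nodes as the rabbit_hosts list suggests):

rabbitmqctl list_queues name consumers | grep cinder

# The cinder-volume queues should show at least one consumer; zero consumers
# would point at the compute node never reaching the rabbit_hosts listed in
# its cinder.conf.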

This is the Grizzly release.

Any ideas on where to look further into this issue, or is there something
wrong with my config?

 
