[Openstack] [CINDER]Cinder Volume Creation

Byron Briggs byron.briggs at ait.com
Wed Dec 4 21:55:03 UTC 2013


Nothing in the log, really. From restarting the service to the error, all I get is:

 

2013-12-04 16:51:40    AUDIT [cinder.service] SIGTERM received
2013-12-04 16:51:40    AUDIT [cinder.service] Starting cinder-scheduler node (version 2013.1.4)
2013-12-04 16:51:40     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on control01:5672
2013-12-04 16:51:40     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on control01:5672
2013-12-04 16:52:30  WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-12-04 16:52:30    ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found.


That is all I have; no stack trace or anything.
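For context, the "service is down or disabled" warning is the scheduler's heartbeat check failing: it only considers a cinder-volume service usable if its updated_at heartbeat in the database is fresh. A rough sketch of that test (illustrative names, assuming the default service_down_time of 60 seconds; not the actual cinder code):

```python
from datetime import datetime, timedelta

# Assumed default for cinder's service_down_time option (seconds).
SERVICE_DOWN_TIME = timedelta(seconds=60)

def service_is_up(updated_at, now, disabled=False):
    """A service counts as usable only if it is enabled and its
    database heartbeat (updated_at) is recent."""
    return not disabled and (now - updated_at) <= SERVICE_DOWN_TIME

now = datetime(2013, 12, 4, 16, 52, 30)
print(service_is_up(datetime(2013, 12, 4, 16, 52, 0), now))  # heartbeat 30s old
print(service_is_up(datetime(2013, 12, 4, 16, 45, 0), now))  # heartbeat stale
```

A stale heartbeat usually means the cinder-volume process is not reaching the database or AMQP, or its clock disagrees with the controllers'.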

 

Same exact thing on all three controllers.

 

Byron

 

From: John Griffith [mailto:john.griffith at solidfire.com] 
Sent: Wednesday, December 04, 2013 4:41 PM
To: Byron Briggs
Cc: openstack at lists.openstack.org; SYSADMIN
Subject: Re: [Openstack] [CINDER]Cinder Volume Creation

On Wed, Dec 4, 2013 at 2:26 PM, Byron Briggs <byron.briggs at ait.com> wrote:

root at Compute01-CodeNode:/etc/cinder# vgs
File descriptor 3 (/usr/share/bash-completion/completions) leaked on vgs invocation. Parent PID 10778: -bash
  VG             #PV #LV #SN Attr   VSize   VFree
  cinder-volumes   1   0   0 wz--n- 100.00g 100.00g

My control nodes don't, but from my understanding that shouldn't matter.

root at control01:/etc/cinder# vgs
  VG           #PV #LV #SN Attr   VSize   VFree
  control01-vg   1   2   0 wz--n- 931.27g 44.00m

 

From: John Griffith [mailto:john.griffith at solidfire.com]
Sent: Wednesday, December 04, 2013 3:20 PM
To: Byron Briggs
Cc: openstack at lists.openstack.org; SYSADMIN
Subject: Re: [Openstack] [CINDER]Cinder Volume Creation

On Wed, Dec 4, 2013 at 12:43 PM, Byron Briggs <byron.briggs at ait.com> wrote:

When running

 

cinder create --display_name test 10

 

from the compute01NovaCompute.dmz-pod2 node listed below. It is also running
xenapi for a XenServer.

(control01, 2, 3 are an HA cluster sitting behind haproxy/keepalived, running
all the communication and schedulers; all of that is working well.)

I get this error in my cinder-volume.log on the control nodes (since they run
the schedulers). There is nothing else to go on other than "ERROR" on the
volume status.

 

2013-12-04 12:55:32  WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-12-04 12:55:32    ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found.

In case you can't see the Excel table below:

192.168.220.40 is the proxy distributing requests across the three control nodes.

 

control01.dmz-pod2 (192.168.220.41) -> cinder-api, cinder-scheduler
control02.dmz-pod2 (192.168.220.42) -> cinder-api, cinder-scheduler
control03.dmz-pod2 (192.168.220.43) -> cinder-api, cinder-scheduler
compute01NovaCompute.dmz-pod2 (192.168.220.101) -> cinder-volume


Server Name                    cinder-api  cinder-scheduler  cinder-volume
control01.dmz-pod2             YES         YES               NO
control02.dmz-pod2             YES         YES               NO
control03.dmz-pod2             YES         YES               NO
compute01NovaCompute.dmz-pod2  NO          NO                NO

 

All services are running with no log errors.

 

Here are my configs:

compute01NovaCompute.dmz-pod2

/etc/cinder/api-paste.ini

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = %SERVICE_TENANT_NAME%
service_port = 5000
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
signing_dir = /var/lib/cinder

 

/etc/cinder/cinder.conf

[DEFAULT]
iscsi_ip_address=192.168.220.101
rabbit_ha_queues=True
rabbit_hosts=control01:5672,control02:5672,control03:5672
rabbit_userid=openstack_rabbit_user
rabbit_password=openstack_rabbit_password
sql_connection = mysql://cinder:cinder_pass@192.168.220.40/cinder
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes

 

pvscan

File descriptor 3 (/usr/share/bash-completion/completions) leaked on pvscan invocation. Parent PID 10778: -bash
  PV /dev/xvdb   VG cinder-volumes   lvm2 [100.00 GiB / 100.00 GiB free]
  Total: 1 [100.00 GiB] / in use: 1 [100.00 GiB] / in no VG: 0 [0   ]

control01,2,3

/etc/cinder/api-paste.ini

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.168.220.40
service_port = 5000
auth_host = 192.168.220.40
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = cinder
admin_password = keystone_admin
signing_dir = /var/lib/cinder

 

/etc/cinder/cinder.conf

[DEFAULT]
sql_idle_timeout=30
rabbit_ha_queues=True
rabbit_hosts=control01:5672,control02:5672,control03:5672
rabbit_userid=openstack_rabbit_user
rabbit_password=openstack_rabbit_password
sql_connection = mysql://cinder:cinder_pass@192.168.220.40/cinder
osapi_volume_listen = 192.168.220.41
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = nova-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes

Grizzly Release

Any ideas on where to look more into the issue or something with my config?

 


_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

 

So this happens in a couple of situations; the most common is when the
configured backend driver isn't reporting enough space/capacity to allocate
the amount you're requesting. Try a "sudo vgs" and verify that you have
enough capacity on your backing store (nova-volumes) to deploy a 10G volume.
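The arithmetic behind that check can be sketched from the vgs line quoted earlier; this parses a copied report line rather than calling LVM, so the numbers are simply the ones the compute node reported above:

```python
# Parse the VFree column of a vgs report line and test whether it
# covers a 10 GiB request. The sample line is copied from the
# compute node's output earlier in the thread.
vgs_line = "  cinder-volumes   1   0   0 wz--n- 100.00g 100.00g"

fields = vgs_line.split()
vfree_gb = float(fields[6].rstrip("g"))  # last column is VFree

request_gb = 10
print(vfree_gb >= request_gb)  # True: 100 GiB free covers a 10 GiB volume
```

So in this thread the compute node's cinder-volumes VG is clearly big enough; capacity alone does not explain the scheduler failure.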

Oops, sorry, I didn't catch the separate controller node and only saw the
volume_group setting there. Any chance you could link a pastebin of the
cinder-scheduler logs?
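Putting the two failure modes discussed in this thread together (a stale cinder-volume heartbeat, or a backend without enough free space), the "No valid host was found" outcome can be approximated like this; the structure and names are hypothetical, not cinder's actual filter code:

```python
def valid_hosts(hosts, request_gb):
    """Keep only hosts whose volume service is up, enabled, and has
    enough reported free capacity for the request."""
    return [
        name for name, info in sorted(hosts.items())
        if info["service_up"]
        and not info["disabled"]
        and info["free_gb"] >= request_gb
    ]

# With the lone cinder-volume host's heartbeat stale, every host is
# filtered out, which surfaces as "No valid host was found."
hosts = {"compute01": {"service_up": False, "disabled": False, "free_gb": 100}}
print(valid_hosts(hosts, 10))  # []
```

Since the VG here has plenty of room, the "service is down or disabled" branch is the one worth chasing first.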

 
