[Openstack] ceph and openstack

Martin Wilderoth martin.wilderoth at linserv.se
Tue Mar 8 08:29:18 UTC 2016


It crashes right after loading the RBD driver. Maybe my setup is incorrect.
Thanks

Setup and error:

rados pools

data
metadata
rbd
images
volumes
backups
vms


cinder-volume log

2016-03-08 07:39:38.713 7856 INFO cinder.service [-] Starting cinder-volume
node (version 7.0.1)
2016-03-08 07:39:38.715 7856 INFO cinder.volume.manager
[req-6e635315-b503-4e04-874e-53be817244ee - - - - -] Starting volume driver
RBDDriver (1.2.0)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup
[req-6bc8b049-aafe-4302-8fe1-457dce30ed0f - - - - -] 'max_avail'
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup Traceback (most
recent call last):
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/oslo_service/threadgroup.py", line 154,
in wait
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     x.wait()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/oslo_service/threadgroup.py", line 51, in
wait
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return
self.thread.wait()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in
wait
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return
self._exit_event.wait()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return
hubs.get_hub().switch()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return
self.greenlet.switch()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in
main
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     result =
function(*args, **kwargs)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 645, in
run_service
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup
service.start()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/cinder/service.py", line 146, in start
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup
self.manager.init_host()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in
wrapper
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return
f(*args, **kwargs)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 378, in
init_host
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup
self.driver.init_capabilities()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in
wrapper
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return
f(*args, **kwargs)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 662, in
init_capabilities
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     stats =
self.get_volume_stats(True)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in
wrapper
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return
f(*args, **kwargs)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 420,
in get_volume_stats
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup
self._update_volume_stats()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File
"/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 405,
in _update_volume_stats
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup
pool_stats['max_avail'] // units.Gi)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup KeyError:
'max_avail'
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup
2016-03-08 07:39:38.781 5012 INFO oslo_service.service
[req-6bc8b049-aafe-4302-8fe1-457dce30ed0f - - - - -] Child 7856 exited with
status 0

I turned it off; it was looping, forking too fast...
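The KeyError suggests the per-pool stats the driver reads from `ceph df` have no `max_avail` field, which would fit an old (Dumpling-era) cluster. A minimal sketch of the failure and a defensive lookup, using a hypothetical payload shaped like old `ceph df --format json` output (the payload itself is an assumption, not captured from the cluster):

```python
import json

# Hypothetical `ceph df --format json` output from an old cluster:
# the per-pool stats carry no 'max_avail' key.
df_output = json.loads("""
{"pools": [
    {"name": "volumes",
     "stats": {"kb_used": 0, "bytes_used": 0, "objects": 0}}
]}
""")

stats = next(p for p in df_output["pools"] if p["name"] == "volumes")["stats"]

# What the driver effectively does in _update_volume_stats -- this is the
# lookup that blows up with KeyError: 'max_avail' on old clusters:
try:
    free_capacity_gb = stats["max_avail"] // 1024 ** 3
except KeyError:
    free_capacity_gb = None  # field absent: older clusters don't report it

# A defensive lookup would avoid the crash (at the cost of reporting 0):
free_capacity_gb = stats.get("max_avail", 0) // 1024 ** 3
print(free_capacity_gb)  # 0 for this payload
```

Upgrading the cluster past Dumpling (or running a Cinder build that tolerates the missing key) are the two obvious ways out.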

cinder.conf

[DEFAULT]
verbose=True
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
my_ip = 10.0.5.11
glance_host = controller
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = XXXXXXXXXXXXXXXXXXXXXXXXX
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2

[database]
connection = mysql+pymysql://cinder:XXXXXXXXXXXXXXXXXX@controller/cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = XXXXXXXXXXXXXXXXXX

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = XXXXXXXXXXXXXXXXXX


On 8 March 2016 at 09:10, Geo Varghese <gvarghese at aqorn.com> wrote:

> Which error is it showing?
>
> On Tue, Mar 8, 2016 at 11:37 AM, Martin Wilderoth <
> martin.wilderoth at linserv.se> wrote:
>
>> Thanks, both.
>>
>> I will run it on controller node.
>>
>> My cinder-volume crashed.
>> Are there any dependencies, or is my Ceph cluster too old?
>> (I'm running Dumpling.) I will investigate.
>>
>> Thanks
>>
>>
>> On 8 March 2016 at 06:32, Mike Smith <mismith at overstock.com> wrote:
>>
>>> If you are using Ceph as a Cinder backend, you would likely want to run
>>> cinder-volume on your controller node(s).   You could run it anywhere I
>>> suppose, including on the Ceph nodes themselves, but I’d recommend having
>>> it on the controllers.  Wherever you run it, you’d need a properly
>>> configured ceph.conf, and if you are using cephx authentication, you’d need
>>> the keyring files.  Your compute nodes would need that conf and keys also.
>>>
>>> You can also run Ceph for nova ephemeral disks without Cinder at all.
>>> You’d do that in nova.conf.
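What Mike describes would live in nova.conf, roughly like this (a sketch only, not taken from a working setup; the `vms` pool matches the pool list above, and the user/uuid values are placeholders mirroring the cinder.conf):

```ini
[libvirt]
# Put ephemeral disks directly in Ceph instead of on local disk
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = XXXXXXXXXXXXXXXXXXXXXXXXX
```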
>>>
>>> We use both at Overstock.  Ceph for nova ephemeral for general use, and
>>> also Ceph as one option in a multi-backend Cinder configuration.  We also
>>> use it for a Glance store, which is a fantastic option because it makes
>>> disk provisioning for Nova instant, since you’re essentially snapshotting
>>> an image RBD into an RBD for Nova/Cinder.
>>>
>>> Mike Smith
>>> Lead Cloud Systems Architect
>>> Overstock.com
>>>
>>>
>>>
>>> On Mar 7, 2016, at 10:04 PM, Martin Wilderoth <
>>> martin.wilderoth at linserv.se> wrote:
>>>
>>>
>>> Hello
>>>
>>> Where should I run cinder-volume when I use Ceph?
>>> On the controller?
>>> On the Ceph mon or mds?
>>> Or somewhere else?
>>>
>>> Maybe it doesn't matter?
>>>
>>> Thanks in advance
>>>
>>> Regards Martin
>>> _______________________________________________
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to     : openstack at lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>>
>>>
>>
>>
>>
>>
>
>
> --
> --
> Regards,
> Geo Varghese
>

