[Openstack] Unable to start radosgw
Vivek Varghese Cherian
vivekcherian at gmail.com
Tue Dec 9 18:36:59 UTC 2014
Hi,
I am trying to integrate OpenStack Juno Keystone with the Ceph Object
Gateway (radosgw).
I want to use Keystone as the user authority: a user that Keystone
authorizes to access the gateway will also be created on the radosgw, and
tokens that Keystone validates will be considered valid by the rados
gateway.
I am using the URL http://ceph.com/docs/master/radosgw/keystone/ as my
reference.
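For context, that guide also has Keystone itself point at the gateway as an
object-store endpoint. My understanding is that this step looks roughly like
the following (a sketch using the Juno-era keystone CLI; the gateway URL,
path and region below are placeholders, not my actual values):

# Register the object-store service and an endpoint that points at radosgw
keystone service-create --name swift --type object-store \
    --description "Ceph Object Gateway"
keystone endpoint-create \
    --service-id $(keystone service-list | awk '/ object-store / {print $2}') \
    --publicurl http://ppm-c240-ceph3.xyz.com/swift/v1 \
    --internalurl http://ppm-c240-ceph3.xyz.com/swift/v1 \
    --adminurl http://ppm-c240-ceph3.xyz.com/swift/v1 \
    --region RegionOne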
I have deployed a 4-node Ceph cluster running on Ubuntu 14.04:
Host1: ppm-c240-admin.xyz.com (10.x.x.123)
Host2: ppm-c240-ceph1.xyz.com (10.x.x.124)
Host3: ppm-c240-ceph2.xyz.com (10.x.x.125)
Host4: ppm-c240-ceph3.xyz.com (10.x.x.126)
ppm-c240-ceph3.xyz.com is the radosgw host. The radosgw service has
stopped working and I am unable to start it using /etc/init.d/radosgw start.
My /etc/ceph/ceph.conf on all 4 nodes is as follows:
root at ppm-c240-ceph3:~# cat /etc/ceph/ceph.conf
[global]
fsid = df18a088-2a70-43f9-b07f-ce8cf7c3349c
mon_initial_members = ppm-c240-admin, ppm-c240-ceph1, ppm-c240-ceph2
mon_host = 10.x.x.123,10.x.x.124,10.x.x.125
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = 10.x.x.0/24
cluster_network = 192.168.0.0/24
osd_pool_default_pg_num = 512
osd_pool_default_pgp_num = 512
debug rgw = 20
[osd]
osd_journal_size = 10000
[osd.0]
osd_host = ppm-c240-admin
public_addr = 10.x.x.123
cluster_addr = 192.168.0.10
[osd.1]
osd_host = ppm-c240-admin
public_addr = 10.x.x.123
cluster_addr = 192.168.0.10
[osd.2]
osd_host = ppm-c240-admin
public_addr = 10.x.x.123
cluster_addr = 192.168.0.10
[osd.3]
osd_host = ppm-c240-ceph1
public_addr = 10.x.x.124
cluster_addr = 192.168.0.11
[osd.4]
osd_host = ppm-c240-ceph1
public_addr = 10.x.x.124
cluster_addr = 192.168.0.11
[osd.5]
osd_host = ppm-c240-ceph1
public_addr = 10.x.x.124
cluster_addr = 192.168.0.11
[osd.6]
osd_host = ppm-c240-ceph2
public_addr = 10.x.x.125
cluster_addr = 192.168.0.12
[osd.7]
osd_host = ppm-c240-ceph2
public_addr = 10.x.x.125
cluster_addr = 192.168.0.12
[osd.8]
osd_host = ppm-c240-ceph2
public_addr = 10.x.x.125
cluster_addr = 192.168.0.12
[osd.9]
osd_host = ppm-c240-ceph3
public_addr = 10.x.x.126
cluster_addr = 192.168.0.13
[osd.10]
osd_host = ppm-c240-ceph3
public_addr = 10.x.x.126
cluster_addr = 192.168.0.13
[osd.11]
osd_host = ppm-c240-ceph3
public_addr = 10.x.x.126
cluster_addr = 192.168.0.13
[client.radosgw.gateway]
host = ppm-c240-ceph3
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw keystone url = 10.x.x.175:35357
rgw keystone admin token = xyz123
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 10000
rgw keystone revocation interval = 15 * 60
rgw s3 auth use keystone = true
nss db path = /var/lib/nssdb
root at ppm-c240-ceph3:~#
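For completeness, the guide expects the directory named in nss db path to be
an NSS certificate database populated from Keystone's certificates, roughly
along these lines (a sketch; the /etc/keystone/ssl/... paths are the Juno PKI
defaults and are an assumption, they may differ on my deployment):

# Create the NSS db directory referenced by nss db path and import
# Keystone's CA and signing certificates into it
# (certutil comes from the libnss3-tools package on Ubuntu 14.04)
mkdir -p /var/lib/nssdb
openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | \
    certutil -d /var/lib/nssdb -A -n ca -t "TCu,Cu,Tuw"
openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | \
    certutil -d /var/lib/nssdb -A -n signing_cert -t "P,P,P"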
I am including the crash dump for reference:
root at ppm-c240-ceph3:~# /usr/bin/radosgw -n client.radosgw.gateway -d --log-to-stderr
2014-12-09 12:51:31.410944 7f073f6457c0 0 ceph version 0.80.7
(6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 5958
common/ceph_crypto.cc: In function 'void ceph::crypto::init(CephContext*)'
thread 7f073f6457c0 time 2014-12-09 12:51:31.412682
common/ceph_crypto.cc: 54: FAILED assert(s == SECSuccess)
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
1: (()+0x293ce8) [0x7f073e797ce8]
2: (common_init_finish(CephContext*, int)+0x10) [0x7f073e76afa0]
3: (main()+0x340) [0x4665a0]
4: (__libc_start_main()+0xf5) [0x7f073c932ec5]
5: /usr/bin/radosgw() [0x4695c7]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.
2014-12-09 12:51:31.413544 7f073f6457c0 -1 common/ceph_crypto.cc: In
function 'void ceph::crypto::init(CephContext*)' thread 7f073f6457c0 time
2014-12-09 12:51:31.412682
common/ceph_crypto.cc: 54: FAILED assert(s == SECSuccess)
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
1: (()+0x293ce8) [0x7f073e797ce8]
2: (common_init_finish(CephContext*, int)+0x10) [0x7f073e76afa0]
3: (main()+0x340) [0x4665a0]
4: (__libc_start_main()+0xf5) [0x7f073c932ec5]
5: /usr/bin/radosgw() [0x4695c7]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.
--- begin dump of recent events ---
-13> 2014-12-09 12:51:31.407900 7f073f6457c0 5 asok(0xaf1180)
register_command perfcounters_dump hook 0xaf2c10
-12> 2014-12-09 12:51:31.407944 7f073f6457c0 5 asok(0xaf1180)
register_command 1 hook 0xaf2c10
-11> 2014-12-09 12:51:31.407953 7f073f6457c0 5 asok(0xaf1180)
register_command perf dump hook 0xaf2c10
-10> 2014-12-09 12:51:31.407961 7f073f6457c0 5 asok(0xaf1180)
register_command perfcounters_schema hook 0xaf2c10
-9> 2014-12-09 12:51:31.407992 7f073f6457c0 5 asok(0xaf1180)
register_command 2 hook 0xaf2c10
-8> 2014-12-09 12:51:31.407995 7f073f6457c0 5 asok(0xaf1180)
register_command perf schema hook 0xaf2c10
-7> 2014-12-09 12:51:31.407997 7f073f6457c0 5 asok(0xaf1180)
register_command config show hook 0xaf2c10
-6> 2014-12-09 12:51:31.408000 7f073f6457c0 5 asok(0xaf1180)
register_command config set hook 0xaf2c10
-5> 2014-12-09 12:51:31.408006 7f073f6457c0 5 asok(0xaf1180)
register_command config get hook 0xaf2c10
-4> 2014-12-09 12:51:31.408008 7f073f6457c0 5 asok(0xaf1180)
register_command log flush hook 0xaf2c10
-3> 2014-12-09 12:51:31.408011 7f073f6457c0 5 asok(0xaf1180)
register_command log dump hook 0xaf2c10
-2> 2014-12-09 12:51:31.408014 7f073f6457c0 5 asok(0xaf1180)
register_command log reopen hook 0xaf2c10
-1> 2014-12-09 12:51:31.410944 7f073f6457c0 0 ceph version 0.80.7
(6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 5958
0> 2014-12-09 12:51:31.413544 7f073f6457c0 -1 common/ceph_crypto.cc:
In function 'void ceph::crypto::init(CephContext*)' thread 7f073f6457c0
time 2014-12-09 12:51:31.412682
common/ceph_crypto.cc: 54: FAILED assert(s == SECSuccess)
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
1: (()+0x293ce8) [0x7f073e797ce8]
2: (common_init_finish(CephContext*, int)+0x10) [0x7f073e76afa0]
3: (main()+0x340) [0x4665a0]
4: (__libc_start_main()+0xf5) [0x7f073c932ec5]
5: /usr/bin/radosgw() [0x4695c7]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.
--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
0/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
20/20 rgw
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
-2/-2 (syslog threshold)
99/99 (stderr threshold)
max_recent 10000
max_new 1000
log_file
--- end dump of recent events ---
terminate called after throwing an instance of 'ceph::FailedAssertion'
*** Caught signal (Aborted) **
in thread 7f073f6457c0
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
1: /usr/bin/radosgw() [0x5cb5cf]
2: (()+0x10340) [0x7f073d841340]
3: (gsignal()+0x39) [0x7f073c947f79]
4: (abort()+0x148) [0x7f073c94b388]
5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7f073d2536b5]
6: (()+0x5e836) [0x7f073d251836]
7: (()+0x5e863) [0x7f073d251863]
8: (()+0x5eaa2) [0x7f073d251aa2]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x1f2) [0x7f073e7575b2]
10: (()+0x293ce8) [0x7f073e797ce8]
11: (common_init_finish(CephContext*, int)+0x10) [0x7f073e76afa0]
12: (main()+0x340) [0x4665a0]
13: (__libc_start_main()+0xf5) [0x7f073c932ec5]
14: /usr/bin/radosgw() [0x4695c7]
2014-12-09 12:51:31.415630 7f073f6457c0 -1 *** Caught signal (Aborted) **
in thread 7f073f6457c0
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
1: /usr/bin/radosgw() [0x5cb5cf]
2: (()+0x10340) [0x7f073d841340]
3: (gsignal()+0x39) [0x7f073c947f79]
4: (abort()+0x148) [0x7f073c94b388]
5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7f073d2536b5]
6: (()+0x5e836) [0x7f073d251836]
7: (()+0x5e863) [0x7f073d251863]
8: (()+0x5eaa2) [0x7f073d251aa2]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x1f2) [0x7f073e7575b2]
10: (()+0x293ce8) [0x7f073e797ce8]
11: (common_init_finish(CephContext*, int)+0x10) [0x7f073e76afa0]
12: (main()+0x340) [0x4665a0]
13: (__libc_start_main()+0xf5) [0x7f073c932ec5]
14: /usr/bin/radosgw() [0x4695c7]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.
--- begin dump of recent events ---
0> 2014-12-09 12:51:31.415630 7f073f6457c0 -1 *** Caught signal
(Aborted) **
in thread 7f073f6457c0
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
1: /usr/bin/radosgw() [0x5cb5cf]
2: (()+0x10340) [0x7f073d841340]
3: (gsignal()+0x39) [0x7f073c947f79]
4: (abort()+0x148) [0x7f073c94b388]
5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7f073d2536b5]
6: (()+0x5e836) [0x7f073d251836]
7: (()+0x5e863) [0x7f073d251863]
8: (()+0x5eaa2) [0x7f073d251aa2]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x1f2) [0x7f073e7575b2]
10: (()+0x293ce8) [0x7f073e797ce8]
11: (common_init_finish(CephContext*, int)+0x10) [0x7f073e76afa0]
12: (main()+0x340) [0x4665a0]
13: (__libc_start_main()+0xf5) [0x7f073c932ec5]
14: /usr/bin/radosgw() [0x4695c7]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.
--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
0/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
20/20 rgw
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
-2/-2 (syslog threshold)
99/99 (stderr threshold)
max_recent 10000
max_new 1000
log_file
--- end dump of recent events ---
Aborted (core dumped)
root at ppm-c240-ceph3:~#
Any pointers as to why this is happening are highly appreciated.
Regards,
--
Vivek Varghese Cherian