[Openstack] cinder volume create error, oslo service killed by signal 11
yang sheng
forsaks.30 at gmail.com
Mon Jan 23 16:27:02 UTC 2017
Hi all,
Our testing environment (Liberty with Ceph) has been running for a while
and everything was working properly.
Yesterday a cinder volume creation error occurred.
volume.log shows the following:
2017-01-22 16:17:28.416 30027 INFO
cinder.volume.flows.manager.create_volume
[req-61267bd5-bfae-4f49-b21f-21eee2e49ea6 00b221e9dbac43c0b48b844e7ef1d835
42a0d7dfbe944f8b88964646ceccefa5 - - -] Volume
21cccf9f-53ab-47b4-a0f7-128a3a897c8d: being created as image with
specification: {'status': u'creating', 'image_location':
(u'rbd://cf59374d-9745-4274-a5c6-34fcea3203d7/eosimages/9e6cc09c-1ad5-4037-901a-4cf13135d2b8/snap',
None), 'volume_size': 20, 'volume_name':
u'volume-21cccf9f-53ab-47b4-a0f7-128a3a897c8d', 'image_id':
u'9e6cc09c-1ad5-4037-901a-4cf13135d2b8', 'image_service':
<cinder.image.glance.GlanceImageService object at 0x6694bd0>, 'image_meta':
{u'status': u'active', u'virtual_size': None, u'name': u'CentOS 6 64bit',
u'tags': [], u'container_format': u'bare', u'created_at':
datetime.datetime(2017, 1, 3, 23, 20, 30, tzinfo=<iso8601.Utc>),
u'disk_format': u'raw', u'updated_at': datetime.datetime(2017, 1, 9, 15,
19, 45, tzinfo=<iso8601.Utc>), u'visibility': u'public', 'properties': {},
u'owner': u'd0b10188264444cb95e10fbe75e47cb2', u'protected': True, u'id':
u'9e6cc09c-1ad5-4037-901a-4cf13135d2b8', u'file':
u'/v2/images/9e6cc09c-1ad5-4037-901a-4cf13135d2b8/file', u'checksum':
u'9a44adfc62adf520e63298dac0bda27f', u'min_disk': 0, u'direct_url':
u'rbd://cf59374d-9745-4274-a5c6-34fcea3203d7/eosimages/9e6cc09c-1ad5-4037-901a-4cf13135d2b8/snap',
u'min_ram': 0, u'size': 8589934592}}
2017-01-22 16:17:28.526 30000 INFO oslo_service.service
[req-f5ba453f-fcc3-4de4-959f-f62e06d9c425 - - - - -] Child 30027 killed by
signal 11
2017-01-22 16:17:28.529 30000 INFO oslo_service.service
[req-f5ba453f-fcc3-4de4-959f-f62e06d9c425 - - - - -] Started child 14043
2017-01-22 16:17:28.531 14043 INFO cinder.service [-] Starting
cinder-volume node (version 7.0.3)
2017-01-22 16:17:28.532 14043 INFO cinder.volume.manager
[req-ee1c544d-44a2-461a-9ed8-cd1b2a1611f6 - - - - -] Starting volume driver
RBDDriver (1.2.0)
2017-01-22 16:17:34.724 14043 WARNING cinder.volume.manager
[req-ee1c544d-44a2-461a-9ed8-cd1b2a1611f6 - - - - -] Detected volume stuck
in {'curr_status': u'creating'}(curr_status)s status, setting to ERROR.
It seems the cinder-volume worker process got killed (signal 11 is a
segmentation fault) and was restarted by its parent.
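The "Child 30027 killed by signal 11" / "Started child 14043" lines come from
the oslo.service process launcher noticing the dead worker and forking a
replacement. A minimal, illustrative sketch of that fork-and-respawn pattern
(not the actual oslo.service code; the worker here deliberately segfaults just
to show the behaviour):

import os
import signal
import time


def worker():
    # Stand-in for a cinder-volume worker; it crashes after a second
    # to simulate the segfault seen in the log above.
    time.sleep(1)
    os.kill(os.getpid(), signal.SIGSEGV)


def launch_child():
    pid = os.fork()
    if pid == 0:                       # child process
        worker()
        os._exit(0)
    print("Started child %d" % pid)    # parent logs the new worker
    return pid


launch_child()
for _ in range(3):
    pid, status = os.wait()            # block until any child exits
    if os.WIFSIGNALED(status):
        # Reported by oslo.service as "Child <pid> killed by signal <n>".
        print("Child %d killed by signal %d" % (pid, os.WTERMSIG(status)))
    launch_child()                     # respawn, "Started child <pid>"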
Is there any suggestion to avoid this error?
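For now we just clean up the volume that is left in ERROR and retry the
create. A rough sketch of that cleanup, assuming python-cinderclient v2; the
credentials, tenant and auth URL are placeholders for our environment, and the
reset_state/delete calls may differ slightly by release:

# Find volumes left in "error" after the crash and delete them so the
# create can be retried.
from cinderclient.v2 import client

# Placeholder credentials and endpoint -- adjust for your deployment.
cinder = client.Client('admin', 'ADMIN_PASS', 'admin',
                       'http://controller:5000/v2.0')

for vol in cinder.volumes.list(search_opts={'status': 'error'}):
    print('removing failed volume %s (%s)' % (vol.id, vol.name))
    # An admin could instead keep the volume and only reset its state:
    # cinder.volumes.reset_state(vol, 'available')
    cinder.volumes.delete(vol)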