[kolla][ceph] Cache OSD doesn't stay in root=cache after Ceph deployment.

Eddie Yen missile0407 at gmail.com
Mon Jul 1 11:23:19 UTC 2019


Hi,

I'm using stable/rocky to try Ceph cache tiering, and I'm running into an
issue.

I chose one SSD to be the cache tier disk and set the options below in
globals.yml (globals.yml is YAML, so the syntax is key: "value" rather
than key = "value"):

ceph_enable_cache: "yes"
ceph_target_max_bytes: "<size num>"
ceph_target_max_objects: "<object num>"
ceph_cache_mode: "writeback"

The OSD object store type is the default, bluestore.
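For reference, a minimal sketch of how that part of globals.yml could
look; the numbers are illustrative placeholders (5 GiB and one million
objects), not recommendations:

  ceph_enable_cache: "yes"
  # Flush/evict thresholds applied to the cache pools (example values)
  ceph_target_max_bytes: "5368709120"
  ceph_target_max_objects: "1000000"
  ceph_cache_mode: "writeback"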


Deployment bootstraps the cache disk and creates another OSD container.
It also creates a root bucket called "cache" and assigns the cache CRUSH
rule to every cache pool.
The problem is that the cache OSD doesn't stay in the "cache" bucket; it
remains in the "default" bucket.
Because of that, services can't access Ceph normally, especially when
deploying Gnocchi.
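The misplacement shows up in "ceph osd tree". A sketch of what I mean
(the IDs, host name, and weights here are made up for illustration): the
cache root is empty while the SSD OSD sits under root=default:

  ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
  -5       0       root cache
  -1       1.00000 root default
  -2       1.00000     host node1
   0   hdd 0.50000         osd.0      up  1.00000 1.00000
   1   ssd 0.50000         osd.1      up  1.00000 1.00000  <- cache OSD, wrong root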

When the error occurred, I manually moved that OSD into the cache bucket
and re-deployed (see the commands below), and everything is normal now.
But it's still strange that the OSD ends up in the wrong bucket.
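For the record, the manual fix was along these lines; osd.1, the weight,
and the host bucket name are placeholders taken from the illustration
above, so adjust them for your environment:

  # Place the cache OSD under root=cache; create-or-move creates the
  # target buckets if they don't exist yet.
  ceph osd crush create-or-move osd.1 0.50000 root=cache host=node1-cache
  # Confirm the OSD now sits under the cache root
  ceph osd tree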

Did I miss something during deployment, or is there anything else I can do?


Many thanks,
Eddie.