[kolla][ceph] Cache OSDs didn't stay in the root=cache after ceph deployment.
Eugen Block
eblock at nde.ag
Mon Jul 1 12:32:07 UTC 2019
Hi,
Although I'm not familiar with kolla, I can comment on the Ceph part.
> The problem is that the OSD didn't stay in the "cache" bucket; it still
> stays in the "default" bucket.
I'm not sure how the deployment process with kolla works and what
exactly is done here, but this might be caused by this option [1]:
osd crush update on start
Its default is "true". We ran into this some time ago and were
wondering why the OSDs were in the wrong bucket every time we restarted
services. As I said, I don't know exactly how this would affect you in a
kolla deployment, but you could set that config option to "false" and
see if the problem still happens.
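If you want to try it, a minimal sketch of what that could look like
(host and OSD names below are just placeholders, adjust to however kolla
templates the config):

  # ceph.conf on the OSD hosts
  [osd]
  osd crush update on start = false

  # or, on Mimic and later, via the config database:
  ceph config set osd osd_crush_update_on_start false

Alternatively, the location can be pinned per OSD so it survives
restarts even with the option left at "true":

  [osd.5]
  crush location = root=cache host=node1-cache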
Regards,
Eugen
[1] http://docs.ceph.com/docs/master/rados/operations/crush-map/
Quoting Eddie Yen <missile0407 at gmail.com>:
> Hi,
>
> I'm using stable/rocky to try Ceph cache tiering.
> Now I'm facing an issue.
>
> I chose one SSD to become the cache tier disk, and set the options below in
> globals.yml:
> ceph_enable_cache = "yes"
> ceph_target_max_byte = "<size num>"
> ceph_target_max_objects = "<object num>"
> ceph_cache_mode = "writeback"
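Just to map those variables to the Ceph side: a writeback cache tier with
those limits roughly corresponds to commands like the following (pool
names are placeholders; I don't know exactly what kolla runs under the
hood):

  ceph osd tier add <storage-pool> <cache-pool>
  ceph osd tier cache-mode <cache-pool> writeback
  ceph osd tier set-overlay <storage-pool> <cache-pool>
  ceph osd pool set <cache-pool> hit_set_type bloom
  ceph osd pool set <cache-pool> target_max_bytes <size num>
  ceph osd pool set <cache-pool> target_max_objects <object num>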
>
> And the default OSD type is bluestore.
>
>
> The deployment bootstraps the cache disk and creates another OSD container.
> It also creates a root bucket called "cache", then sets the cache rule on
> every cache pool.
> The problem is that the OSD didn't stay in the "cache" bucket; it still
> stays in the "default" bucket.
> That causes the services to fail to access Ceph normally, especially when
> deploying Gnocchi.
>
> When the error occurred, I manually moved that OSD into the cache bucket,
> then re-deployed, and everything is normal now.
> But it is still strange that it ends up in the wrong bucket.
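For reference, that kind of manual move can be done with the CRUSH
commands, e.g. (OSD id, weight and host bucket are placeholders for
whatever your deployment created):

  ceph osd crush create-or-move osd.5 0.5 root=cache host=node1-cache
  ceph osd tree    # verify the OSD now sits under root=cache

But as mentioned above, with "osd crush update on start" left at "true"
it may jump back under root=default on the next OSD restart.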
>
> Did I miss something during deployment? Or what can I do?
>
>
> Many thanks,
> Eddie.