[Openstack-operators] [nova-ceph-pools] one nova az with multiple ceph rbd pool

LIU Yulong jjj8593 at gmail.com
Thu Oct 13 11:00:32 UTC 2016


Hi all,

We are now facing a nova operational issue: setting a different ceph rbd
pool for each corresponding nova compute node in one availability zone. For
instance (nova.conf excerpts below):
(1) compute-node-1  in az1 with images_rbd_pool=pool1
(2) compute-node-2  in az1 with images_rbd_pool=pool2
This setup normally works fine.
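
For reference, the libvirt/rbd settings on each compute node look roughly
like this; the pool names are ours, while rbd_user and rbd_secret_uuid are
just placeholders for whatever your environment uses:

    # /etc/nova/nova.conf on compute-node-1
    # (compute-node-2 is identical except images_rbd_pool=pool2)
    [libvirt]
    images_type = rbd
    images_rbd_pool = pool1
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder                        # placeholder
    rbd_secret_uuid = <libvirt secret uuid>  # placeholder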

But a problem comes up when resizing an instance. Say we resize instance-1,
which originally runs on compute-node-1; nova goes through its scheduling
procedure, and suppose nova-scheduler picks compute-node-2 as the
destination. Nova then hits the following error:
http://paste.openstack.org/show/585540/. The exception is raised because, on
compute-node-2, nova cannot find instance-1's disk in pool1. So is there a
way nova can handle this? Cinder does something similar: a cinder volume has
a host attribute like:
host_name@pool_name#ceph.
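
For comparison, this is roughly how cinder maps multiple rbd pools of the
same cluster to separate backends, which is what produces that host string;
the backend section names below are only illustrative:

    # /etc/cinder/cinder.conf (illustrative backend names)
    [DEFAULT]
    enabled_backends = ceph-pool1,ceph-pool2

    [ceph-pool1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-pool1
    rbd_pool = pool1
    rbd_ceph_conf = /etc/ceph/ceph.conf

    [ceph-pool2]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-pool2
    rbd_pool = pool2
    rbd_ceph_conf = /etc/ceph/ceph.conf

    # each backend then shows up in a volume's host attribute
    # in the form <host>@<backend>#<pool>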

The reason we use such a setup is that, when expanding storage capacity, we
want to avoid the impact of ceph rebalancing.

One solution I found is the AggregateInstanceExtraSpecsFilter, which matches
flavor extra specs against host aggregate metadata. We tried creating host
aggregates like (example commands after this list):
az1-pool1 with host compute-node-1 and metadata {ceph_pool: pool1};
az1-pool2 with host compute-node-2 and metadata {ceph_pool: pool2};
and flavors like:
flavor1-pool1 with extra spec {ceph_pool: pool1};
flavor2-pool1 with extra spec {ceph_pool: pool1};
flavor1-pool2 with extra spec {ceph_pool: pool2};
flavor2-pool2 with extra spec {ceph_pool: pool2};
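
Concretely, that setup looks roughly like the following; the aggregate, host
and flavor names are the ones above, and the filter also has to be listed in
scheduler_default_filters on the scheduler node:

    # nova.conf on the scheduler node, [DEFAULT]:
    #   scheduler_default_filters = ...,AggregateInstanceExtraSpecsFilter

    nova aggregate-create az1-pool1 az1
    nova aggregate-add-host az1-pool1 compute-node-1
    nova aggregate-set-metadata az1-pool1 ceph_pool=pool1

    nova aggregate-create az1-pool2 az1
    nova aggregate-add-host az1-pool2 compute-node-2
    nova aggregate-set-metadata az1-pool2 ceph_pool=pool2

    # tie each flavor to a pool via a scoped extra spec
    nova flavor-key flavor1-pool1 set aggregate_instance_extra_specs:ceph_pool=pool1
    nova flavor-key flavor1-pool2 set aggregate_instance_extra_specs:ceph_pool=pool2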

But this introduces a new issue at instance-creation time: which flavor
should be used? The business/application layer seems to need its own
flavor-selection logic.

So, finally, I want to ask whether there is a best practice for using
multiple ceph rbd pools in one availability zone.

Best regards,

LIU Yulong