[Openstack-operators] [openstack-operators][ceph][nova] How do you handle Nova on Ceph?

Warren Wang warren at wangspeed.com
Wed Oct 12 20:02:55 UTC 2016


If fault domain is a concern, you can always split the cloud up into 3
regions, each having a dedicated Ceph cluster. It doesn't necessarily mean
more hardware, just logical splits. This does assume that the regions don't
share the same network fault domain, though.

Alternatively, you can split the Ceph hardware into multiple clusters and
use multi-backend Cinder on the same set of hypervisors to reach all of
them. We're doing that to migrate from one Ceph cluster to another. You can
even mount a volume from each cluster into a single instance.
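
For reference, a minimal cinder.conf sketch of that kind of multi-backend
setup might look like the following (the backend names, ceph.conf paths and
secret UUIDs here are placeholders, not our actual config):

    [DEFAULT]
    # one RBD backend per Ceph cluster
    enabled_backends = ceph-old,ceph-new

    [ceph-old]
    # cluster we are migrating away from
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-old
    rbd_ceph_conf = /etc/ceph/ceph-old.conf
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret UUID for ceph-old>

    [ceph-new]
    # cluster new volumes should land on
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-new
    rbd_ceph_conf = /etc/ceph/ceph-new.conf
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret UUID for ceph-new>

Map a volume type to each backend (cinder type-key <type> set
volume_backend_name=ceph-new) and new volumes of that type get scheduled to
the cluster you want, while existing volumes stay where they are.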

Keep in mind that you don't really want to shrink a Ceph cluster too far in
the other direction, either. What counts as "too big"? Hard to say, but you
should keep growing each cluster so that the fault domains don't get too
small (3 physical racks minimum), or you're guaranteeing that the entire
cluster stops if you lose the network in one of them.
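
To make that concrete, with a CRUSH hierarchy that groups hosts under racks,
a replicated rule like the sketch below (placeholder names, standard
decompiled-crushmap syntax) puts each replica in a different rack, so losing
the network to one rack only costs you one copy of the data instead of the
whole cluster:

    rule replicated_racks {
        ruleset 1
        type replicated
        min_size 2
        max_size 10
        step take default
        # choose N distinct racks, then one OSD within each
        step chooseleaf firstn 0 type rack
        step emit
    }

With size=3 pools and only 3 racks, every PG keeps exactly one copy per
rack, which is why 3 physical racks is a reasonable floor.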

Just my 2 cents,
Warren

On Wed, Oct 12, 2016 at 8:35 AM, Adam Kijak <adam.kijak at corp.ovh.com> wrote:

> > _______________________________________
> > From: Abel Lopez <alopgeek at gmail.com>
> > Sent: Monday, October 10, 2016 9:57 PM
> > To: Adam Kijak
> > Cc: openstack-operators
> > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
> How do you handle Nova on Ceph?
> >
> > Have you thought about dedicated pools for cinder/nova and a separate
> pool for glance, and any other uses you might have?
> > You need to set up secrets on kvm, but you can have cinder creating
> volumes from glance images quickly in different pools.
>
> We already have separate pools for images, volumes and instances.
> Separate pools don't really split the failure domain though.
> Also, AFAIK you can't set up multiple pools for instances in nova.conf,
> right?
>