[Openstack-operators] Reference architecture for medium sized environment

Nick Maslov azpekt at gmail.com
Mon Aug 19 18:06:58 UTC 2013


Hi Mitch,

Yes, that is our approach. 
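
Roughly, the export side is along the lines of the article you link below:
map the image with the kernel RBD client and export the resulting block
device over iSCSI. With tgt, for example, something like this (pool, image
and IQN names here are only placeholders, not our real ones):

    # create and map an RBD image to use as a compute node's boot disk
    rbd create --size 20480 rbd/compute01-boot
    rbd map rbd/compute01-boot        # appears as e.g. /dev/rbd0

    # export the mapped device as an iSCSI LUN
    tgtadm --lld iscsi --op new --mode target --tid 1 \
        --targetname iqn.2013-08.org.example:compute01-boot
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
        --backing-store /dev/rbd0
    tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL

iPXE then sanboots that target, with DHCP handing each node its own target
name.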

Cheers,
NM



On 08/19, Mitch Anderson wrote:
> When you say RBD via iSCSI for compute nodes, are you talking about
> something like this?
> http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices
> 
> That's an interesting thought.
> 
> 
> On Mon, Aug 19, 2013 at 3:50 AM, Nick Maslov <azpekt at gmail.com> wrote:
> 
> > Hi Mitch,
> >
> > We are using RBD via iSCSI for the compute nodes (they boot from it as
> > part of the iPXE/DHCP process), and we are using Ceph as the backend for
> > both images and VMs.
> >
> > Simple and it works.
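> >
> > For reference, the image and volume side is just the stock RBD backends
> > in glance-api.conf and cinder.conf, roughly like this (pool names, users
> > and the secret UUID are placeholders for our real values):
> >
> >     # glance-api.conf -- keep images in Ceph
> >     default_store = rbd
> >     rbd_store_user = glance
> >     rbd_store_pool = images
> >     rbd_store_ceph_conf = /etc/ceph/ceph.conf
> >
> >     # cinder.conf -- VM volumes in Ceph
> >     volume_driver = cinder.volume.drivers.rbd.RBDDriver
> >     rbd_pool = volumes
> >     rbd_user = cinder
> >     rbd_secret_uuid = <libvirt secret uuid>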
> >
> > Cheers,
> > NM
> >
> >
> > On 08/17, Mitch Anderson wrote:
> > > Hi all,
> > >
> > > I've been looking around for example architectures (the types and
> > > numbers of systems) for an HA setup of OpenStack Grizzly; we probably
> > > won't go live until after Havana is released.
> > >
> > > I've done a simple non-HA setup with Mirantis' Fuel, which has worked
> > > out well. Their documented production HA setup is 3 controllers and
> > > N compute nodes...
> > >
> > > If I were to use Ceph for storage, I would need at least 3 nodes. I was
> > > looking to give my compute nodes minimal disk space, so only my
> > > controllers would have storage (for Glance, DBs, etc.) and the Ceph
> > > storage nodes would have the rest. Is this solution preferred? Or do I
> > > run Ceph on the compute nodes? If so, what size should those nodes be?
> > > I'd like to run 40-60 VMs per compute node, of varying sizes and needs.
> > >
> > > Any pointers would be much appreciated!
> > >
> > > -Mitch Anderson
> >


