A couple of things I've been wanting to try: either have Glance use the file backend and export that out to the computes (assuming that if you're going to have one place with lots of storage, your image backend would be it), or use Ceph to export a shared FS for the computes.

On Aug 17, 2013, at 9:55 AM, Mitch Anderson <mitch@metauser.net> wrote:

I would like to think of the compute nodes as 'failure expectant', but the instance store is a huge holdback. I have a limited budget and would like to get the best environment possible out of it, so consolidating storage is a huge priority. Running Glance from the Ceph cluster would definitely be a plus. However, needing shared storage for /var/lib/nova/instances in addition to the Ceph cluster means I need an HA NFS setup on top of the Ceph storage nodes, and I think the only thing I'll be able to get signed off on is one or the other. Which I assume means I need this blueprint to get approved and implemented for Havana: https://blueprints.launchpad.net/nova/+spec/bring-rbd-support-libvirt-images-type ... What shared ephemeral instance stores is everyone using?
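On the shared ephemeral instance store question, one option is the CephFS route mentioned at the top. A rough sketch of what that could look like on a compute node, assuming a CephFS filesystem already exists; the monitor hostname, cephx user, and secret path are made up for illustration:

    # On each compute node: mount CephFS as the shared instance store
    # (hostname, user, and secret file below are placeholders)
    mkdir -p /var/lib/nova/instances
    mount -t ceph ceph-mon1:6789:/ /var/lib/nova/instances \
        -o name=admin,secretfile=/etc/ceph/admin.secret
    chown nova:nova /var/lib/nova/instances

(CephFS was still considered experimental at this point, so treat this strictly as a sketch, not a recommendation.)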
<div class="gmail_extra"><br><br><div class="gmail_quote">On Sat, Aug 17, 2013 at 10:33 AM, Abel Lopez <span dir="ltr"><<a href="mailto:alopgeek@gmail.com" target="_blank">alopgeek@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I believe the general consensus for production systems is to not run ceph on compute nodes. Compute nodes should be solely used as instance resources. Plus, compute nodes should be 'failure expectant', you should be able to just pull one out and replace it with a blank box. Adding storage cluster to the mix just complicates maintenance planning, etc. Plus, rule-of-thumb for ceph is 1GHz per OSD, which can be significant depending on the number of disks you're planning on.<br>
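(To put a number on that rule of thumb: a node carrying, say, twelve OSD disks would want roughly 12 GHz of CPU, i.e. several cores on typical hardware, reserved for Ceph alone; the disk count here is purely illustrative.)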
Since you're starting from scratch, I would recommend having Glance use the Ceph cluster you're planning. You get added benefits by using qcow2 disk images in Ceph, as new instances are launched as COW clones.
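For reference, pointing Glance at RBD comes down to a few lines in glance-api.conf; the cephx user and pool names below are just the conventional examples, not anything required:

    # glance-api.conf - store images in Ceph RBD
    default_store = rbd
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_user = glance    # cephx user; name is illustrative
    rbd_store_pool = images    # RBD pool for images; name is illustrative
    rbd_store_chunk_size = 8   # image chunk size in MB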
As for 'minimal storage' on your compute nodes, I assume you're intending to have a shared '/var/lib/nova/instances/' directory, since each VM will need a disk file there. This has the added benefit of being a prerequisite for VM migration.
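If that shared directory ends up on plain NFS rather than Ceph, the setup is the familiar one; the hostnames and export path here are placeholders:

    # On the host exporting the storage (/etc/exports)
    /srv/nova-instances  compute*(rw,sync,no_root_squash)

    # On each compute node (/etc/fstab)
    storage1:/srv/nova-instances  /var/lib/nova/instances  nfs  defaults  0 0

With that mounted everywhere, live migration (nova live-migration <instance> <target-host>) has the shared storage it expects.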
Hope that helps.
<div><div class="h5"><br>
On Aug 16, 2013, at 11:05 PM, Mitch Anderson <<a href="mailto:mitch@metauser.net">mitch@metauser.net</a>> wrote:<br>
<br>
> aHi all,<br>
>
> I've been looking around for example architectures (what types of systems, and how many of each) for an HA setup of OpenStack Grizzly (it probably won't go live until after Havana is released).
>
> I've done a simple non-HA setup with Mirantis' Fuel, which has worked out well, and their documented production HA setup is three controllers and N compute nodes...
>
> If I were to use Ceph for storage, I would need a minimum of at least three nodes. I was looking to give my compute nodes minimal disk space, so only my controllers would have storage (for Glance, databases, etc.) and the Ceph storage nodes would hold the rest. Is this the preferred solution, or do I run Ceph on the compute nodes? If so, what size should those nodes be? I'd like to run 40-60 VMs per compute node, of varying sizes and needs.
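As a rough way to size that, the scheduler overcommit ratios in nova.conf are the main levers; the per-instance sizes in the comments below are purely illustrative:

    # nova.conf - scheduler overcommit (values shown are the Grizzly defaults)
    cpu_allocation_ratio = 16.0   # vCPUs scheduled per physical core
    ram_allocation_ratio = 1.5    # virtual-to-physical RAM ratio
    # e.g. 60 instances x 2 vCPUs / 16  is about 8 physical cores,
    #      60 instances x 2 GB   / 1.5 is about 80 GB physical RAM per node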
>
> Any pointers would be much appreciated!
>
> -Mitch Anderson