<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Hi Dmitry<br>
<br>
After reading about Ceph RBD, my impression is extremely good, even
better than CephFS for ephemeral storage. Are you using qcow2 or raw
images? I prefer qcow2, but in that case we cannot enable the write
cache in the cluster, which reduces performance a bit. I should test
Ceph RBD performance with both formats (qcow2 and raw) before
migrating to production.<br>
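<br>
As a starting point, this is roughly what I plan to test first (just a
sketch assuming a Nova/libvirt compute node; the exact option names and
config sections can vary between releases):<br>
<pre>
# /etc/ceph/ceph.conf on the compute nodes -- client-side RBD write cache
[client]
rbd cache = true
rbd cache writethrough until flush = true

# /etc/nova/nova.conf -- let libvirt use writeback caching for network (RBD) disks
[libvirt]
disk_cachemodes = "network=writeback"
</pre>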
<br>
Thanks for sharing your experience.<br>
Miguel.<br>
<br>
<br>
On 20/06/15 22:49, Dmitry Borodaenko wrote:<br>
</div>
<blockquote
cite="mid:CAM0pNLMuQ-=HBeNYyG7wF2JHrEUpgxdig=coctW9yaTPti0Fkg@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div>
<div>With Ceph, you'll want to use RBD instead of CephFS. We've
had OpenStack live migration working with Ceph RBD for about a
year and a half now; here's a PDF slide deck with some
details:<br>
<a moz-do-not-send="true"
href="https://drive.google.com/open?id=0BxYswyvIiAEZUEp4aWJPYVNjeU0">https://drive.google.com/open?id=0BxYswyvIiAEZUEp4aWJPYVNjeU0</a><br>
<br>
</div>
If you take CephFS and the bottlenecks associated with POSIX
metadata (which you don't need in order to manage your boot
volumes, since they are just block devices) out of the way, the
need to partition your storage cluster disappears: a single Ceph
cluster can serve all 40 nodes.<br>
<br>
</div>
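For reference, the compute-side configuration for RBD-backed ephemeral
disks looks roughly like this (a sketch rather than an excerpt from the
slides; the pool and user names are placeholders):<br>
<pre>
# /etc/nova/nova.conf on each compute node
[libvirt]
images_type = rbd
images_rbd_pool = vms                      # placeholder pool name
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova                            # placeholder cephx user
rbd_secret_uuid = &lt;uuid of the libvirt secret for that cephx key&gt;
</pre>
<br>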
It may be tempting to combine compute and storage on the same
nodes, but there's a gotcha associated with that. Ceph OSD
processes can be fairly CPU-heavy under high IOPS loads or when
rebalancing data after a disk dies or a node goes offline, so
you'd have to figure out a way to isolate their CPU usage from
that of your workloads. This is why, for example, Fuel allows
you to combine the ceph-osd and compute roles on the same node,
but the Fuel documentation discourages you from doing so.<br>
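<br>
If you do combine them anyway, one rough approach (a sketch, assuming
ceph-osd runs under systemd; the core numbers are only illustrative) is
to split the cores between the OSDs and the guests:<br>
<pre>
# /etc/nova/nova.conf -- keep guest vCPUs off the cores reserved for the OSDs
[DEFAULT]
vcpu_pin_set = 4-15              # example: guests only on cores 4..15

# /etc/systemd/system/ceph-osd@.service.d/cpu.conf -- pin the OSDs to the rest
[Service]
CPUAffinity=0 1 2 3
</pre>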
</div>
<br>
<div class="gmail_quote">
<div dir="ltr">On Wed, Jun 17, 2015 at 2:11 AM Miguel A Diaz
Corchero <<a moz-do-not-send="true"
href="mailto:miguelangel.diaz@externos.ciemat.es">miguelangel.diaz@externos.ciemat.es</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Hi friends.<br>
<br>
I'm evaluating different distributed file systems to grow our
infrastructure from 10 nodes to approximately 40 nodes. One of
the bottlenecks is the shared storage installed to enable
live migration.<br>
The selected candidates are NFS, Ceph and Lustre (the last of
which is already installed for HPC purposes).<br>
<br>
Sketching a brief plan, and leaving network connectivity
aside:<br>
<br>
<b>a)</b> with NFS and Ceph, I think it is possible, but only by
dividing the whole infrastructure (40 nodes) into smaller
clusters, for instance 10 nodes with one storage server each.
Obviously, live migration is then only possible between nodes
in the same cluster (or zone).<br>
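<br>
For case a), the zones could be expressed in Nova roughly like this
(aggregate and host names are just placeholders):<br>
<pre>
# one availability zone per 10-node cluster, so scheduling and
# live migration stay inside that cluster
nova aggregate-create cluster1 cluster1-az
nova aggregate-add-host cluster1 compute01
nova aggregate-add-host cluster1 compute02
# ... and so on for the remaining hosts and clusters
</pre>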
<br>
<b>b)</b> with Lustre, my idea is to connect all the nodes
(40 nodes) to the same Lustre filesystem (MDS) and exploit all
the concurrency advantages of that storage. In this case, live
migration would be possible among all the nodes.<br>
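<br>
For case b), the rough setup I have in mind (paths and names are only
illustrative):<br>
<pre>
# every compute node mounts the same Lustre filesystem as the instance store
mount -t lustre mgs-node@tcp0:/nova /var/lib/nova/instances

# /etc/nova/nova.conf on every compute node
[DEFAULT]
instances_path = /var/lib/nova/instances

# with a truly shared instances_path, live migration can target any node:
nova live-migration &lt;instance-uuid&gt; &lt;target-host&gt;
</pre>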
<br>
I would like to ask you for any ideas, comments or experiences.
I think the less tested case is b), but has anyone tried to use
Lustre in a similar scenario? Any comments on either case a) or
b) are appreciated.<br>
<br>
Thanks<br>
Miguel.<br>
<br>
<br>
<div>-- <br>
<i><font size="2"><span style="color:#000000"><span
style="font-family:Century Gothic,sans-serif,10">Miguel
Angel Díaz Corchero</span></span></font></i><font
size="2"><br>
<i><b><span style="font-family:Century
Gothic,sans-serif">System Administrator /
Researcher</span></b></i><br>
<i><span style="font-family:Century Gothic,sans-serif">c/
Sola nº 1; 10200 TRUJILLO, SPAIN</span></i><br>
<i><span style="font-family:Century Gothic,sans-serif">Tel:
+34 927 65 93 17 Fax: +34 927 32 32 37</span></i>
</font></div>
</div>
_______________________________________________<br>
OpenStack-operators mailing list<br>
<a moz-do-not-send="true"
href="mailto:OpenStack-operators@lists.openstack.org"
target="_blank">OpenStack-operators@lists.openstack.org</a><br>
<a moz-do-not-send="true"
href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators"
rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
</blockquote>
</div>
</blockquote>
<br>
</body>
</html>