[Openstack-operators] [Scale][Performance] <nodes_with_something> / compute_nodes ratio experience

Dina Belova dbelova at mirantis.com
Thu Nov 19 09:27:23 UTC 2015


Sure, I did not mean that we should collect this information without
understanding the workloads running on the cloud, but it is still very
interesting information to gather.


On Thu, Nov 19, 2015 at 1:52 AM, Dan Smith <dms at danplanet.com> wrote:

> > As providing the OpenStack community with understandable recommendations
> > and instructions on performant OpenStack cloud deployments is part
> > of the Performance Team's mission, I'm kindly asking you to share your
> > experience on safe deployment ratios between the various types of
> > nodes you're running right now and any issues you observed
> > (for example: the GoDaddy cloud discussed earlier runs 3 conductor boxes
> > against 250 computes in a cell, and one opinion was that this is
> > simply not enough, full stop).
> That was my opinion, and it was based on an apparently incorrect
> assumption that they had a lot of things coming and going on their
> cloud. I think they've demonstrated at this point that (other issues
> aside) three is enough for them, given their environment, workload, and
> configuration.
> The problem with coming up with any sort of metric that will apply to
> everyone is that it's highly variable. If you have 250 compute nodes and
> never create or destroy any instances, you'll be able to get away with
> *many* fewer conductors than if you have a very active cloud. Similarly,
> during a live upgrade (or following any upgrade where we do some online
> migration of data), your conductor load will be higher than normal. Of
> course, 4-core and 96-core conductor nodes aren't equal either.
> So, by all means, we should gather information on what people are doing
> successfully, but keep in mind that it depends *a lot* on what sort of
> workloads the cloud is supporting.
> --Dan
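
Dan's point above can be made concrete with a back-of-envelope sizing sketch. Every number below (RPC calls per operation, calls a worker can handle per second, workers per conductor box, headroom factor) is a hypothetical assumption chosen purely for illustration, not a measured or recommended value; the only thing the sketch demonstrates is that instance churn, not node count, dominates the estimate:

```python
import math

# Hypothetical back-of-envelope estimate of conductor boxes needed.
# All default parameter values are illustrative assumptions.
def conductors_needed(compute_nodes, ops_per_node_per_hour,
                      rpc_calls_per_op=20, calls_per_worker_per_sec=5,
                      workers_per_conductor=8, headroom=2.0):
    """Estimate conductor boxes from instance churn, not from node count."""
    # Total conductor-bound RPC calls per second generated by the cloud.
    calls_per_sec = (compute_nodes * ops_per_node_per_hour
                     * rpc_calls_per_op) / 3600.0
    # Worker processes required, with headroom for bursts and upgrades.
    workers = calls_per_sec * headroom / calls_per_worker_per_sec
    return max(1, math.ceil(workers / workers_per_conductor))

# Same 250 compute nodes, very different answers depending on churn:
quiet = conductors_needed(250, ops_per_node_per_hour=0.1)  # nearly idle cloud
busy = conductors_needed(250, ops_per_node_per_hour=40)    # very active cloud
```

Under these made-up assumptions the quiet cloud needs a single conductor box while the busy one needs three, i.e. the same node count spans the whole range discussed in this thread, which is why no single ratio applies to everyone.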


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
