[Openstack-operators] Lets talk capacity monitoring

matt matt at nycresistor.com
Thu Jan 15 17:51:37 UTC 2015


I know we've been working on that on our commercial product side at Big
Switch with an analyzer... The issue I think you are going to run into is
getting insight into upstream network info from your top-of-rack and spine
switches.

Setting up uniform access to OVS stats in the API, or in an external API
(probably preferable), is not a bad idea.

The way I see it, you might want to consider an external (not a full
project in OpenStack) API for aggregating OVS stats (use the message bus
to collect that data). Whether you want to make use of Ceilometer or
Monasca is really up to you; I'd recommend only using Ceilometer if you
are using zones. Making a display panel pluggable into Horizon is then
fairly straightforward with d3.js.
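
For instance, the per-node collection side could look like the sketch
below. This is only an illustration, assuming ovs-vsctl is available on
the node; the actual publish onto the message bus is left as a comment
since the transport choice is open.

    # Sketch: poll local OVS interface statistics for later aggregation.
    import json
    import socket
    import subprocess

    def collect_ovs_stats():
        """Return {interface_name: statistics_dict} from the local OVS."""
        out = subprocess.check_output(
            ['ovs-vsctl', '--format=json', 'list', 'Interface'])
        data = json.loads(out)
        stats = {}
        for row in data['data']:
            rec = dict(zip(data['headings'], row))
            # In JSON output the statistics column is ["map", [[k, v], ...]]
            stats[rec['name']] = dict(rec['statistics'][1])
        return stats

    if __name__ == '__main__':
        payload = {'host': socket.gethostname(),
                   'stats': collect_ovs_stats()}
        # Publish 'payload' on the message bus here (e.g. oslo.messaging);
        # printed for illustration only.
        print(json.dumps(payload))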

Offhand I don't know of a specific project targeting this, but it could
shoehorn into projects like Ceilometer or Monasca. Also, the future
Tap-as-a-Service work that's starting to occur in Juno now may be super
helpful as well.

This looks like it WILL exist... How it exists and what it ties into is
probably waiting on a bit more definition and execution commitment from
other projects, unless you happen to be using zones and want to augment
Ceilometer.

-Matt

On Thu, Jan 15, 2015 at 9:25 AM, Mathieu Gagné <mgagne at iweb.com> wrote:

> On 2015-01-15 11:43 AM, Jesse Keating wrote:
>
>> We have a need to better manage the various openstack capacities across
>> our numerous clouds. We want to be able to detect when capacity of one
>> system or another is approaching the point where it would be a good idea
>> to arrange to increase that capacity. Be it volume space, VCPU
>> capacity, object storage space, etc...
>>
>> What systems are you folks using to monitor and react to such things?
>>
>>
> Thanks for bringing up the subject Jesse.
>
> I believe you are not the only one facing this challenge; I am too.
>
> I added the subject (Capacity planning/monitoring) to the etherpad for
> the midcycle ops meetup, which I hope to be able to attend:
> https://etherpad.openstack.org/p/PHL-ops-meetup
>
>
> We are using host aggregates and have a complex combination of them
> (imagine a Venn diagram).
>
> What we do is retrieve all:
> - hypervisor stats
> - host aggregates
>
> From there, we compute resource usage (vcpus, ram, disk) in any given host
> aggregate.
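>
> Roughly, that first step looks like the sketch below (a minimal
> illustration using python-novaclient; the credentials are placeholders
> and the real tool does much more):
>
>     from novaclient import client
>
>     USER, PASSWORD, PROJECT = 'user', 'secret', 'project'  # placeholders
>     AUTH_URL = 'http://keystone:5000/v2.0'                 # placeholder
>     nova = client.Client('2', USER, PASSWORD, PROJECT, AUTH_URL)
>
>     hypervisors = nova.hypervisors.list()
>     aggregates = nova.aggregates.list()
>
>     # Sum raw usage per aggregate; assumes the hypervisors' service
>     # hosts match the host names in each aggregate's host list.
>     for agg in aggregates:
>         members = [h for h in hypervisors
>                    if h.service['host'] in agg.hosts]
>         print('%s: %d/%d vcpus, %d/%d MB ram' % (
>             agg.name,
>             sum(h.vcpus_used for h in members),
>             sum(h.vcpus for h in members),
>             sum(h.memory_mb_used for h in members),
>             sum(h.memory_mb for h in members)))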
>
> This part is very challenging, as we have to partially reimplement
> nova-scheduler logic to determine whether a given hypervisor has
> different resource allocation ratios based on host aggregate attributes.
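>
> For example, the effective vCPU capacity has to be scaled by the
> cpu_allocation_ratio found in the aggregate metadata (the key read by
> nova's AggregateCoreFilter). A simplified sketch, reusing the
> novaclient objects from above:
>
>     def effective_vcpus(hyp, aggregates, default_ratio=16.0):
>         # 16.0 is nova's historical default cpu_allocation_ratio.
>         # When a host sits in several aggregates with different
>         # ratios, nova takes the minimum (if I recall correctly);
>         # this sketch simplifies that to last-one-wins.
>         ratio = default_ratio
>         for agg in aggregates:
>             if hyp.service['host'] in agg.hosts:
>                 ratio = float(
>                     agg.metadata.get('cpu_allocation_ratio', ratio))
>         return int(hyp.vcpus * ratio)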
>
> The result is a table with resource usage percentages (and absolute
> numbers) for each host aggregate (and combination).
>
> Unfortunately, I can't share this first tool yet, as my coworker
> integrated it very tightly into our internal monitoring tool and it
> wouldn't work outside it. No promises, but I'll try to find time to
> extract it and share it with you guys.
>
>
> We also coded a very primitive tool which takes a flavor name and
> computes available "slots" on each hypervisor (regardless of host
> aggregate membership):
>
> https://gist.github.com/mgagne/bc54c3434a119246a88d
>
> This tool is not actively used in our monitoring due to the mentioned
> limitation: we would again have to partially reimplement nova-scheduler
> logic to determine whether a given flavor can (or cannot) be spawned on
> a given hypervisor, and filter that hypervisor out of the output if it
> can't accept the flavor. Furthermore, it does not take into account
> resource allocation ratios based on host aggregates.
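>
> The core of it boils down to something like this (a simplified sketch
> of the idea, not the gist itself, given novaclient flavor and
> hypervisor objects):
>
>     def slots_for(flavor, hyp):
>         # How many more instances of this flavor fit on one
>         # hypervisor, ignoring ratios and filters as noted above.
>         free_vcpus = hyp.vcpus - hyp.vcpus_used
>         free_ram = hyp.memory_mb - hyp.memory_mb_used
>         free_disk = hyp.local_gb - hyp.local_gb_used
>         limits = [free_vcpus // flavor.vcpus, free_ram // flavor.ram]
>         if flavor.disk:
>             limits.append(free_disk // flavor.disk)
>         return max(0, min(limits))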
>
> Hopefully, other people will join in and share their tools so we can all
> improve our OpenStack operations experience.
>
> --
> Mathieu
>
>