[openstack-dev] [Ceilometer] storage driver testing

Boris Pavlovic boris at pavlovic.me
Fri Nov 29 17:53:11 UTC 2013


Sandy,

Seems like we should think about how we can combine our approaches.
Rally generates load through the python clients (e.g. the ceilometer python
client) with different numbers of users/tenants/active_users/... so it
addresses point #2.
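
For reference, here is a minimal sketch of what such a Rally load task looks
like, written as the Python dict that the usual JSON task file maps to (the
scenario name and all of the numbers are purely illustrative):

    # Illustrative Rally task definition.  The scenario name is hypothetical;
    # the interesting parts are the runner and context sections, which hold
    # the users/tenants/concurrency knobs mentioned above.
    task = {
        "CeilometerMeters.list_meters": [{
            "runner": {
                "type": "constant",    # fire iterations at a fixed concurrency
                "times": 1000,         # total number of iterations
                "concurrency": 20,     # simultaneous "active users"
            },
            "context": {
                "users": {
                    "tenants": 5,            # temporary tenants created for the run
                    "users_per_tenant": 3,   # keystone users per tenant
                },
            },
        }],
    }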

About the profiling part.
We actually attempted to build a profiling system based on tomograph + zipkin,
but what we ended up with was complex and unstable. So we took a look at
ceilometer, and it seems like the perfect place for storing profiling data. We
are almost done with this part; the only thing we still need is the
visualization system, which could be ported from zipkin.
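
To make that concrete, here is a rough sketch of how a single trace point
could be pushed towards ceilometer as an ordinary notification (using
oslo.messaging; the event type and payload fields are invented for
illustration, not what we actually emit):

    # Rough sketch: emit a profiling trace point as a notification so the
    # ceilometer collector can store it like any other event.  The event
    # type and the payload fields are made up for illustration.
    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)
    notifier = messaging.Notifier(transport,
                                  publisher_id='profiler.example-host',
                                  driver='messaging',
                                  topic='notifications')

    notifier.info({},                         # request context (empty here)
                  'profiler.trace_point',     # hypothetical event type
                  {'trace_id': 'abc123',
                   'parent_id': None,
                   'name': 'db.api.instance_get',
                   'started_at': '2013-11-29T12:00:00Z',
                   'duration_ms': 42})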


So it would be nice if you could join our efforts and help with testing
ceilometer & building an OpenStack profiling system.


Best regards,
Boris Pavlovic



On Fri, Nov 29, 2013 at 9:05 PM, Sandy Walsh <sandy.walsh at rackspace.com> wrote:

>
>
> On 11/29/2013 11:32 AM, Nadya Privalova wrote:
> > Hello Sandy,
> >
> > I'm very interested in performance results for Ceilometer. We have
> > successfully installed Ceilometer in an HA lab with 200 computes and 3
> > controllers, and it currently works pretty well with MySQL. Our next steps are:
> >
> > 1. Configure alarms
> > 2. Try to use Rally for OpenStack performance testing with MySQL and MongoDB
> > (https://wiki.openstack.org/wiki/Rally)
> >
> > We are open to any suggestions.
>
> Awesome. As a group we really need to start an effort similar to the
> storage driver tests for ceilometer in general.
>
> I assume you're just pulling Samples via the agent? We're really just
> focused on event storage and retrieval.
>
> There seem to be three levels of load testing required:
> 1. testing through the collectors (either sample or event collection)
> 2. testing load on the CM api
> 3. testing the storage drivers.
>
> Sounds like you're addressing #1, we're addressing #3 and Tempest
> integration tests will be handling #2.
>
> I should also add that we've instrumented the db and ceilometer hosts
> with Diamond, feeding statsd/graphite, to track load on the hosts while
> the tests are underway. This will help with determining how many
> collectors we need, where the bottlenecks are coming from, etc.
>
> It might be nice to standardize on that so we can compare results?
>
> -S
>
> >
> > Thanks,
> > Nadya
> >
> >
> >
> > On Wed, Nov 27, 2013 at 9:42 PM, Sandy Walsh <sandy.walsh at rackspace.com
> > <mailto:sandy.walsh at rackspace.com>> wrote:
> >
> >     Hey!
> >
> >     We've ballparked that we need to store a million events per day. To
> >     that end, we're flip-flopping between SQL and NoSQL solutions, and
> >     hybrid solutions that include elastic search and other schemes. It
> >     seems every road we go down has some limitations. So, we've started
> >     working on a test suite for load testing the ceilometer storage
> >     drivers. The intent is to have a common place to record our findings
> >     and compare with the efforts of others.
> >
> >     There's an etherpad where we're tracking our results [1] and a test
> >     suite that we're building out [2]. The test suite works against a
> >     fork of ceilometer where we can keep our experimental storage driver
> >     tweaks [3].
> >
> >     The test suite hits the storage drivers directly, bypassing the api,
> >     but still uses the ceilometer models. We've added support for
> >     dumping the results to statsd/graphite for charting of performance
> >     results in real-time.
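> >
> >     To give a flavour of what that looks like, here is a rough sketch of
> >     the kind of direct-driver call the suite times (the model and driver
> >     method names here are approximate, and the statsd bucket name is just
> >     an example):
> >
> >         # Build one event via the ceilometer models and time how long the
> >         # storage driver takes to record it, pushing the timing to statsd.
> >         import datetime
> >         import uuid
> >
> >         import statsd                  # pystatsd client
> >         from oslo.config import cfg
> >
> >         from ceilometer import storage
> >         from ceilometer.storage import models
> >
> >         # Storage driver under test, resolved from the usual config.
> >         conn = storage.get_connection(cfg.CONF)
> >         stats = statsd.StatsClient('graphite-host', 8125)
> >
> >         event = models.Event(
> >             message_id=str(uuid.uuid4()),
> >             event_type='compute.instance.create.end',
> >             generated=datetime.datetime.utcnow(),
> >             traits=[models.Trait('instance_id', models.Trait.TEXT_TYPE,
> >                                  'abc-123')])
> >
> >         with stats.timer('loadtest.record_events'):
> >             conn.record_events([event])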
> >
> >     If you're interested in large scale deployments of ceilometer, we
> >     would welcome any assistance.
> >
> >     Thanks!
> >     -Sandy
> >
> >     [1]
> https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
> >     [2] https://github.com/rackerlabs/ceilometer-load-tests
> >     [3] https://github.com/rackerlabs/instrumented-ceilometer
> >
>