[openstack-dev] [ceilometer] resource_metadata and metaquery
Julien Danjou
julien at danjou.info
Thu Jan 24 12:57:44 UTC 2013
On Thu, Jan 24 2013, Sandy Walsh wrote:
> I'll throw together a proposal. I think it can be done with a set of two
> extensions:
>
> 1. the parser part of the event consumer (please excuse my terminology
> abuse. "agent" perhaps?)
> 2. the database portion (which would need to deal with migration, CRUD
> and advanced query). The hard part.
>
> We would have to agree on a common format for passing these data
> structures around. Probably just some base attributes + "extra" bits. It
> would likely look like the metadata/dimension structure, but
> under-the-hood could be handled efficiently. This structure would also
> have a tag that would identify the "handler" needed to deal with it. A
> datatype name, if you will.
>
> UI, API, aggregation, etc would all work with these generic data
> structures.
>
> Honestly I don't think there would be a whole lot of them. Likely, just
> one datatype per system (cinder, nova, quantum, etc).
>
> The aggregation system (aka multi-publisher) could listen for data types
> it's interested in for roll-ups.
>
> The potential downside is that we could end up with one "monster
> datatype" which is a most-common-denominator of all the important
> attributes across all systems (cinder, nova, quantum, etc). I think
> we're going to end up with one of these anyway once we get into the
> multi-publisher/aggregation layers. eg: "Instance" or "Tenant"
>
> I think I should do up a little video showing the type of db data
> structures we've found useful in StackTach. They're small, but
> non-trivial. It should really illustrate what multi-publisher is going
> to need.
Reading this (even if I admit I don't fully follow everything you mean)
really makes me think you have a clear understanding of what you want for
your use case, and that what you'll need is to build a custom collector
for your own usage, using multi-publisher to achieve that with good
integration with Ceilometer.
But if the current multi-publisher proposal and implementation doesn't
fulfill your needs, we'd be glad to hear what we've missed.
>> But I think that what you may want is to implement a dynamic SQL engine
>> backend, creating and indexing the columns you want to query on. That's a
>> solution, but we're trying to be generic with the default sqlalchemy
>> backend.
>
> Wouldn't the end effect be the same (without the large impact of an
> index creation hit on first request)? How would we police the growth of
> db indices?
I don't see how that problem relates to Ceilometer itself. That
sounds like it depends on the amount of data you store anyway?
> Yep, I agree EAV is bad, that's why I'm proposing a largely denormalized
> table for the raw/underlying data types. Something easily queried on,
> but extensible.
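The denormalized-but-extensible layout described above might look roughly like the following sketch (column names are made up here; the stdlib sqlite3 module stands in for whatever backend would actually be used):

```python
# Sketch of a largely denormalized samples table: the common attributes
# are real, indexable columns, while a long tail of attributes lives in
# an opaque "extra" column. This contrasts with an EAV layout, where
# every attribute would be a (key, value) row. Schema is illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE samples (
    resource_id TEXT, tenant_id TEXT, counter_name TEXT,
    volume REAL, extra TEXT)""")
# Common query paths get an ordinary composite index:
db.execute("CREATE INDEX idx_tenant ON samples (tenant_id, counter_name)")
db.execute("INSERT INTO samples VALUES ('i-1', 't1', 'cpu', 10.0, '{}')")

# Queries on the base columns stay simple, one row per sample:
rows = db.execute(
    "SELECT counter_name, volume FROM samples WHERE tenant_id = 't1'"
).fetchall()
print(rows)  # [('cpu', 10.0)]
```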
We only have one data type as it is in Ceilometer (meters).
> Thanks. We (RAX) are likely to be using mongodb as our backend storage
> system as well. Perhaps there's merit in having a discussion about
> sticking with one or the other (sql vs no-sql)?
What we're talking about here is already implemented in MongoDB as far
as I know, using classic map/reduce. So things already work, but may
not be optimal for some of your use cases or query types.
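To make the map/reduce point concrete, here is a toy, in-memory analogue of the kind of aggregation involved (MongoDB's mapReduce runs this server-side over collections; this only illustrates the shape of the computation, with made-up sample data):

```python
# Toy map/reduce over meter samples: the map step emits (key, value)
# pairs, the reduce step folds values per key. Sample data is invented.
from collections import defaultdict

samples = [
    {"counter_name": "cpu", "tenant": "t1", "volume": 10},
    {"counter_name": "cpu", "tenant": "t1", "volume": 5},
    {"counter_name": "disk", "tenant": "t1", "volume": 3},
]

# map: emit one (counter_name, volume) pair per sample
emitted = [(s["counter_name"], s["volume"]) for s in samples]

# reduce: sum the volumes per counter name
totals = defaultdict(int)
for key, value in emitted:
    totals[key] += value

print(dict(totals))  # {'cpu': 15, 'disk': 3}
```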
> Having one datatype per collection would certainly make things easier on
> #2 mentioned above (especially around the migration side).
>
> Thinking out loud: If we push the storage into the data type driver we
> could likely have different storage systems per data type? (not sure if
> that's a good thing or not)
Yes, but again, we don't have different data types; we only have one.
I understand that what you propose is to split counters into different
collections or tables based on something like the counter's name.
Doing that is likely to work and be a good idea, until you want to
retrieve something like, e.g., the list of counter names available for a
tenant: then you either won't be able to do it, will do it in
(number_of_collections + 1) queries, which looks terribly inefficient,
or will build another table storing this data twice (which can lead
down a possibly dangerous path to inconsistency).
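The query fan-out mentioned above can be illustrated with a small sketch (plain dicts standing in for collections; data and names are invented):

```python
# Illustrates the cost of one-collection-per-counter: listing the
# counter names seen for a tenant means querying every collection
# (plus one query to enumerate them), whereas a single merged
# collection answers it in one pass.
collections = {
    "cpu":  [{"tenant": "t1"}, {"tenant": "t2"}],
    "disk": [{"tenant": "t2"}],
    "net":  [{"tenant": "t1"}],
}

# Split layout: one scan per collection -> N + 1 queries in a real store.
names_split = sorted(name for name, docs in collections.items()
                     if any(d["tenant"] == "t1" for d in docs))

# Single-collection layout: one scan (a distinct() in a real store).
merged = [dict(d, counter_name=name)
          for name, docs in collections.items() for d in docs]
names_single = sorted({d["counter_name"] for d in merged
                       if d["tenant"] == "t1"})

print(names_split, names_single)  # ['cpu', 'net'] ['cpu', 'net']
```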
I guess anything you do is going to be a trade-off between our generic
approach and your optimized view of your use case.
--
Julien Danjou
// Free Software hacker & freelance
// http://julien.danjou.info