[openstack-dev] [oslo][versionedobjects][ceilometer] explain the benefits of ceilometer+versionedobjects
Dan Smith
dms at danplanet.com
Thu Sep 3 20:02:02 UTC 2015
>>> we store everything as primitives: floats, times, integers, etc., since
>>> we need to query on attributes. it seems like versionedobjects might not
>>> be useful to our db configuration currently.
>> I don't think the former determines the latter -- we have lots of things
>> stored as rows of column primitives and query them out as objects, but
>> then you're not storing the object and version (unless you do it
>> separately). So, if it doesn't buy you anything, then there's no reason
>> to use it.
> sorry, i misunderstood this. i thought you were saying ovo may not fit
> into Ceilometer.
Nope, what I meant was: there's no reason to use the technique of
storing serialized objects as blobs in the database if you don't want to
store things like that.
> i guess to give it more of a real context for us to understand,
> regarding the database layer, if we have an events model which consists of:
>
> - id: uuid
> - event_type: string
> - generated: timestamp
> - raw: dictionary value (not meant for querying, just for auditing
> purposes)
> - traits: [list of tuples (key, value, type)]
>
> given this model, each of our backend drivers takes this data and,
> using its connection to the db, stores it accordingly:
> - in mongodb, the attributes are all stored in documents similar to
> json, raw attr is stored as json
Right, so you could store the serialized version of the object in mongo
like this very easily. When you go to pull data out of the database
later, you have a strict format, and a version tied to it so that you
know exactly how it was stored. If you have storage drivers that handle
taking the generic thing and turning it into something appropriate for a
given store, then that driver layer is likely the best place to be
tolerant of old data.
In Nova, we treat the object schema as the interface the rest of the
code uses and expects. Tolerance of the actual persistence schema moving
underneath and over time is hidden in this layer so that things above
don't have to know about it.
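To make this concrete, here's roughly what the event model quoted above
could look like as an o.vo definition. The field types here are my
guesses at a mapping (e.g. DictOfStringsField for the raw payload), not
a prescription:

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields


    @base.VersionedObjectRegistry.register
    class Trait(base.VersionedObject):
        VERSION = '1.0'
        fields = {
            'key': fields.StringField(),
            'value': fields.StringField(),
            'type': fields.IntegerField(),
        }


    @base.VersionedObjectRegistry.register
    class Event(base.VersionedObject):
        # Bump this (with a compat rule) whenever the fields change.
        VERSION = '1.0'
        fields = {
            'id': fields.UUIDField(),
            'event_type': fields.StringField(),
            'generated': fields.DateTimeField(),
            # Audit-only payload; a richer dict type may fit better.
            'raw': fields.DictOfStringsField(),
            'traits': fields.ListOfObjectsField('Trait'),
        }

That class is the interface the rest of the code programs against, no
matter which store the data eventually lands in.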
> - in sql, the data is mapped to an Event table, traits are mapped to
> different traits tables depending on type, raw attribute stored as a
> string.
Yep, so when storing in a SQL database, you'd (presumably) not store the
serialized blobs, but rather pick the object apart to store it as a row
(like most of the things in nova are stored).
> considering everything is stored differently depending on db, how does
> ovo work? is it normalising it into a specific format pre-storage? how
> does different data with different schemas co-exist on the same db?
This is completely up to your implementation. You could end up with a
top-level object like Event that doesn't implement .save(), and then
subclasses like SQLEvent and MongoEvent that do. All the structure could
be defined at the top, but the implementations of how to store/retrieve
them are separate.
The mongo one might be very simple because it can just use the object
infrastructure to get the serialized blob and store it. The SQL one
would turn the object's fields into an INSERT statement (or a SQLAlchemy
thing).
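In code, that split might look something like this (just a sketch; the
connection handles and the SQL are hypothetical):

    import json


    class MongoEvent(Event):
        def save(self, collection):
            # obj_to_primitive() yields a dict carrying the object's
            # name, version and data, so mongo can store it verbatim.
            collection.insert_one(self.obj_to_primitive())


    class SQLEvent(Event):
        def save(self, conn):
            # Pick the typed fields apart into a row instead of
            # storing the serialized blob.
            conn.execute(
                'INSERT INTO events (id, event_type, generated, raw) '
                'VALUES (%s, %s, %s, %s)',
                (self.id, self.event_type, self.generated,
                 json.dumps(self.raw)))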
> - is there some version tag applied to each item and a version schema
> table created somewhere?
The object defines the schema as a list of tightly typed fields, a bunch
of methods, and a version. In this purely DB-specific case, all it does
is provide you a facade with which to hide things like storing to a
different version or format of schema. For projects that send things
over RPC and then dump them in the database, it's super convenient that
this is all one thing.
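For illustration, the serialized form carries the name and version
alongside the data, which is why the same envelope works for RPC and
for a document store. Something roughly like:

    >>> event = Event(event_type='compute.instance.create')
    >>> event.obj_to_primitive()
    {'versioned_object.name': 'Event',
     'versioned_object.namespace': 'versionedobjects',
     'versioned_object.version': '1.0',
     'versioned_object.changes': ['event_type'],
     'versioned_object.data': {'event_type': 'compute.instance.create'}}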
> - do we need to migrate the db to handle a different set of
> attributes, and what happens for nosql dbs?
No, Nova made no schema changes as a result of moving to objects.
> also, from api querying pov, if i want to query a db, how do you
> query/filter across different versions?
> - does ovo tell the api what versions exists in db and then you can
> filter across any attribute from any schema version?
Nope, o.vo doesn't do any of this for you magically. It merely sets up a
place for you to do that work. In nova, we use them for RPC and DB
storage, which means if we have an old node that receives a new object
over RPC (or the opposite) we have rules that define how we handle that.
Thus, we can apply the same rules to reading the DB, where some objects
might be older or newer.
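As a sketch of what such a rule looks like, suppose a hypothetical
Event 1.1 added a project_id field; the object can then down-level
itself for a 1.0 consumer:

    from oslo_utils import versionutils


    @base.VersionedObjectRegistry.register
    class Event(base.VersionedObject):
        # Version 1.0: initial version
        # Version 1.1: added project_id
        VERSION = '1.1'
        # fields as above, plus 'project_id': fields.StringField()

        def obj_make_compatible(self, primitive, target_version):
            super(Event, self).obj_make_compatible(
                primitive, target_version)
            target = versionutils.convert_version_to_tuple(
                target_version)
            if target < (1, 1):
                # A 1.0 consumer doesn't know about project_id, so
                # drop it from the primitive before handing it over.
                primitive.pop('project_id', None)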
> apologies for not understanding how it all works or if the above has
> nothing to do with ovo (i wasn't joking about the 'explain it to me like
> i'm 5' request :-) ) ... i think part of the wariness is that the code
> seemingly does nothing now (or the logic is extremely hidden) but if we
> merge these x hundred/thousand lines of code, it will do something later
> if something changes.
It really isn't magic and really doesn't do a huge amount of work for
you. It's a pattern as much as anything, and most of the benefit comes
from the serialization and version handling of things over RPC.
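For the RPC side, the library ships a serializer you can hand to
oslo.messaging, so registered objects get that name/version envelope
applied automatically on the wire (a sketch; the endpoint and transport
details are placeholders):

    from oslo_config import cfg
    import oslo_messaging
    from oslo_versionedobjects import base as ovo_base


    class EventEndpoint(object):
        def record(self, ctxt, event):
            # 'event' arrives as a real Event object, already
            # deserialized and version-checked by the serializer.
            pass


    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='event', server='host-1')
    server = oslo_messaging.get_rpc_server(
        transport, target, [EventEndpoint()],
        serializer=ovo_base.VersionedObjectSerializer())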
Part of the reason why my previous responses are so vague is that I
really don't care if you use o.vo or not. What I do care about is that
critical openstack projects move (quickly) to supporting rolling
upgrades, the likes of what nova supports now and the goals we're trying
to achieve. If the pattern that nova defined and spun out into the
library helps, then that's good for repeatability. However, the model
nova chose clearly applies mostly to projects spawned from nova or those
that were more or less designed in its image.
I think the goal should be "Monitoring of your cloud doesn't ever have
to be turned off to upgrade it." Presumably you never want to leave your
cloud unmonitored while you take a big upgrade. How that goal is
realized really doesn't matter to me at all, as long as we get there.
--Dan