[openstack-dev] [glance] [ceilometer] Periodic Auditing In Glance

Doug Hellmann doug.hellmann at dreamhost.com
Wed Aug 21 14:55:05 UTC 2013


On Mon, Aug 19, 2013 at 3:06 PM, Sandy Walsh <sandy.walsh at rackspace.com> wrote:

>
>
> On 08/16/2013 04:58 PM, Doug Hellmann wrote:
> > The notification messages don't translate 1:1 to database records. Even
> > if the notification payload includes multiple resources, we will store
> > those as multiple individual records so we can query against them. So it
> > seems like sending individual notifications would let us distribute the
> > load of processing the notifications across several collector instances,
> > and won't have any effect on the data storage requirements.
>
>
> Well, they would. Each .exists would result in an Event record being
> stored with the underlying raw JSON (pretty big) at the very least.
>

Ah, you're right, I forgot about the new event table. I was just thinking
of the samples.

Doug


>
> Like Alex said, if each customer keeps a daily, week-long rolling
> backup, then over 100k instances that's 700k .exists records
> (7 retained snapshots per instance).
>
> We have some ideas we're kicking around internally about alternative
> approaches, but right now I think we have to design for 1 glance .exists
> per image (worst case) or 1 glance event per tenant (better case) and
> hope that deploying per-cell will help spread the load ... but it'll
> suck for making aggregated reports per region.
>
> Phil, like Doug said, I don't think switching from per-instance to
> per-tenant or anything else will really affect the end result. The
> event->meter mapping will have to break it down anyway.
>
>
> -S
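
(A minimal sketch of the event->meter breakdown Sandy refers to, assuming
the batched payload format Andrew proposes further down in the thread; the
function and field names are illustrative, not ceilometer's actual
notification API.)

    def exists_event_to_samples(event):
        """Break one batched image.exists event into per-image samples."""
        samples = []
        for image in event['payload']:
            samples.append({
                'counter_name': 'image.size',
                'resource_id': image['id'],
                'project_id': image['owner'],
                'counter_volume': image['size'],
                'timestamp': event['timestamp'],
            })
        return samples
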
>
> >
> > Doug
> >
> >
> > On Thu, Aug 15, 2013 at 11:58 AM, Alex Meade <alex.meade at rackspace.com> wrote:
> >
> >     I don't know any actual numbers, but my concern would be that
> >     images tend to stick around longer than instances. For example, if
> >     someone takes daily snapshots of their server and keeps them around
> >     for a long time, the number of exists events would go up and up.
> >
> >     Just a thought, could be a valid avenue of concern.
> >
> >     -Alex
> >
> >     -----Original Message-----
> >     From: "Doug Hellmann" <doug.hellmann at dreamhost.com>
> >     Sent: Thursday, August 15, 2013 11:17am
> >     To: "OpenStack Development Mailing List"
> >     <openstack-dev at lists.openstack.org>
> >     Subject: Re: [openstack-dev] [glance] [ceilometer] Periodic Auditing
> >     In Glance
> >
> >     Nova generates a single exists event for each instance, and that
> >     doesn't cause a lot of trouble as far as I've been able to see.
> >
> >     What is the relative number of images compared to instances in a
> >     "typical" cloud?
> >
> >     Doug
> >
> >
> >     On Tue, Aug 13, 2013 at 7:20 PM, Neal, Phil <phil.neal at hp.com> wrote:
> >
> >     > I'm a little concerned that a batch payload won't align with
> >     > "exists" events generated from other services. To my recollection,
> >     > Cinder, Trove and Neutron all emit exists events on a per-instance
> >     > basis... a consumer would have to figure out a way to handle/unpack
> >     > these separately if they needed a granular feed. Not the end of the
> >     > world, I suppose, but a bit inconsistent.
> >     >
> >     > And a minor quibble: batching would also make it a much bigger
> >     > issue if a consumer missed a notification... though I guess you
> >     > could counteract that by increasing the frequency (but wouldn't
> >     > that defeat the purpose?)
> >     >
> >     > >
> >     > >
> >     > >
> >     > > On 08/13/2013 04:35 PM, Andrew Melton wrote:
> >     > > >> I'm just concerned with the type of notification you'd send.
> >     > > >> It has to be fine-grained enough that we don't lose too much
> >     > > >> information.
> >     > > >
> >     > > > It's a tough situation: sending out an image.exists for each
> >     > > > image with the same payload as, say, image.upload would likely
> >     > > > create TONS of traffic. Personally, I'm thinking about a batch
> >     > > > payload with a bare minimum of the following values:
> >     > > >
> >     > > > 'payload': [{'id': 'uuid1', 'owner': 'tenant1',
> >     > > >              'created_at': 'some_date', 'size': 100000000},
> >     > > >             {'id': 'uuid2', 'owner': 'tenant2',
> >     > > >              'created_at': 'some_date',
> >     > > >              'deleted_at': 'some_other_date',
> >     > > >              'size': 200000000}]
> >     > > >
> >     > > > That way the audit job/task could be configured to emit in
> >     > > > batches, and a deployer could tweak the settings so as not to
> >     > > > emit too many messages. I definitely welcome other ideas as
> >     > > > well.
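
(A minimal sketch of the batched emitter Andrew describes, assuming an
oslo-style notifier whose info() takes a context, an event type, and a
payload; batch_size stands in for the deployer-tunable setting mentioned
above, and all names are illustrative.)

    def emit_exists_batches(notifier, context, images, batch_size=500):
        """Emit image.exists notifications in deployer-tunable batches."""
        for start in range(0, len(images), batch_size):
            batch = images[start:start + batch_size]
            payload = [{'id': img['id'],
                        'owner': img['owner'],
                        'created_at': img['created_at'],
                        'deleted_at': img.get('deleted_at'),
                        'size': img['size']}
                       for img in batch]
            notifier.info(context, 'image.exists', payload)
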
> >     > >
> >     > > Would it be better to group by tenant vs. image?
> >     > >
> >     > > One .exists per tenant that contains all the images owned by
> >     > > that tenant?
> >     > >
> >     > > -S
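
(A minimal sketch of the per-tenant grouping Sandy suggests, assuming the
same illustrative image fields as above; itertools.groupby needs the input
sorted by owner first.)

    from itertools import groupby
    from operator import itemgetter

    def exists_payloads_by_tenant(images):
        """Yield one image.exists payload per tenant."""
        images = sorted(images, key=itemgetter('owner'))
        for owner, group in groupby(images, key=itemgetter('owner')):
            yield {'owner': owner,
                   'images': [{'id': img['id'],
                               'created_at': img['created_at'],
                               'size': img['size']}
                              for img in group]}
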
> >     > >
> >     > >
> >     > > > Thanks,
> >     > > > Andrew Melton
> >     > > >
> >     > > >
> >     > > > On Tue, Aug 13, 2013 at 4:27 AM, Julien Danjou
> >     > > > <julien at danjou.info> wrote:
> >     > > >
> >     > > >     On Mon, Aug 12 2013, Andrew Melton wrote:
> >     > > >
> >     > > >     > So, my question to the Ceilometer community is this:
> >     > > >     > does this sound like something Ceilometer would find
> >     > > >     > value in and use? If so, would this be something we
> >     > > >     > would want most deployers turning on?
> >     > > >
> >     > > >     Yes. I think we would definitely be happy to have the
> >     > > >     ability to drop our pollster at some point.
> >     > > >     I'm just concerned with the type of notification you'd
> >     > > >     send. It has to be fine-grained enough that we don't lose
> >     > > >     too much information.
> >     > > >
> >     > > >     --
> >     > > >     Julien Danjou
> >     > > >     // Free Software hacker / freelance consultant
> >     > > >     // http://julien.danjou.info

