[openstack-dev] [nova][ceilometer] model for ceilo/nova interaction going forward
Eoghan Glynn
eglynn at redhat.com
Thu Nov 22 21:32:26 UTC 2012
> > My read was that the consensus cohered in the follow-up discussion
> > during the nova IRC meeting[1].
>
> Ah, ok - we really should get into the habit of posting summaries of
> discussions like that to the relevant thread.
Yep, you're right, I dropped the ball on that.
> Take the DiskIOPollster stuff as an example:
>
> conn = driver.load_compute_driver()
> disks = conn.get_disks(instance_name)
> for disk in disks:
>     stats = conn.block_stats(instance_name, disk)
>
> The Nova code ceilometer is re-using is the code to connect to
> libvirt, code to call conn.lookupByName() and domain.XMLDesc() to
> query libvirt for the XML describing a VM, code to use xpath to
> extract the disk descriptions from the XML and then code to call
> domain.blockStats() to get stats about the disk.
Yep.
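For concreteness, here's a rough sketch of just the middle step you
describe - extracting the disk descriptions from the domain XML that
domain.XMLDesc() hands back (the XML below is a hand-written stand-in,
not real nova output, and the helper name is mine, not nova's):

```python
# Sketch of the XML-parsing step: given the domain XML that libvirt's
# domain.XMLDesc() returns, pull out the disk target device names that
# would then be passed to domain.blockStats().
import xml.etree.ElementTree as ET

# Hand-written stand-in for real libvirt domain XML.
DOMAIN_XML = """
<domain type='kvm'>
  <name>instance-00000001</name>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/nova/instances/instance-00000001/disk'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>
      <target dev='vdb' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def get_disks(domain_xml):
    """Return the target device names of all disks in the domain XML."""
    root = ET.fromstring(domain_xml)
    return [target.get('dev')
            for target in root.findall("devices/disk/target")]

print(get_disks(DOMAIN_XML))  # ['vda', 'vdb']
```

Not much code, which I guess is your point - the libvirt-specific part
ceilo would be carrying is pretty small.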
> I really don't see much issue with ceilometer just doing those few
> things directly. The only private implementation assumption you'd be
> making about the Nova libvirt driver is that the libvirt VM name is
> the same as the instance name.
Well, I was thinking we'd also need ComputeDriver.list_instances()
at least (previously we pulled the instances for the compute node
directly from the nova DB; now we query the nova-api, but neither
approach is really optimal).
But just to confirm I understand the suggestion: by *direct* usage,
do you mean simply calling the currently installed nova virt code,
but at a level that is unlikely to change (say, by pushing some
internals of the driver implementation up into the driver API)?
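To make the "pushing internals into the driver API" idea concrete,
here's a purely illustrative sketch of what promoting get_disks() and
block_stats() into the ComputeDriver abstraction might look like - the
class and method names mirror the discussion but this is not actual
nova code, and the fake driver is just a stand-in:

```python
# Hypothetical sketch: get_disks()/block_stats() promoted into the
# virt driver abstraction, so ceilo's polling loop is hypervisor-neutral.

class ComputeDriver(object):
    """Hypothetical virt driver base class (not real nova code)."""

    def list_instances(self):
        raise NotImplementedError()

    def get_disks(self, instance_name):
        """Return the disk device names attached to an instance."""
        raise NotImplementedError()

    def block_stats(self, instance_name, disk):
        """Return (rd_req, rd_bytes, wr_req, wr_bytes, errs) for a disk."""
        raise NotImplementedError()


class FakeDriver(ComputeDriver):
    """Stand-in implementation, e.g. backing a hypervisor-neutral test."""

    def __init__(self, instances):
        # {instance_name: {disk_name: stats_tuple}}
        self._instances = instances

    def list_instances(self):
        return list(self._instances)

    def get_disks(self, instance_name):
        return list(self._instances[instance_name])

    def block_stats(self, instance_name, disk):
        return self._instances[instance_name][disk]


# The polling loop from the quoted snippet, driven through the
# abstraction rather than through libvirt directly:
conn = FakeDriver({'instance-00000001': {'vda': (10, 4096, 5, 2048, 0)}})
for name in conn.list_instances():
    for disk in conn.get_disks(name):
        stats = conn.block_stats(name, disk)
```

Drivers that can't supply the data would just leave the base-class
NotImplementedError in place, which ties in with the partial-support
point below.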
> You'd be right to be concerned that you'd need to implement that
> same code for other hypervisors, but here's the thing - the
> get_disks() and block_stats() methods in the libvirt virt driver
> aren't part of the virt driver abstraction. Other drivers neither
> implement them, nor do we have common data structures for the values
> they'd return. In other words, the abstraction layer with multiple
> implementations doesn't yet exist.
Yeah, looking at the inconsistencies in the level of support for the
ComputeDriver.get_diagnostics() function, I was figuring we'd
potentially need to live with the API used by ceilo not being
universally supported across all hypervisors initially (though that
would still be less limiting than a direct libvirt dependency).
> You know what's weird? The get_disks() and block_stats() methods
> appear unused in Nova.
Yeah, I think pretty much the same logic is re-implemented in
LibvirtDriver.get_diagnostics.get_io_devices() in the libvirt case
at least - not good!
> > But in any case, one concern with nova-compute emitting these data
> > as either notifications or else making direct calls into a ceilo-
> > provided library was that such a loaded service is likely to
> > run into timeliness issues.
>
> If the data is purely for metering, the timeliness issue isn't a
> concern right?
Less of an issue certainly for pure metering (longer period, less
sensitive to irregular cadence).
> > There's been a fair amount of fuzziness around the distinction
> > between instrumentation and monitoring, but this falls into the
> latter camp for me. It's not so much about the internal dynamics
> > of nova, as the user-visible behavior (e.g. is my instance running
> > hot or am I pushing loads of data through the network, as opposed
> > how long did that API call to update the instance take to service
> > end-to-end or how many connections are in that pool).
>
> Metering vs monitoring/instrumentation, not monitoring vs
> instrumentation :)
I kinda see the split more as metering/monitoring vs instrumentation.
Well, actually more like metering/user-oriented-monitoring vs
cloud-operator-oriented-monitoring/instrumentation.
Currently ceilo is aiming to at least address both metering and
user-oriented-monitoring, and possibly more in time.
> If we were just designing a solution for metering, would we go for
> the notifications option?
Probably, but the scope of ceilo has widened a bit from pure
metering.
Cheers,
Eoghan