[openstack-dev] RFC: Synchronizing hypervisor <-> nova state with event notifications

Day, Phil philip.day at hp.com
Fri Jan 18 13:04:59 UTC 2013


Wouldn't it be simpler in the first instance to just move the current polling into the nova-periodic service proposed by Michael (https://review.openstack.org/#/c/19539/ ) and shorten the polling interval?

I know it's not as elegant, but if this is a big problem it could serve as a quick fix while the callback / threading work gets sorted out. The original bug wasn't directly related to the amount of resources consumed by the
polling, just the impact it had due to the threading model.

Phil


-----Original Message-----
From: Mark McLoughlin [mailto:markmc at redhat.com] 
Sent: 18 January 2013 07:09
To: Daniel P. Berrange
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] RFC: Synchronizing hypervisor <-> nova state with event notifications

On Tue, 2013-01-15 at 16:41 +0000, Daniel P. Berrange wrote:
> On Tue, Jan 15, 2013 at 04:18:55PM +0000, Mark McLoughlin wrote:
> > On Tue, 2013-01-15 at 13:58 +0000, Daniel P. Berrange wrote:
> > > On Tue, Jan 15, 2013 at 01:45:53PM +0000, Mark McLoughlin wrote:
> > > > On Tue, 2013-01-15 at 13:24 +0000, Daniel P. Berrange wrote:
> > > > > On Tue, Jan 15, 2013 at 12:04:31PM +0000, Mark McLoughlin wrote:
> > > > > > On Tue, 2013-01-15 at 10:52 +0000, Daniel P. Berrange wrote:
> > > > > > > The question is how to structure the processing of events. 
> > > > > > > I don't think that the hypervisor drivers should be directly processing events.
> > > > > > > Instead I believe they need to pass along the event 
> > > > > > > notifications to the manager.py class. So my current 
> > > > > > > thought is to introduce a new API to nova.virt.api
> > > > > > > 
> > > > > > >   register_event_notifier(self, callback)
> > > > > > > 
> > > > > > > and have nova.compute.manager provide a callback impl to 
> > > > > > > receive the events. Before I start coding on this, I want 
> > > > > > > some kind of confirmation that this is an acceptable 
> > > > > > > direction to go in, since there is no current callback 
> > > > > > > based interactions between nova.compute.manager & 
> > > > > > > nova.virt.api
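A minimal, stdlib-only sketch of what such a registration API might look like (the class and method names here are hypothetical stand-ins, not the actual nova.virt interface):

```python
class FakeDriver:
    """Stand-in for a hypervisor driver; the real nova.virt API may differ."""

    def __init__(self):
        self._callback = None

    def register_event_notifier(self, callback):
        # nova.compute.manager would call this once at startup to
        # receive lifecycle events from the hypervisor driver
        self._callback = callback

    def emit_event(self, event):
        # called by the driver when the hypervisor reports a change
        if self._callback is not None:
            self._callback(event)


# usage: the compute manager registers a handler, the driver feeds it
received = []
driver = FakeDriver()
driver.register_event_notifier(received.append)
driver.emit_event({"instance": "i-1", "event": "STOPPED"})
```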
> > > > > > 
> > > > > > That all sounds good to me, but can you go into more details 
> > > > > > about how the event dispatching would work?
> > > > > > 
> > > > > > We don't have a mainloop to integrate this with, so 
> > > > > > presumably you're talking about spawning a greenthread which 
> > > > > > would poll for events (using
> > > > > > virEventRunDefaultImpl()?) and then invoke the callbacks? 
> > > > > > Does this greenthread need its own libvirt connection?
> > > > > > 
> > > > > > Does this new thread introduce any new concurrency issues? I 
> > > > > > guess not since processing each RPC message and running 
> > > > > > periodic timers all happen in different threads, so this wouldn't be much different.
> > > > > 
> > > > > Yes, thanks to eventlet awfulness, we'd need to have a 
> > > > > greenthread running the libvirt event loop.  The event 
> > > > > notification callbacks from libvirt would thus obviously 
> > > > > execute in the greenthread. What I'm not sure about is how to 
> > > > > switch control back to the main eventlet thread before we call out into the manager.py's callback.
> > > > >
> > > > > I imagine a queue of events in the nova.virt.libvirt.driver 
> > > > > object which is fed back the libvirt greenthread callbacks. 
> > > > > Something in the main eventlet thread would then have to 
> > > > > process the queue to dispatch to the manager. With a regular 
> > > > > mainloop the way you'd do this is to schedule a timer to fire 
> > > > > after zero seconds. I'm not familiar enough with eventlet yet to know how you'd do this.
> > > > 
> > > > There isn't really a "main eventlet thread" - the main 
> > > > greenthread is just sitting there waiting for the other greenthreads to finish.
> > > > 
> > > > The greenthread for consuming RPC messages spawns off a 
> > > > greenthread for each message and this is the normal entry point into manager.py.
> > > > 
> > > > That's why I'm thinking just invoking the callback directly from 
> > > > your greenthread would work.
> > > > 
> > > > (I'm explicitly saying greenthread over and over because it's so 
> > > > easy to forget these aren't native threads :)
> > > 
> > > Of course, when I wrote 'greenthread' in my description above, I 
> > > meant native thread. You can't invoke libvirt API calls 
> > > directly from greenthreads because native code blocks the whole 
> > > interpreter unless you use native threads.
> > >
> > > So the libvirt callback will be running in a native thread and 
> > > needs to pass control back to a greenthread for processing. I 
> > > guess I can have a dedicated greenthread that processes a queue of 
> > > events from the native thread.
> > 
> > Hmm, not sure I buy that.
> > 
> > If this was a single-threaded app with a mainloop, I think I'd be 
> > happy to add libvirt's "check for events" timer to the main thread. 
> > Do you really see the timer blocking for that long?
> 
> To receive events from libvirt you need to provide an event loop impl 
> to libvirt, in the form of running this code:
> 
>   libvirt.virEventRegisterDefaultImpl()
>   while True:
>      libvirt.virEventRunDefaultImpl()
> 
> to allow libvirt to process incoming events from libvirtd. While those 
> API calls above are invoked from a greenthread, since we use a native 
> threadpool for all libvirt APIs, this means they're going to actually 
> switch to a native thread to run the C code. This implies that the callback 
> from libvirt is going to be invoked in the context of a native thread.
> We thus need to get out of native thread context & into a greenthread 
> context before we can pass the event back up to the 
> nova.compute.manager class' callback.

FWIW, that makes sense to me - I'd missed the fact that we use a native threadpool for libvirt API calls.

Cheers,
Mark.
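The queue-based handoff discussed above could be sketched as follows, with stdlib threads standing in for the native libvirt event-loop thread and the eventlet dispatcher (a simulation of the pattern, not the real nova/eventlet code; all names are illustrative):

```python
import queue
import threading

# events pushed by the (simulated) libvirt callback, which runs in a
# native thread; a dedicated consumer drains them on the other side
event_queue = queue.Queue()


def native_event_loop(events):
    # stands in for the native thread running virEventRunDefaultImpl();
    # libvirt's C-level callbacks would put() onto the queue from here
    for ev in events:
        event_queue.put(ev)
    event_queue.put(None)  # sentinel: event loop finished


def dispatch(callback):
    # stands in for the dedicated greenthread that hands events over
    # to nova.compute.manager's callback in green-thread context
    while True:
        ev = event_queue.get()
        if ev is None:
            break
        callback(ev)


handled = []
producer = threading.Thread(
    target=native_event_loop,
    args=([("i-1", "STARTED"), ("i-1", "STOPPED")],))
producer.start()
dispatch(handled.append)
producer.join()
```

The queue is the only shared state between the two sides, which is what lets the libvirt callback stay in native-thread context while the manager's callback runs where eventlet expects it.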


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


