[Openstack] RFC - dynamically loading virt drivers

Doug Hellmann doug.hellmann at dreamhost.com
Fri May 18 20:49:31 UTC 2012


On Fri, May 18, 2012 at 3:08 PM, Sean Dague <sdague at linux.vnet.ibm.com> wrote:

> On 05/17/2012 06:38 PM, Vishvananda Ishaya wrote:
> >
>
>> So we already have pluggability just by specifying a different
>> compute_driver config option.  I don't like that we defer another level in
>> compute and call get_connection.  IMO the best cleanup would be to remove
>> get_connection altogether and just construct the driver directly based
>> on compute_driver.
>>
>> The main issue with changing this is breaking existing installs.
>>
>> So I guess this would be my strategy:
>>
>> a) remove get_connection from the drivers (and just have it construct the
>> 'connection' class directly)
>> b) modify the global get_connection to construct the drivers for
>> backwards compatibility
>> c) modify the documentation to suggest changing drivers by specifying the
>> full path to the driver instead of connection_type
>> d) rename the connection classes to something reasonable representing
>> drivers (libvirt.driver:LibvirtDriver vs libvirt.connection.LibvirtConnection)
>> e) bonus points if it could be done with a short path for ease of use
>> (compute_driver=libvirt.LibvirtDriver vs
>> compute_driver=nova.virt.libvirt.driver.LibvirtDriver)
>>
>
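Vish's full-path option (c/d above) boils down to importing a class from a
dotted string, similar in spirit to the import helpers nova already carries.
A minimal, self-contained sketch (the function name and the FLAGS usage in
the comment are illustrative, not actual nova API):

```python
import importlib


def import_class(import_str):
    """Import a class given its dotted path, e.g.
    'nova.virt.libvirt.driver.LibvirtDriver'.
    """
    module_name, _, class_name = import_str.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


# Hypothetical usage with a config option holding the full path:
#   driver_cls = import_class(FLAGS.compute_driver)
#   driver = driver_cls()
```

With this in place, pointing compute_driver at any importable class is enough
to swap drivers, with no registry in nova/virt/connection.py required.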
> On point c), is the long term view that .conf options are going to specify
> full class names? It seems like this actually gets kind of confusing to
> admins.
>
>
> What are your thoughts on the following approach, which is related, but a
> little different?
>
> a) have compute_driver take a module name under nova.virt. which is loaded
> with some standard construction method that all drivers would implement in
> their __init__.py. Match all existing module names to the connection_type
> names currently in use. Basically just jump to e), but also make all drivers
> conform to a factory interface so "libvirt" is actually enough to get you
> nova.virt.libvirt.connect()
>
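The short-name idea in a) is a one-liner on top of module import, provided
every driver package agrees on the factory name. A sketch, assuming a
get_driver() factory in each driver's __init__.py (both the base package and
factory name here are hypothetical, parameterized so the loader is testable):

```python
import importlib


def load_driver(name, base_package='nova.virt', factory='get_driver'):
    """Resolve a short driver name like 'libvirt' to
    base_package.name and call its agreed-upon factory function.
    """
    module = importlib.import_module('%s.%s' % (base_package, name))
    return getattr(module, factory)()
```

A driver that doesn't exist simply fails with ImportError, which is a clearer
operator-facing error than the current connection.py dispatch.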

Andrew Bogott is working on a common plugin architecture. Under that system,
plugins will have well-known but short names and will be loaded using
setuptools entry points (allowing them to be named independently of their
code/filesystem layout, and to be packaged and installed separately from
nova). Could the drivers be loaded from these plugins?
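To make the entry-point approach concrete, here is roughly what the lookup
side would look like with pkg_resources (the setuptools API). The group name
'nova.virt.drivers' is hypothetical, just a placeholder for whatever the
plugin architecture settles on:

```python
import pkg_resources


def load_driver_from_entry_point(name, group='nova.virt.drivers'):
    """Look up a driver class by its advertised entry-point name,
    independent of where the implementing package lives on disk.
    """
    for ep in pkg_resources.iter_entry_points(group=group, name=name):
        return ep.load()
    raise RuntimeError('no driver named %r in group %r' % (name, group))


# An out-of-tree driver would advertise itself in its own setup.py:
#   entry_points={'nova.virt.drivers':
#                 ['mydriver = nova_mydriver.driver:MyDriver']}
```

This keeps the operator-visible name short ("mydriver") while letting the
package be installed entirely outside the nova tree.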


>
> b) if compute_driver is not specified, use connection_type, but spit out a
> deprecation warning that the option is going away. (Removed fully in G).
> Because compute_drivers map to existing connection_types this just works
> with only a little refactoring in the drivers.
>
> c) remove nova/virt/connection.py
>
> The end result is that every driver is a self contained subdir in
> nova/virt/DRIVERNAME/.
>
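The deprecation shim in b) is small; a sketch of the fallback logic (function
and option names illustrative, not actual nova code):

```python
import warnings


def resolve_driver_name(compute_driver, connection_type):
    """Prefer the new compute_driver option; fall back to the legacy
    connection_type value with a deprecation warning.
    """
    if compute_driver:
        return compute_driver
    warnings.warn('connection_type is deprecated and will be removed in '
                  'the G release; set compute_driver instead',
                  DeprecationWarning)
    return connection_type
```

Because each legacy connection_type maps one-to-one onto an existing driver
module, old configs keep working unchanged until the option is dropped.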
>
>  * one test fails for Fake in test_virt_drivers, but only when it's run as
>> part of the full unit test suite, not when run on its own. It looks like
>> it has to do with FakeConnection.instance() caching, which actually
>> confuses me a bit, as I would have assumed one unit test file couldn't
>> affect another (i.e. they start with a clean env each time).
>>>
>>
>> Generally breakage like this is due to some global state that is not
>> cleaned up, so if FakeConnection is caching globally, then this could
>> happen.
>>
>
> It is keeping global state, I'll look at fixing that independently.
>
>
>        -Sean
>
> --
> Sean Dague
> IBM Linux Technology Center
> email: sdague at linux.vnet.ibm.com
> alt-email: sldague at us.ibm.com
>
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack at lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>


More information about the Openstack mailing list