[openstack-dev] [nova] Plan to consolidate FS-style libvirt volume drivers under a common base class
Matt Riedemann
mriedem at linux.vnet.ibm.com
Wed Jun 17 16:37:04 UTC 2015
On 6/17/2015 4:46 AM, Daniel P. Berrange wrote:
> On Tue, Jun 16, 2015 at 04:21:16PM -0500, Matt Riedemann wrote:
>> The NFS, GlusterFS, SMBFS, and Quobyte libvirt volume drivers are all very
>> similar.
>>
>> I want to extract a common base class that abstracts some of the common code
>> and then let the sub-classes provide overrides where necessary.
>>
>> As part of this, I'm wondering if we could just have a single
>> 'mount_point_base' config option rather than one per backend like we have
>> today:
>>
>> nfs_mount_point_base
>> glusterfs_mount_point_base
>> smbfs_mount_point_base
>> quobyte_mount_point_base
>>
>> With libvirt you can only have one of these drivers configured per compute
>> host, right? So it seems to make sense that we could have one option used
>> for all 4 different driver implementations and reduce some of the config
>> option noise.
>
> Doesn't cinder support using multiple different backends? I was always
> under the impression that it did, and thus Nova had to be capable of using
> any of its volume drivers concurrently.
Yeah, I forgot about that; it was pointed out elsewhere in this thread,
so I'm going to drop the common mount_point_base option idea.
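
For reference, keeping the separate options means the existing
per-backend oslo.config declarations stay roughly as they are today.
A simplified sketch (not a verbatim copy of the option definitions in
the tree):

    from oslo_config import cfg

    volume_opts = [
        cfg.StrOpt('nfs_mount_point_base',
                   default='$state_path/mnt',
                   help='Directory where the NFS volume is mounted on '
                        'the compute node'),
        cfg.StrOpt('glusterfs_mount_point_base',
                   default='$state_path/mnt',
                   help='Directory where the GlusterFS volume is '
                        'mounted on the compute node'),
        # smbfs_mount_point_base and quobyte_mount_point_base follow
        # the same pattern, one option per backend.
    ]

    CONF = cfg.CONF
    CONF.register_opts(volume_opts, 'libvirt')
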
>
>> Are there any concerns with this?
>
> Not a concern, but since we removed the 'volume_drivers' config parameter,
> we're now free to re-arrange the code too. I'd like us to create a subdir
> nova/virt/libvirt/volume and create one file in that subdir per driver
> that we have.
Sure, I'll do that as part of this work; the remotefs and quobyte
modules can probably also live in there. We could also arguably move
the nova.virt.libvirt.lvm and nova.virt.libvirt.dmcrypt modules into
nova/virt/libvirt/volume as well.
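
To sketch what I have in mind for the common base class in that subdir
(illustrative names only, not the final code):

    # nova/virt/libvirt/volume/fs.py -- hypothetical common base
    import hashlib
    import os


    class LibvirtBaseFileSystemVolumeDriver(object):
        """Shared mount-point handling for the NFS, GlusterFS,
        SMBFS and Quobyte drivers.
        """

        def _get_mount_point_base(self):
            # Each driver returns its own *_mount_point_base option,
            # since the single shared option idea was dropped.
            raise NotImplementedError()

        def _get_mount_path(self, connection_info):
            # Common convention: <mount_point_base>/<hashed export>.
            export = connection_info['data']['export']
            digest = hashlib.sha1(export.encode('utf-8')).hexdigest()
            return os.path.join(self._get_mount_point_base(), digest)


    # nova/virt/libvirt/volume/nfs.py -- one file per driver; only
    # the backend-specific bits are overridden.
    class LibvirtNFSVolumeDriver(LibvirtBaseFileSystemVolumeDriver):

        def _get_mount_point_base(self):
            # Real code would return CONF.libvirt.nfs_mount_point_base.
            return '/var/lib/nova/mnt'
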
>
>> Is a blueprint needed for this refactor?
>
> Not from my POV. We've just done a huge libvirt driver refactor by adding
> the Guest.py module without any blueprint.
>
> Regards,
> Daniel
>
--
Thanks,
Matt Riedemann