[cinder] last call for ussuri spec comments

Fang, Liang A liang.a.fang at intel.com
Sat Feb 1 10:10:39 UTC 2020


Hi Sean

Thanks for your comment.
Currently only rbd and sheepdog volumes are attached directly by qemu; all others (including NVMe-oF) are attached to the host OS first. See:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L169
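For illustration, the split looks roughly like the sketch below (a minimal, hypothetical helper; the protocol names mirror the libvirt volume driver list linked above, but the helper itself is not real nova code):

    # Hypothetical sketch: which connection types leave a block device on the host.
    QEMU_NATIVE = {'rbd', 'sheepdog'}          # qemu opens the network storage itself
    HOST_ATTACHED = {'iscsi', 'fibre_channel'}  # os-brick attaches a host block device

    def can_use_host_side_cache(protocol):
        # A host-side cache needs a host block device to wrap, so only
        # host-attached protocols qualify.
        return protocol.lower() in HOST_ATTACHED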

rbd is popular today, so it is a pity that rbd volumes would not be supported by the volume local cache.
The advantage of attaching directly via qemu is security, right? The volume data is never exposed to the host OS.

But rbd latency is not good (more than 1 millisecond). On the other hand, if an Optane SSD (~10us) is used as the volume local cache, random-read latency drops to roughly 50us once the cache hit rate reaches ~95%.
If persistent memory (latency around 0.x us) is used as the cache instead, latency should be far lower still (I have no data on hand; I will measure after Chinese New Year).
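The ~50us figure follows from a simple weighted-average estimate (a back-of-the-envelope sketch using the 10us and 1ms numbers quoted above, not measurements from the spec):

    # Expected read latency with a local cache, as a weighted average.
    def expected_latency_us(hit_rate, cache_us, backend_us):
        return hit_rate * cache_us + (1.0 - hit_rate) * backend_us

    print(expected_latency_us(0.95, 10.0, 1000.0))  # ~59.5 us, roughly the ~50us quoted above
    print(expected_latency_us(0.99, 10.0, 1000.0))  # ~19.9 us at a higher hit rate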
I believe such a performance boost would attract operators. It is not impossible that some operators would switch rbd back to host-attached mode to get it. At the very least we can have the infrastructure ready for them.

Regards
Liang

-----Original Message-----
From: Sean Mooney <smooney at redhat.com> 
Sent: Friday, January 31, 2020 8:19 AM
To: Brian Rosmaita <rosmaita.fossdev at gmail.com>; openstack-discuss at lists.openstack.org
Subject: Re: [cinder] last call for ussuri spec comments

On Thu, 2020-01-30 at 17:18 -0500, Brian Rosmaita wrote:
> On 1/30/20 11:27 AM, Brian Rosmaita wrote:
> > The following specs have two +2s.  I believe that all expressed 
> > concerns have been addressed.  I intend to merge them at 22:00 UTC 
> > today unless a serious issue is raised before then.
> > 
> > https://review.opendev.org/#/c/684556/ - support volume-local-cache
> 
> Some concerns were raised with the above patch.  Liang, please address 
> them.  Don't worry if you can't get them done before the Friday 
> deadline, I'm willing to give you a spec freeze exception.  I think 
> the concerns raised will be useful in making clarifications to the 
> spec, but also in pointing out things that reviewers should keep in 
> mind when reviewing the implementation.  They also point out some 
> testing directions that will be useful in validating the feature.

The one thing I want to raise about this spec is that the design direction on the nova side is problematic. When reviewing https://review.opendev.org/#/c/689070/ it was noted that the nova libvirt driver has been moving away from mounting cinder volumes on the host and then passing the block device to qemu, in favor of using qemu's native ability to connect directly to remote storage.

Looking at the latest version of the nova spec
https://review.opendev.org/#/c/689070/8/specs/ussuri/approved/support-volume-local-cache.rst@49
I note that this feature will only be capable of caching volumes that have already been attached to the host.

While keeping the management of the volumes in os-brick means that the overall impact on nova is minimal, this feature would no longer work if we moved to using qemu's native iSCSI support, and it will not work with NVMe-oF volumes or ceph, so I am not sure that the nova side will be approved.

When I first reviewed the nova spec I mentioned that I believed local caching could be a useful feature, but this really feels like a capability that should be developed in qemu, specifically the ability to provide a second device as a cache for any disk device assigned to an instance. That would allow local caching to be done regardless of the storage backend used.
qemu cannot do that today, so I understand that this approach is likely the only workable solution in the short to medium term, but I am concerned that the cinder side will be completed in ussuri and the nova side will not.
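For context, the host-side approach under discussion looks roughly like the sketch below (the names are illustrative assumptions, not the actual os-brick or spec API; the point is that the cache can only wrap a block device that already exists on the host):

    # Hypothetical flow: os-brick attaches the volume to the host, a caching
    # engine (e.g. open-cas or dm-cache) pairs it with a fast local device,
    # and the resulting cached device is handed to qemu.
    device_info = connector.connect_volume(connection_properties)   # e.g. {'path': '/dev/sdb'}
    cached_path = cache_engine.attach_volume(device_info['path'],
                                             fast_device='/dev/nvme0n1')  # e.g. '/dev/cas1-1'
    guest_disk = cached_path  # qemu sees the cached device instead of the raw attachment
    # With rbd/sheepdog there is no host block device at all, so there is
    # nothing for the cache engine to wrap -- which is the limitation above.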

> 
> With respect to the other spec:
> 
> > https://review.opendev.org/#/c/700977 - add backup id to volume 
> > metadata
> 
> Rajat had a few vocabulary clarifications that can be addressed in a 
> follow-up patch.  Conceptually, this spec is fine, so I went ahead and 
> merged it.
> 
> > 
> > cheers,
> > brian
> 
> 
> 



