[openstack-dev] Island: Local Storage Volume Support for Cinder
Rongze Zhu
zrzhit at gmail.com
Thu Nov 8 03:07:56 UTC 2012
On Fri, Nov 2, 2012 at 5:55 PM, Christoph Hellwig <hch at infradead.org> wrote:
> On Thu, Nov 01, 2012 at 12:03:10AM +0800, Rongze Zhu wrote:
> > 1. You are right, it just requires basic POSIX filesystem APIs + Linux
> > xattrs. But you know that naming is very hard, so I temporarily named it
> > 'ext4'. In the future, I will give it a new name.
>
> I'd say just stick to localfs or similar as that is most descriptive.
>
> > 2. I am not familiar with md5 and sha1, but md5 is only used for a
> > checksum, so why is it a bit dangerous? I'm confused :)
>
> MD5 is an old hashing algorithm for which it's fairly easy to find hash
> collisions. I actually misread the code and thought you'd use the hash
> for content addressing, in which case this would be a real issue. If
> it's just a checksum against corruption it's not bad, but I'd still
> recommend using a better hash algorithm these days.
>
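For a corruption-only checksum, a stronger hash is essentially a drop-in change. A minimal sketch (the function name and chunk size are my own, not from the Island code) of checksumming an image file with SHA-256 instead of MD5:

```python
import hashlib

def file_checksum(path, algorithm="sha256", chunk_size=1 << 20):
    """Hypothetical helper: checksum an image file for corruption
    detection, reading in 1 MiB chunks so large volumes are not
    loaded into memory at once."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Switching algorithms later stays cheap if the algorithm name is stored next to the checksum.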
> > 3. If the qcow3 format is coming, I will also need to implement a qcow3
> > Python library.
>
> "Qcow3" is a set of extensions to the qcow2 format. The format is still
> handled by the same driver, just the version field in the header is
> bumped to 3. If I'm reading the python code in your branch correctly
> there currently is no support for that, but no check to reject it
> either.
>
+1, I will check the version field in my code. Thanks for your suggestion.
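The check is small: a qcow image starts with the magic bytes "QFI\xfb" followed by a big-endian 32-bit version field, which is 2 for plain qcow2 and bumped to 3 for the extended format. A sketch of rejecting unsupported versions (function name is mine, not from the Island branch):

```python
import struct

QCOW_MAGIC = b"QFI\xfb"  # first four bytes of any qcow image header

def check_qcow2_version(header: bytes) -> int:
    """Sketch of a version check: accept plain qcow2 (version 2) and
    reject version-3 images until their extensions are handled."""
    magic, version = struct.unpack(">4sI", header[:8])
    if magic != QCOW_MAGIC:
        raise ValueError("not a qcow image")
    if version != 2:
        raise ValueError("unsupported qcow version: %d" % version)
    return version
```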
>
> > I only read snapshot data and metadata from the image; qemu's concurrent
> > I/O will not modify snapshot data and metadata, so it is consistent.
>
> I'd like a bit more documentation and asserts that this is really the
> case.
>
>
I have read the qemu-kvm code, and it is really the case :)
> In addition I have another question, although that might be more towards
> the nova gurus on the list: What guarantees no I/O is pending when
> the create_snapshot method is called? Just like in a driver I'm writing
> right now you call qemu-img snapshot from the create_snapshot method,
> and I'm again worried about qemu accessing it concurrently. The same
> issue also applies to the delete_snapshot method. I've been trying to
> dig into this, but the API documentation for cinder seems severely
> lacking, unfortunately.
>
>
I have read the qemu-kvm code. qemu-kvm has an io-thread for handling
timers, processing I/O, and responding to monitor commands. When qemu-kvm
handles a monitor command, the io-thread calls qemu_mutex_lock_iothread()
to lock the emulated devices, so vcpu threads cannot access them. In a
word, qemu_mutex_lock_iothread()/qemu_mutex_unlock_iothread() protects
against races on the emulated devices between the vcpu threads and the
io-thread. You can get more detail from
http://web.archiveorange.com/archive/v/1XS1vRhfyKIEzDUVAgUP and
http://comments.gmane.org/gmane.comp.emulators.qemu/68855 .
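The mutual exclusion described above can be illustrated with a toy Python analogy. This is not qemu code, just the locking pattern: vcpu threads and the monitor path both take one lock before touching the emulated device state.

```python
import threading

device_lock = threading.Lock()   # plays the role of the iothread mutex
emulated_device = {"writes": 0}  # stand-in for emulated device state

def vcpu_thread(iterations):
    # A vcpu must hold the lock before touching the emulated device.
    for _ in range(iterations):
        with device_lock:
            emulated_device["writes"] += 1

def monitor_command():
    # The io-thread takes the same lock while servicing a monitor
    # command, so vcpu threads cannot race with it.
    with device_lock:
        return dict(emulated_device)
```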
> > 4. I agree with you, it would be best if we could export snapshot data
> > and metadata from the image via a monitor command.
>
> I'd suggest going there, as it means all format specifics are hidden
> in qemu; this would also allow supporting e.g. vhd or vmware images
> without changes to your openstack code.
>
> You already need a qemu patch anyway so I don't think it's a huge issue.
>
> Thinking about qemu I think your initial provisioning in the
> initialize_connection could also be greatly simplified by using the
> copy on read support in recent qemu. This uses the qemu backing device
> support, but updates the overlay image not only on writes but pages in
> data on reads as well and thus gradually moves the data to be local.
>
+1, great! Thanks for your suggestion, I will pay attention to it. :)
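A sketch of what this could look like from the Cinder side, assuming we shell out to qemu-img and pass the overlay to qemu with its copy-on-read drive option (function names are mine; only the command-line building is shown, not the actual execution):

```python
def overlay_create_cmd(base_image, overlay_path):
    """Build the qemu-img command that creates a local qcow2 overlay
    backed by the (possibly remote) base image."""
    return ["qemu-img", "create", "-f", "qcow2",
            "-b", base_image, overlay_path]

def drive_option(overlay_path):
    """Build the qemu -drive option enabling copy-on-read, so every
    read pulls blocks from the backing file into the local overlay,
    gradually moving the data to local storage."""
    return "file=%s,format=qcow2,copy-on-read=on" % overlay_path
```

Writes already go to the overlay via normal backing-file semantics; copy-on-read additionally populates it on the read path, which is exactly the gradual migration described above.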
--
Rongze Zhu - 朱荣泽
Twitter: @metaxen
Blog: http://blog.csdn.net/metaxen
Weibo: http://weibo.com/metaxen
Website: Try Free OpenStack at http://www.stacklab.org