[openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

Christopher Lefelhocz christopher.lefelhoc at RACKSPACE.COM
Tue Apr 29 03:31:57 UTC 2014


Waiting for review.openstack.org to come back up so I can look at the demo code and provide more accurate feedback…

Interesting, and good to hear the code moved easily.  The possibility of having a functional common image transfer service wasn't questioned (IMHO).  What I was stating was that we'll need strong data to show that the common code doesn't degrade download performance across the various driver/deployment combinations.  I do think having a common set of configuration (and driver calls?) for the download options makes a lot of sense (like glance has done for image_service.download).  I'm just a little more cautious when it comes to truly common download code at this point.

Christopher

From: Sheng Bo Hou <sbhou at cn.ibm.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Sunday, April 27, 2014 9:33 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

I have done a little test of the image download and upload. I created an API for image access, containing copyFrom and sendTo. I moved the image download and upload code from XenAPI into the HTTP implementation with some modifications, and the code worked for libvirt as well.
copyFrom downloads the image and returns the image data, so different hypervisors can choose to save it to a file or import it into the datastore; sendTo uploads the image, with the image data passed in as a parameter.
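
To make the shape of that interface concrete, here is a minimal sketch; only the copyFrom/sendTo names come from the test above, while the class names and signatures are my own illustration, not the actual demo code:

import abc
import urllib.request


class ImageTransfer(abc.ABC):
    """One implementation of this interface per transfer protocol."""

    @abc.abstractmethod
    def copy_from(self, source_uri):
        """Download the image and return its bytes; the calling driver
        decides whether to save them to a file or import them into a
        datastore."""

    @abc.abstractmethod
    def send_to(self, dest_uri, image_data):
        """Upload the given image bytes to the destination."""


class HttpImageTransfer(ImageTransfer):
    """HTTP implementation, shareable by libvirt, XenAPI, etc."""

    def copy_from(self, source_uri):
        with urllib.request.urlopen(source_uri) as resp:
            return resp.read()

    def send_to(self, dest_uri, image_data):
        req = urllib.request.Request(dest_uri, data=image_data, method='PUT')
        urllib.request.urlopen(req).close()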

I also investigated how each hypervisor does the image upload and download.

For the download:
libvirt, hyper-v and baremetal use image_service.download to download the image and save it into a file.
vmwareapi uses image_service.download to download the image and import it into the datastore.
XenAPI uses image_service.download to download VHD images.

For the upload:
All of them use image_service.upload to upload the image.

I think we can conclude that it is possible to have a common image transfer library with different implementations for different protocols.
This is a small demo of the library: https://review.openstack.org/#/c/90601/ (Jay, is it close to the library you mentioned?). I just replaced the upload and download part with the HTTP implementation for the image API and it worked fine.
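
Continuing the same hypothetical sketch from above, the swap described here would look roughly like this (the URIs and paths are made up for illustration):

# Hypothetical usage: a driver delegates to the common HTTP implementation
# instead of carrying its own download/upload code.
transfer = HttpImageTransfer()

# libvirt-style: fetch the bytes and save them to a local file.
data = transfer.copy_from('http://glance.example.org/v2/images/abc123/file')
with open('/var/lib/nova/instances/_base/abc123', 'wb') as f:
    f.write(data)

# Upload path: hand the bytes back through the same interface.
transfer.send_to('http://glance.example.org/v2/images/abc123/file', data)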

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM at IBMCN    E-mail: sbhou at cn.ibm.com
Address: 3F Ring Building (环宇大厦), Building 28, Zhongguancun Software Park, 8 Dongbeiwang West Road, Haidian District, Beijing, P.R.C. 100193


Solly Ross <sross at redhat.com>
2014/04/25 01:46
Please respond to: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

Something to be aware of when planning an image transfer library is that individual drivers
might have optimized support for image transfer in certain cases (especially when dealing
with transfers between different formats, like raw to qcow2, etc.).  This builds on what
Christopher was saying -- there's actually a reason why we have code for each driver.  While
having a common image copying library would be nice, I think a better way to do it would be to
have some sort of library composed of building blocks, such that each driver could make use of
common functionality while still tailoring the operation to the quirks of the particular driver.
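
To illustrate the building-block idea, here is a minimal sketch (my own illustration, not an existing library) of common helpers that each driver could compose according to its quirks:

import hashlib


def stream_chunks(read_fn, chunk_size=64 * 1024):
    """Building block: yield image data in fixed-size chunks from any
    callable that reads bytes (an HTTP response, a file, a socket)."""
    while True:
        chunk = read_fn(chunk_size)
        if not chunk:
            break
        yield chunk


class ChecksumTee:
    """Building block: pass chunks through while accumulating a checksum,
    for drivers that must validate the whole image before using it."""

    def __init__(self, chunks, algo='md5'):
        self._chunks = chunks
        self.hash = hashlib.new(algo)

    def __iter__(self):
        for chunk in self._chunks:
            self.hash.update(chunk)
            yield chunk


def download_to_file(read_fn, path, expected_hash=None):
    """One possible composition: stream to a local file, optionally
    validating the checksum afterwards (as some drivers require)."""
    tee = ChecksumTee(stream_chunks(read_fn))
    with open(path, 'wb') as f:
        for chunk in tee:
            f.write(chunk)
    if expected_hash and tee.hash.hexdigest() != expected_hash:
        raise IOError('checksum mismatch for %s' % path)

A driver with format conversion quirks (raw to qcow2, say) would compose the same blocks differently or swap one out, instead of re-implementing the whole pipeline.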

Best Regards,
Solly Ross

----- Original Message -----
From: "Christopher Lefelhocz" <christopher.lefelhoc at RACKSPACE.COM<mailto:christopher.lefelhoc at RACKSPACE.COM>>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
Sent: Thursday, April 24, 2014 11:17:41 AM
Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

Apologies for coming to this discussion late...

On 4/22/14 6:21 PM, "Jay Pipes" <jaypipes at gmail.com> wrote:

>
>Right, a good solution would allow for some flexibility via multiple
>transfer drivers.

+1. In particular, I don't think this discussion should degenerate into
zero-copy vs. pre-caching.  I see both as possible solutions depending on
deployer/environment needs.

>
>> Jay Pipes has suggested we figure out a blueprint for a separate
>> library dedicated to the data (byte) transfer, which may be put in oslo
>> and used by any project in need (hoping Jay can come in :-)). Huiba,
>> Zhiyan, everyone else, do you think a blueprint about the data
>> transfer in oslo can work?
>
>Yes, so I believe the most appropriate solution is to create a library
>-- in oslo or standalone, like taskflow -- that would offer simple
>byte streaming that nova.image could use to expose a neat and clean
>task-based API.
>
>Right now, there is a bunch of random image transfer code spread
>throughout nova.image, and each of the virt drivers seems to have its
>own re-implementation of similar functionality. I propose we clean all
>that up and have nova.image expose an API so that a virt driver could
>do something like this:
>
>from nova.image import api as image_api
>
>...
>
>task = image_api.copy(from_path_or_uri, to_path_or_uri)
># do some other work
>copy_task_result = task.wait()
>
>Within nova.image.api.copy(), we would use the aforementioned transfer
>library to move the image bits from the source to the destination using
>the most appropriate method.
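
One rough way such a waitable task could be built on top of a pluggable transfer driver; this is a sketch only, not actual Nova code, and the transfer_driver object and its transfer() method are assumed:

import threading


class CopyTask:
    """A waitable handle for a background image copy."""

    def __init__(self, fn, *args):
        self._result = None
        self._error = None
        self._thread = threading.Thread(target=self._run, args=(fn,) + args)
        self._thread.start()

    def _run(self, fn, *args):
        try:
            self._result = fn(*args)
        except Exception as exc:  # re-raised when the caller waits
            self._error = exc

    def wait(self):
        self._thread.join()
        if self._error:
            raise self._error
        return self._result


def copy(from_path_or_uri, to_path_or_uri, transfer_driver):
    """Start the transfer in the background and return immediately, so
    the virt driver can do other work before calling task.wait()."""
    return CopyTask(transfer_driver.transfer, from_path_or_uri,
                    to_path_or_uri)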

If I understand correctly, we'll create some common library around this.
It would be good to understand the details a bit better.  I've thought a
bit about this issue.  The one area where I get stuck is providing a
common set of downloads that works effectively across drivers.  Part of
the reason there is a bunch of random image transfer code is historical, but
also because performance was already a problem.  Examples include:
transferring to the compute host first and then copying to dom0, which caused
performance issues; the need in some drivers to download the image completely
in order to validate it before putting it in place; etc.

It may be easy to say we'll push most of this to the dom0, but I know that
for Xen our Python stack is somewhat limited, so that may be an issue.

By the way, we've been working on a proposal for a simpler image pre-caching
system/strategy.  It focuses specifically on the image caching portion of
this discussion.  For those interested, see the nova-spec
https://review.openstack.org/#/c/85792.  We'd like to leverage whatever
optimized image download strategy is available.

Christopher


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


