<font size=2 face="sans-serif">I support the idea Huiba has proposed,
and I have also been thinking about how to optimize large data transfers (for
example, 100 GB in a short time).</font>
<br><font size=2 face="sans-serif">I have registered two blueprints in nova-specs:
one for an image upload plug-in that uploads images to Glance (</font><a href=https://review.openstack.org/#/c/84671/><font size=2 color=blue face="sans-serif">https://review.openstack.org/#/c/84671/</font></a><font size=2 face="sans-serif">),
the other for a data transfer plug-in (</font><a href=https://review.openstack.org/#/c/87207/><font size=2 color=blue face="sans-serif">https://review.openstack.org/#/c/87207/</font></a><font size=2 face="sans-serif">)
for data migration among Nova nodes. Besides HTTP, I would like to see other
transfer protocols, such as FTP, BitTorrent, and other P2P protocols, implemented
for data transfer in OpenStack.</font>
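<br><font size=2 face="sans-serif">As a rough illustration of the plug-in idea
(the class names and interface below are hypothetical sketches, not the actual
blueprint APIs), one transfer plug-in per protocol could be registered and
selected by URI scheme:</font>

```python
# Hypothetical sketch of a transfer plug-in registry: one plug-in per
# protocol (HTTP, FTP, BitTorrent, ...), dispatched on the URI scheme.
# All names here are illustrative, not real Nova/Glance interfaces.
from abc import ABC, abstractmethod
from urllib.parse import urlparse


class TransferPlugin(ABC):
    """Base class for a transfer plug-in handling one protocol."""

    scheme = None  # URI scheme this plug-in handles, e.g. "http"

    @abstractmethod
    def download(self, url, dest_path):
        """Fetch the bytes at url into dest_path."""


class HttpTransfer(TransferPlugin):
    scheme = "http"

    def download(self, url, dest_path):
        # Placeholder for real HTTP I/O.
        return "GET %s -> %s" % (url, dest_path)


class FtpTransfer(TransferPlugin):
    scheme = "ftp"

    def download(self, url, dest_path):
        # Placeholder for real FTP I/O.
        return "RETR %s -> %s" % (url, dest_path)


_PLUGINS = {p.scheme: p() for p in (HttpTransfer, FtpTransfer)}


def transfer(url, dest_path):
    # Dispatch on the URI scheme; unknown schemes fall back to HTTP,
    # mirroring the idea that HTTP stays the default transport.
    plugin = _PLUGINS.get(urlparse(url).scheme, _PLUGINS["http"])
    return plugin.download(url, dest_path)
```

<font size=2 face="sans-serif">The point of such a registry is that adding a
new protocol would mean adding one plug-in class, without touching the calling
code.</font>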
<br>
<br><font size=2 face="sans-serif">Data transfer has many use cases;
I summarize them into two categories. Please feel free to comment.
</font>
<br><font size=2 face="sans-serif">1. The machines are located in the same
network, e.g. one domain or one cluster. The defining characteristic is that
the machines can reach each other directly via their IP addresses (VPN is not
considered here). In this case, data can be transferred via iSCSI, NFS, or the
zero-copy approach Zhiyan mentioned. </font>
<br><font size=2 face="sans-serif">2. The machines are located in different
networks, e.g. two data centers or behind two firewalls. The defining characteristic
is that the machines cannot reach each other directly via their IP addresses
(VPN is not considered here). Because the machines are isolated, they cannot
be connected with iSCSI, NFS, etc., so data has to travel over protocols such
as HTTP, FTP, or P2P. I am not sure whether zero-copy can work in this case.
Zhiyan, could you clarify this?</font>
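<br><font size=2 face="sans-serif">A minimal sketch of the decision between the
two categories (the TCP-connect reachability test and the protocol lists are
illustrative assumptions, not production logic):</font>

```python
# Illustrative sketch of the two categories above: if two machines can
# reach each other directly by IP, block-level protocols (iSCSI, NFS)
# are usable; if not, data has to travel over application protocols
# (HTTP, FTP, P2P) that can cross firewalls/NAT.
import socket

BLOCK_PROTOCOLS = ["iscsi", "nfs"]       # category 1: same network
APP_PROTOCOLS = ["http", "ftp", "p2p"]   # category 2: isolated networks


def directly_reachable(host, port=22, timeout=1.0):
    """Rough check: can we open a TCP connection to host:port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def candidate_protocols(reachable):
    # Pick the protocol family based on direct IP reachability.
    return BLOCK_PROTOCOLS if reachable else APP_PROTOCOLS
```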
<br>
<br><font size=2 face="sans-serif">For data transfer, including image downloading,
image uploading, and live migration, OpenStack needs to take both of the above
categories into account. It is hard to say that one protocol is better than
another, or that one approach prevails over another (BitTorrent is very cool,
but with only one source and one target it would not be much faster than a
direct FTP transfer). The key is the use case (FYI:</font><a href="http://amigotechnotes.wordpress.com/2013/12/23/file-transmission-with-different-sharing-solution-on-nas/"><font size=2 color=blue face="sans-serif">
http://amigotechnotes.wordpress.com/2013/12/23/file-transmission-with-different-sharing-solution-on-nas/</font></a><font size=2 face="sans-serif">).
</font>
<br>
<br><font size=2 face="sans-serif">Jay Pipes has suggested that we draft a
blueprint for a separate library dedicated to data (byte) transfer, which
could live in oslo and be used by any project that needs it (hoping Jay can
weigh in :-)). Huiba, Zhiyan, and everyone else: do you think a blueprint for
a data transfer library in oslo could work? </font>
<br>
<br><font size=2 face="sans-serif">Best wishes,<br>
Vincent Hou (侯胜博)<br>
<br>
Staff Software Engineer, Open Standards and Open Source Team, Emerging
Technology Institute, IBM China Software Development Lab<br>
<br>
Tel: 86-10-82450778 Fax: 86-10-82453660<br>
Notes ID: Sheng Bo Hou/China/IBM@IBMCN E-mail: sbhou@cn.ibm.com
<br>
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
West Road, Haidian District, Beijing, P.R.C.100193<br>
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层
邮编:100193</font>
<br>
<br>
<br>
<table width=100%>
<tr valign=top>
<td width=40%><font size=1 face="sans-serif"><b>Zhi Yan Liu <lzy.dev@gmail.com></b>
</font>
<p><font size=1 face="sans-serif">2014/04/18 23:33</font>
<table border>
<tr valign=top>
<td bgcolor=white>
<div align=center><font size=1 face="sans-serif">Please respond to<br>
"OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org></font></div></table>
<br>
<td width=59%>
<table width=100%>
<tr valign=top>
<td>
<div align=right><font size=1 face="sans-serif">To</font></div>
<td><font size=1 face="sans-serif">"OpenStack Development Mailing
List (not for usage questions)" <openstack-dev@lists.openstack.org>,
</font>
<tr valign=top>
<td>
<div align=right><font size=1 face="sans-serif">cc</font></div>
<td>
<tr valign=top>
<td>
<div align=right><font size=1 face="sans-serif">Subject</font></div>
<td><font size=1 face="sans-serif">Re: [openstack-dev] [Nova][blueprint]
Accelerate the booting process of a number of vms via VMThunder</font></table>
<br>
<table>
<tr valign=top>
<td>
<td></table>
<br></table>
<br>
<br>
<br><tt><font size=2>On Fri, Apr 18, 2014 at 10:52 PM, lihuiba <magazine.lihuiba@163.com>
wrote:<br>
>>btw, I see but at the moment we had fixed it by network interface<br>
>>device driver instead of workaround - to limit network traffic
slow<br>
>>down.<br>
> Which kind of driver, in host kernel, in guest kernel or in openstack?<br>
><br>
<br>
In the compute host kernel; it is not related to OpenStack.<br>
<br>
><br>
><br>
>>There are few works done in Glance<br>
>>(</font></tt><a href="https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver"><tt><font size=2>https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver</font></tt></a><tt><font size=2>
),<br>
>>but some work still need to be taken I'm sure. There are something
on<br>
>>drafting, and some dependencies need to be resolved as well.<br>
> I read the blueprints carefully, but still have some doubts.<br>
> Will it store an image as a single volume in cinder? Or store all
image<br>
<br>
Yes<br>
<br>
> files<br>
> in one shared volume (with a file system on the volume, of course)?<br>
> Openstack already has support to convert an image to a volume, and
to boot<br>
> from a volume. Are these features similar to this blueprint?<br>
<br>
Not similar, but they could be leveraged for this case.<br>
<br>
><br>
<br>
I would prefer to discuss the details on IRC. (I also read through all the<br>
VMThunder code earlier today (my timezone), and I have some questions as<br>
well.)<br>
<br>
zhiyan<br>
<br>
><br>
> Huiba Li<br>
><br>
> National Key Laboratory for Parallel and Distributed<br>
> Processing, College of Computer Science, National University of Defense<br>
> Technology, Changsha, Hunan Province, P.R. China<br>
> 410073<br>
><br>
><br>
> At 2014-04-18 12:14:25,"Zhi Yan Liu" <lzy.dev@gmail.com>
wrote:<br>
>>On Fri, Apr 18, 2014 at 10:53 AM, lihuiba <magazine.lihuiba@163.com>
wrote:<br>
>>>>It's not 100% true, in my case at last. We fixed this problem
by<br>
>>>>network interface driver, it causes kernel panic and readonly
issues<br>
>>>>under heavy networking workload actually.<br>
>>><br>
>>> Network traffic control could help. The point is to ensure
no instance<br>
>>> is starved to death. Traffic control can be done with tc.<br>
>>><br>
>><br>
>>btw, I see but at the moment we had fixed it by network interface<br>
>>device driver instead of workaround - to limit network traffic
slow<br>
>>down.<br>
>><br>
>>><br>
>>><br>
>>>>btw, we are doing some works to make Glance to integrate
Cinder as a<br>
>>>>unified block storage<br>
>>> backend.<br>
>>> That sounds interesting. Is there some more materials?<br>
>>><br>
>><br>
>>There are few works done in Glance<br>
>>(</font></tt><a href="https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver"><tt><font size=2>https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver</font></tt></a><tt><font size=2>
),<br>
>>but some work still need to be taken I'm sure. There are something
on<br>
>>drafting, and some dependencies need to be resolved as well.<br>
>><br>
>>><br>
>>><br>
>>> At 2014-04-18 06:05:23,"Zhi Yan Liu" <lzy.dev@gmail.com>
wrote:<br>
>>>>Replied as inline comments.<br>
>>>><br>
>>>>On Thu, Apr 17, 2014 at 9:33 PM, lihuiba <magazine.lihuiba@163.com><br>
>>>> wrote:<br>
>>>>>>IMO we'd better to use backend storage optimized
approach to access<br>
>>>>>>remote image from compute node instead of using
iSCSI only. And from<br>
>>>>>>my experience, I'm sure iSCSI is short of stability
under heavy I/O<br>
>>>>>>workload in product environment, it could causes
either VM filesystem<br>
>>>>>>to be marked as readonly or VM kernel panic.<br>
>>>>><br>
>>>>> Yes, in this situation, the problem lies in the backend
storage, so no<br>
>>>>> other<br>
>>>>><br>
>>>>> protocol will perform better. However, P2P transferring
will greatly<br>
>>>>> reduce<br>
>>>>><br>
>>>>> workload on the backend storage, so as to increase
responsiveness.<br>
>>>>><br>
>>>><br>
>>>>It's not 100% true, in my case at last. We fixed this problem
by<br>
>>>>network interface driver, it causes kernel panic and readonly
issues<br>
>>>>under heavy networking workload actually.<br>
>>>><br>
>>>>><br>
>>>>><br>
>>>>>>As I said currently Nova already has image caching
mechanism, so in<br>
>>>>>>this case P2P is just an approach could be used
for downloading or<br>
>>>>>>preheating for image caching.<br>
>>>>><br>
>>>>> Nova's image caching is file level, while VMThunder's
is block-level.<br>
>>>>> And<br>
>>>>><br>
>>>>> VMThunder is for working in conjunction with Cinder,
not Glance.<br>
>>>>> VMThunder<br>
>>>>><br>
>>>>> currently uses facebook's flashcache to realize caching,
and dm-cache,<br>
>>>>><br>
>>>>> bcache are also options in the future.<br>
>>>>><br>
>>>><br>
>>>>Hm if you say bcache, dm-cache and flashcache, I'm just
thinking if<br>
>>>>them could be leveraged by operation/best-practice level.<br>
>>>><br>
>>>>btw, we are doing some works to make Glance to integrate
Cinder as a<br>
>>>>unified block storage backend.<br>
>>>><br>
>>>>><br>
>>>>>>I think P2P transferring/pre-caching sounds
a good way to go, as I<br>
>>>>>>mentioned as well, but actually for the area I'd
like to see something<br>
>>>>>>like zero-copy + CoR. On one hand we can leverage
the capability of<br>
>>>>>>on-demand downloading image bits by zero-copy approach,
on the other<br>
>>>>>>hand we can prevent to reading data from remote
image every time by<br>
>>>>>>CoR.<br>
>>>>><br>
>>>>> Yes, on-demand transferring is what you mean by "zero-copy",
and<br>
>>>>> caching<br>
>>>>> is something close to CoR. In fact, we are working
on a kernel module<br>
>>>>> called<br>
>>>>> foolcache that realize a true CoR. See<br>
>>>>> </font></tt><a href="https://github.com/lihuiba/dm-foolcache"><tt><font size=2>https://github.com/lihuiba/dm-foolcache</font></tt></a><tt><font size=2>.<br>
>>>>><br>
>>>><br>
>>>>Yup. And it's really interesting to me, will take a look,
thanks for<br>
>>>> sharing.<br>
>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> National Key Laboratory for Parallel and Distributed<br>
>>>>> Processing, College of Computer Science, National
University of Defense<br>
>>>>> Technology, Changsha, Hunan Province, P.R. China<br>
>>>>> 410073<br>
>>>>><br>
>>>>><br>
>>>>> At 2014-04-17 17:11:48,"Zhi Yan Liu" <lzy.dev@gmail.com>
wrote:<br>
>>>>>>On Thu, Apr 17, 2014 at 4:41 PM, lihuiba <magazine.lihuiba@163.com><br>
>>>>>> wrote:<br>
>>>>>>>>IMHO, zero-copy approach is better<br>
>>>>>>> VMThunder's "on-demand transferring"
is the same thing as your<br>
>>>>>>> "zero-copy<br>
>>>>>>> approach".<br>
>>>>>>> VMThunder is uses iSCSI as the transferring
protocol, which is option<br>
>>>>>>> #b<br>
>>>>>>> of<br>
>>>>>>> yours.<br>
>>>>>>><br>
>>>>>><br>
>>>>>>IMO we'd better to use backend storage optimized
approach to access<br>
>>>>>>remote image from compute node instead of using
iSCSI only. And from<br>
>>>>>>my experience, I'm sure iSCSI is short of stability
under heavy I/O<br>
>>>>>>workload in product environment, it could causes
either VM filesystem<br>
>>>>>>to be marked as readonly or VM kernel panic.<br>
>>>>>><br>
>>>>>>><br>
>>>>>>>>Under #b approach, my former experience
from our previous similar<br>
>>>>>>>>Cloud deployment (not OpenStack) was that:
under 2 PC server storage<br>
>>>>>>>>nodes (general *local SAS disk*, without
any storage backend) +<br>
>>>>>>>>2-way/multi-path iSCSI + 1G network bandwidth,
we can provisioning<br>
>>>>>>>> 500<br>
>>>>>>>>VMs in a minute.<br>
>>>>>>> suppose booting one instance requires reading
300MB of data, so 500<br>
>>>>>>> ones<br>
>>>>>>> require 150GB. Each of the storage server
needs to send data at a<br>
>>>>>>> rate<br>
>>>>>>> of<br>
>>>>>>> 150GB/2/60 = 1.25GB/s on average. This is
absolutely a heavy burden<br>
>>>>>>> even<br>
>>>>>>> for high-end storage appliances. In production
systems, this request<br>
>>>>>>> (booting<br>
>>>>>>> 500 VMs in one shot) will significantly disturb
other running<br>
>>>>>>> instances<br>
>>>>>>> accessing the same storage nodes.<br>
>>>>>>><br>
>>>><br>
>>>>btw, I believe the case/numbers is not true as well, since
remote<br>
>>>>image bits could be loaded on-demand instead of load them
all on boot<br>
>>>>stage.<br>
>>>><br>
>>>>zhiyan<br>
>>>><br>
>>>>>>> VMThunder eliminates this problem by P2P transferring
and<br>
>>>>>>> on-compute-node<br>
>>>>>>> caching. Even a pc server with one 1gb NIC
(this is a true pc<br>
>>>>>>> server!)<br>
>>>>>>> can<br>
>>>>>>> boot<br>
>>>>>>> 500 VMs in a minute with ease. For the first
time, VMThunder makes<br>
>>>>>>> bulk<br>
>>>>>>> provisioning of VMs practical for production
cloud systems. This is<br>
>>>>>>> the<br>
>>>>>>> essential<br>
>>>>>>> value of VMThunder.<br>
>>>>>>><br>
>>>>>><br>
>>>>>>As I said currently Nova already has image caching
mechanism, so in<br>
>>>>>>this case P2P is just an approach could be used
for downloading or<br>
>>>>>>preheating for image caching.<br>
>>>>>><br>
>>>>>>I think P2P transferring/pre-caching sounds
a good way to go, as I<br>
>>>>>>mentioned as well, but actually for the area I'd
like to see something<br>
>>>>>>like zero-copy + CoR. On one hand we can leverage
the capability of<br>
>>>>>>on-demand downloading image bits by zero-copy approach,
on the other<br>
>>>>>>hand we can prevent to reading data from remote
image every time by<br>
>>>>>>CoR.<br>
>>>>>><br>
>>>>>>zhiyan<br>
>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> ===================================================<br>
>>>>>>> From: Zhi Yan Liu <lzy.dev@gmail.com><br>
>>>>>>> Date: 2014-04-17 0:02 GMT+08:00<br>
>>>>>>> Subject: Re: [openstack-dev] [Nova][blueprint]
Accelerate the booting<br>
>>>>>>> process of a number of vms via VMThunder<br>
>>>>>>> To: "OpenStack Development Mailing List
(not for usage questions)"<br>
>>>>>>> <openstack-dev@lists.openstack.org><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> Hello Yongquan Fu,<br>
>>>>>>><br>
>>>>>>> My thoughts:<br>
>>>>>>><br>
>>>>>>> 1. Currently Nova has already supported image
caching mechanism. It<br>
>>>>>>> could caches the image on compute host which
VM had provisioning from<br>
>>>>>>> it before, and next provisioning (boot same
image) doesn't need to<br>
>>>>>>> transfer it again only if cache-manger clear
it up.<br>
>>>>>>> 2. P2P transferring and prefacing is something
that still based on<br>
>>>>>>> copy mechanism, IMHO, zero-copy approach is
better, even<br>
>>>>>>> transferring/prefacing could be optimized
by such approach. (I have<br>
>>>>>>> not check "on-demand transferring"
of VMThunder, but it is a kind of<br>
>>>>>>> transferring as well, at last from its literal
meaning).<br>
>>>>>>> And btw, IMO, we have two ways can go follow
zero-copy idea:<br>
>>>>>>> a. when Nova and Glance use same backend storage,
we could use<br>
>>>>>>> storage<br>
>>>>>>> special CoW/snapshot approach to prepare VM
disk instead of<br>
>>>>>>> copy/transferring image bits (through HTTP/network
or local copy).<br>
>>>>>>> b. without "unified" storage, we
could attach volume/LUN to compute<br>
>>>>>>> node from backend storage as a base image,
then do such CoW/snapshot<br>
>>>>>>> on it to prepare root/ephemeral disk of VM.
This way just like<br>
>>>>>>> boot-from-volume but different is that we
do CoW/snapshot on Nova<br>
>>>>>>> side<br>
>>>>>>> instead of Cinder/storage side.<br>
>>>>>>><br>
>>>>>>> For option #a, we have already got some progress:<br>
>>>>>>> </font></tt><a href="https://blueprints.launchpad.net/nova/+spec/image-multiple-location"><tt><font size=2>https://blueprints.launchpad.net/nova/+spec/image-multiple-location</font></tt></a><tt><font size=2><br>
>>>>>>> </font></tt><a href="https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler"><tt><font size=2>https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler</font></tt></a><tt><font size=2><br>
>>>>>>><br>
>>>>>>> </font></tt><a href="https://blueprints.launchpad.net/nova/+spec/vmware-clone-image-handler"><tt><font size=2>https://blueprints.launchpad.net/nova/+spec/vmware-clone-image-handler</font></tt></a><tt><font size=2><br>
>>>>>>><br>
>>>>>>> Under #b approach, my former experience from
our previous similar<br>
>>>>>>> Cloud deployment (not OpenStack) was that:
under 2 PC server storage<br>
>>>>>>> nodes (general *local SAS disk*, without any
storage backend) +<br>
>>>>>>> 2-way/multi-path iSCSI + 1G network bandwidth,
we can provisioning<br>
>>>>>>> 500<br>
>>>>>>> VMs in a minute.<br>
>>>>>>><br>
>>>>>>> For vmThunder topic I think it sounds a good
idea, IMO P2P, prefacing<br>
>>>>>>> is one of optimized approach for image transferring
valuably.<br>
>>>>>>><br>
>>>>>>> zhiyan<br>
>>>>>>><br>
>>>>>>> On Wed, Apr 16, 2014 at 9:14 PM, yongquan
Fu <quanyongf@gmail.com><br>
>>>>>>> wrote:<br>
>>>>>>>><br>
>>>>>>>> Dear all,<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> We would like to present an extension
to the vm-booting<br>
>>>>>>>> functionality<br>
>>>>>>>> of<br>
>>>>>>>> Nova when a number of homogeneous vms
need to be launched at the<br>
>>>>>>>> same<br>
>>>>>>>> time.<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> The motivation for our work is to increase
the speed of provisioning<br>
>>>>>>>> vms<br>
>>>>>>>> for<br>
>>>>>>>> large-scale scientific computing and big
data processing. In that<br>
>>>>>>>> case,<br>
>>>>>>>> we<br>
>>>>>>>> often need to boot tens and hundreds virtual
machine instances at<br>
>>>>>>>> the<br>
>>>>>>>> same<br>
>>>>>>>> time.<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> Currently, under the Openstack,
we found that creating a large<br>
>>>>>>>> number<br>
>>>>>>>> of<br>
>>>>>>>> virtual machine instances is very time-consuming.
The reason is the<br>
>>>>>>>> booting<br>
>>>>>>>> procedure is a centralized operation that
involve performance<br>
>>>>>>>> bottlenecks.<br>
>>>>>>>> Before a virtual machine can be actually
started, OpenStack either<br>
>>>>>>>> copy<br>
>>>>>>>> the<br>
>>>>>>>> image file (swift) or attach the image
volume (cinder) from storage<br>
>>>>>>>> server<br>
>>>>>>>> to compute node via network. Booting a
single VM need to read a<br>
>>>>>>>> large<br>
>>>>>>>> amount<br>
>>>>>>>> of image data from the image storage server.
So creating a large<br>
>>>>>>>> number<br>
>>>>>>>> of<br>
>>>>>>>> virtual machine instances would cause
a significant workload on the<br>
>>>>>>>> servers.<br>
>>>>>>>> The servers become quite busy even unavailable
during the deployment<br>
>>>>>>>> phase.<br>
>>>>>>>> It would consume a very long time before
the whole virtual machine<br>
>>>>>>>> cluster<br>
>>>>>>>> useable.<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> Our extension is based on our work
on vmThunder, a novel mechanism<br>
>>>>>>>> accelerating the deployment of large number
virtual machine<br>
>>>>>>>> instances.<br>
>>>>>>>> It<br>
>>>>>>>> is<br>
>>>>>>>> written in Python, can be integrated with
OpenStack easily.<br>
>>>>>>>> VMThunder<br>
>>>>>>>> addresses the problem described above
by following improvements:<br>
>>>>>>>> on-demand<br>
>>>>>>>> transferring (network attached storage),
compute node caching, P2P<br>
>>>>>>>> transferring and prefetching. VMThunder
is a scalable and<br>
>>>>>>>> cost-effective<br>
>>>>>>>> accelerator for bulk provisioning of virtual
machines.<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> We hope to receive your feedbacks.
Any comments are extremely<br>
>>>>>>>> welcome.<br>
>>>>>>>> Thanks in advance.<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> PS:<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> VMThunder enhanced nova blueprint:<br>
>>>>>>>> </font></tt><a href="https://blueprints.launchpad.net/nova/+spec/thunderboost"><tt><font size=2>https://blueprints.launchpad.net/nova/+spec/thunderboost</font></tt></a><tt><font size=2><br>
>>>>>>>><br>
>>>>>>>> VMThunder standalone project: </font></tt><a href=https://launchpad.net/vmthunder><tt><font size=2>https://launchpad.net/vmthunder</font></tt></a><tt><font size=2>;<br>
>>>>>>>><br>
>>>>>>>> VMThunder prototype: </font></tt><a href=https://github.com/lihuiba/VMThunder><tt><font size=2>https://github.com/lihuiba/VMThunder</font></tt></a><tt><font size=2><br>
>>>>>>>><br>
>>>>>>>> VMThunder etherpad: </font></tt><a href=https://etherpad.openstack.org/p/vmThunder><tt><font size=2>https://etherpad.openstack.org/p/vmThunder</font></tt></a><tt><font size=2><br>
>>>>>>>><br>
>>>>>>>> VMThunder portal: </font></tt><a href=http://www.vmthunder.org/><tt><font size=2>http://www.vmthunder.org/</font></tt></a><tt><font size=2><br>
>>>>>>>><br>
>>>>>>>> VMThunder paper:<br>
>>>>>>>> </font></tt><a href=http://www.computer.org/csdl/trans/td/preprint/06719385.pdf><tt><font size=2>http://www.computer.org/csdl/trans/td/preprint/06719385.pdf</font></tt></a><tt><font size=2><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> Regards<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> vmThunder development group<br>
>>>>>>>><br>
>>>>>>>> PDL<br>
>>>>>>>><br>
>>>>>>>> National University of Defense
Technology<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> _______________________________________________<br>
>>>>>>>> OpenStack-dev mailing list<br>
>>>>>>>> OpenStack-dev@lists.openstack.org<br>
>>>>>>>> </font></tt><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev"><tt><font size=2>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</font></tt></a><tt><font size=2><br>
>>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> --<br>
>>>>>>> Yongquan Fu<br>
>>>>>>> PhD, Assistant Professor,<br>
>>>>>>> National Key Laboratory for Parallel and Distributed<br>
>>>>>>> Processing, College of Computer Science, National
University of<br>
>>>>>>> Defense<br>
>>>>>>> Technology, Changsha, Hunan Province, P.R.
China<br>
>>>>>>> 410073<br>
>>>>>>><br>
>>>>>>><br>
>>>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>><br>
>>><br>
>>><br>
>>><br>
>><br>
><br>
><br>
><br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
OpenStack-dev@lists.openstack.org<br>
</font></tt><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev"><tt><font size=2>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</font></tt></a><tt><font size=2><br>
<br>
</font></tt>
<br>