[openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

Brian Rosmaita brian.rosmaita at RACKSPACE.COM
Mon Feb 22 14:12:42 UTC 2016


Hello everyone,

Joe, I think you are proposing a perfectly legitimate use case, but it's
not what the Glance community is calling "image import", and that's
leading to some confusion.

The Glance community has defined "image import" as: "A cloud end-user has
a bunch of bits that they want to give to Glance in the expectation that
(in the absence of error conditions) Glance will produce an Image (record,
file) tuple that can subsequently be used by other OpenStack services that
consume Images." [0]

The server-side image import workflow allows operators to validate the
bits an end-user has uploaded, with the extent of the validation performed
determined by the operator.  For example, a public cloud may wish to make
sure the bits are in the correct format for that cloud so that "bad"
images can be caught at import time, rather than at boot time, to ensure a
better user experience.
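
As an illustration of the kind of format check an operator might run at
import time (this is not Glance's actual validation code, just a minimal
sketch):

    def looks_like_qcow2(path):
        # The qcow2 on-disk format begins with the magic bytes "QFI\xfb",
        # so an image claiming disk_format=qcow2 can be sanity-checked
        # before it is ever handed to a hypervisor.
        with open(path, "rb") as f:
            return f.read(4) == b"QFI\xfb"

An operator could hook a check like this (or a much more thorough one) into
the import workflow so that a mislabeled image fails at import time rather
than at boot time.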

The use case you're talking about takes images that are already "in" a
cloud, for example, a snapshot of a server that's been configured exactly
the way you want it, and moves them to a different cloud.  In the past,
the Glance community has referred to this use case as "image cloning" (or
region-to-region image transfer).  There are some old design docs up on
the wiki discussing this (I think [1] gives a good outline and it's got
links to some other docs).  Those docs are from 2013, though, so they
can't be resurrected as-is since Glance has changed a bit in the meantime,
but you can look them over and at least see if I'm correct that image
cloning captures what you want.

As I said, the idea has been floated several times, but never got enough
traction to be implemented.  Maybe its time has come!

cheers,
brian


[0] http://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html
[1] https://wiki.openstack.org/wiki/Glance-tasks-clone

On 2/21/16, 9:56 PM, "joehuang" <joehuang at huawei.com> wrote:

>Hello, Ian and Jay,
>
>The issue this use case is meant to address is described in more detail here:
>
>Telecom operators often run dozens of data centers, so having more than
>10 data centers is quite normal. These data centers are geographically
>distributed, with lots of small edge data centers for fast media / data
>transfer.
>
>There are two ways to manage images in such a cloud spanning many
>geographically distributed data centers:
>
>1. Use a shared Glance for all data centers.
>  The Glance interface, driver and backend need to support distributing
>images to all data centers, on demand or automatically.
>
>  Suppose a new image is uploaded to Glance in DC1 (or to the backend
>storage in DC1, with the location registered on the Glance image), but the
>user wants to boot a new virtual machine in any of the other data centers,
>for example DC2, DC3, ... DCn. Do we have to download the image from DC1
>every time a new VM is booted in another data center? Does Glance image
>management support any data-center-level image cache mechanism?
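>
>  (For reference, the image cache Glance does have today is a per-API-node
>cache, configured roughly as below; it is not a data-center-level
>replication mechanism, which is exactly the gap here. The values shown are
>illustrative.)
>
>    # glance-api.conf (illustrative)
>    [paste_deploy]
>    flavor = keystone+cachemanagement
>
>    [DEFAULT]
>    image_cache_dir = /var/lib/glance/image-cache/
>    image_cache_max_size = 10737418240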
> 
>  How do we handle the case where an image is created from a VM (or
>volume) in DCn, but the user wants to boot a VM (or volume) from it in
>DCm, across dozens of data centers?
>
>  Is there any Glance driver and backend that can replicate an image to
>dozens of data centers, on demand or automatically? We have not found such
>a Glance driver/backend. Even a single Swift instance cannot support
>dozens of data centers. The alternative is to keep an image repository
>outside Glance and upload the image to each data center one by one, but
>then why should we have to duplicate image management outside Glance?
>
>  Is there any interface in Glance that tells Glance to replicate an
>image from one location to another? No, we have not found such an
>interface in Glance, let alone a driver/backend to support it.
>
>  Distributing the Glance registry / DB / API across dozens of data
>centers is quite similar to what Keystone does, where the lightweight
>Fernet token format makes such distribution possible. The difference is
>how to deal with the bulk image data, i.e. how to avoid downloading the
>image across data centers each time.
>
>2. Use a separate Glance for each data center, with image import
>capability.
>  
>  An end user is able to import an image from another Glance in another
>OpenStack cloud while sharing the same identity management (Keystone).
>This is the preferred proposal, for the following reasons:
>
>  1) A crash in one data center should not affect another data center's
>service, so the OpenStack services in each data center should be as
>independent as possible. The only exception is Keystone, because of the
>requirement that "a user should, using a single authentication point be
>able to manage virtual resources spread over multiple OpenStack regions."
>https://gerrit.opnfv.org/gerrit/#/c/1357/6/multisite-identity-service-management.rst
>Of course, someone could use Keystone federation for this purpose, but
>federation across dozens of data centers is not recommended.
>
>  2) If no cross-Glance image import capability is supported, then we
>have to use a third-party tool to download the image from the Glance in
>DCn and then upload it to the Glance in DCm (see the sketch after this
>list). The image bits have to pass through the tool, which adds another
>data-plane bottleneck, and these images have to be managed outside Glance,
>so upper-layer software such as MANO has to deal with a non-Glance
>interface.
>
>  3) Using Swift as the shared backend for multiple Glance services in
>different data centers works only for a very limited number of data
>centers; it cannot support dozens of them. And with dozens of data
>centers, how multiple Glance services with different backends share images
>is still an open issue.
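>
>  A minimal sketch of such a third-party copy tool, assuming the standard
>Glance v2 REST API in both regions and a Keystone token valid in both (the
>endpoint URLs below are hypothetical placeholders):
>
>    import requests
>
>    TOKEN = "..."  # Keystone token accepted by both regions
>    SRC = "http://glance.dc-n.example.com:9292"  # hypothetical endpoints
>    DST = "http://glance.dc-m.example.com:9292"
>    HEADERS = {"X-Auth-Token": TOKEN}
>
>    def copy_image(image_id):
>        # Read the image record from the source Glance.
>        meta = requests.get("%s/v2/images/%s" % (SRC, image_id),
>                            headers=HEADERS).json()
>
>        # Create an empty image record in the destination Glance.
>        body = {"name": meta["name"],
>                "disk_format": meta["disk_format"],
>                "container_format": meta["container_format"]}
>        new = requests.post("%s/v2/images" % DST,
>                            headers=HEADERS, json=body).json()
>
>        # Stream the image bits through this tool -- this is the extra
>        # data-plane hop described in 2) above.
>        data = requests.get("%s/v2/images/%s/file" % (SRC, image_id),
>                            headers=HEADERS, stream=True)
>        requests.put("%s/v2/images/%s/file" % (DST, new["id"]),
>                     headers=dict(HEADERS, **{
>                         "Content-Type": "application/octet-stream"}),
>                     data=data.iter_content(chunk_size=1024 * 1024))
>        return new["id"]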
>
>It's reasonable to have one solution that addresses the NFV scenario. If
>you have other ideas for multi-data-center image management, please share
>your thoughts in this thread.
>
>Best Regards
>Chaoyi Huang ( Joe Huang )
>
>
>-----Original Message-----
>From: Ian Cordasco [mailto:sigmavirus24 at gmail.com]
>Sent: Saturday, February 20, 2016 3:11 AM
>To: Jay Pipes; OpenStack Development Mailing List (not for usage
>questions)
>Subject: Re: [openstack-dev] [glance]one more use case for Image Import
>Refactor from OPNFV
>
> 
>
>-----Original Message-----
>From: Jay Pipes <jaypipes at gmail.com>
>Reply: OpenStack Development Mailing List (not for usage questions)
><openstack-dev at lists.openstack.org>
>Date: February 19, 2016 at 06:45:38
>To: openstack-dev at lists.openstack.org <openstack-dev at lists.openstack.org>
>Subject:  Re: [openstack-dev] [glance]one more use case for Image Import
>Refactor from OPNFV
>
>> On 02/18/2016 10:29 PM, joehuang wrote:
>> > There is difference between " An end user is able to import image
>> > from another Glance in another OpenStack cloud while sharing same
>> > identity management( KeyStone )"
>>  
>> This is an invalid use case, IMO. What's wrong with exporting the
>> image from one OpenStack cloud and importing it to another? What does
>> a shared identity management service have to do with anything?
>
>I have to agree with Jay. I'm not sure I understand the value of adding
>this scenario when what we're concerned with is not clouds uploading to
>other clouds (or importing from other clouds) but instead how a cloud's
>users would import data into Glance.
>
>> > and other use cases. The difference is the image import need to
>> > reuse the token in the source Glance, other ones don't need this.
>>  
>> Again, this use case is not valid, IMO.
>>  
>> I don't care to cater to these kinds of use cases.
>
>I'd like to understand the needs better before dismissing them out of
>hand, but I'm leaning towards agreeing with Jay.
>
>What you might prefer is a way to get something akin to Swift's TempURL
>so you could give that as a location to your other Glance instance. We
>don't support that, though, and there doesn't seem to be any use case we
>would like to support that would necessitate it.
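>
>(For context, a Swift TempURL is just the object path plus an HMAC-SHA1
>signature and an expiry time, generated with the account's temp-URL key; a
>minimal sketch, using a hypothetical key and object path:)
>
>    import hmac
>    from hashlib import sha1
>    from time import time
>
>    key = b"MYSECRETKEY"  # the account's X-Account-Meta-Temp-URL-Key
>    method = "GET"
>    expires = int(time() + 3600)
>    path = "/v1/AUTH_account/images/my-image.qcow2"
>
>    # Sign "METHOD\nEXPIRES\nPATH" with the account key.
>    hmac_body = "%s\n%s\n%s" % (method, expires, path)
>    sig = hmac.new(key, hmac_body.encode(), sha1).hexdigest()
>
>    url = ("https://swift.example.com%s"
>           "?temp_url_sig=%s&temp_url_expires=%s" % (path, sig, expires))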
>
>--
>Ian Cordasco
>Glance Core Reviewer
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



