[openstack-dev] BP discussion: Attach a single volume to a cluster (multiple hosts)
lzy.dev at gmail.com
Tue May 14 09:14:59 UTC 2013
Kiran,
Not sure whether you have seen
"https://blueprints.launchpad.net/cinder/+spec/shared-volume" before.
I'm not sure what the difference is between it and your
"https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume"
proposal. Can you clarify that?
Thanks,
Zhi Yan
On Mon, May 13, 2013 at 5:20 PM, lzy.dev at gmail.com <lzy.dev at gmail.com> wrote:
> Hi Kiran,
>
> I have a question about R/O volume support for Cinder; I have also
> added it to https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.
>
> I saw one comment written there: "Summit feedback: Not doing R/O
> volumes due to the limited hypervisor that can support setting the
> volume to R/O, currently only KVM has this capability".
> But I consider R/O volume support valuable, and it can be
> implemented gracefully. I plan to implement a Cinder driver for
> Glance (https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver);
> in that case the R/O volume within the Cinder backend will be created
> and stored as an image. And based on the particular COW mechanism
> (depending on the Cinder backend store driver) on the Nova side, an
> R/W image (R/O volume-based image + delta) can be used for the
> instance normally.
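>
> A rough illustration of the COW idea (just a sketch; qcow2 and the
> paths are assumptions, the real mechanism depends on the backend
> store driver):
>
>     import subprocess
>
>     # The R/O volume-backed image is never written to; the instance
>     # writes only to the qcow2 delta layered on top of it.
>     subprocess.check_call(
>         ['qemu-img', 'create', '-f', 'qcow2',
>          '-o', 'backing_file=/path/to/ro-volume-image',
>          '/path/to/instance-delta.qcow2'])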
>
> IMO, it's related to your idea/design; any input?
>
> Thanks,
> Zhi Yan
>
> On Mon, May 13, 2013 at 1:50 PM, Vaddi, Kiran Kumar
> <kiran-kumar.vaddi at hp.com> wrote:
>> Hello All,
>>
>> The use case of attaching a volume created using cinder to an
>> instance running on a compute node that represents a *cluster*
>> requires:
>>
>> 1. The volume to be presented to all the hosts in the cluster
>>
>> 2. The nova driver to be aware of the target LUN information as
>> seen by each host, and to attach the volume to the instance
>> (example below).
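>>
>> For example, each host in the cluster may see the same volume with
>> different target details (hypothetical values; the key names follow
>> cinder's iSCSI connection info):
>>
>>     # Target LUN information as seen by two hosts of the cluster
>>     # (values are made up for illustration).
>>     node1_target = {'target_portal': '192.168.1.10:3260',
>>                     'target_iqn':
>>                         'iqn.2010-10.org.openstack:volume-0001',
>>                     'target_lun': 1}
>>     node2_target = {'target_portal': '192.168.1.10:3260',
>>                     'target_iqn':
>>                         'iqn.2010-10.org.openstack:volume-0001',
>>                     'target_lun': 2}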
>>
>>
>> In today's implementation of the nova and cinder interfaces for
>> attaching a volume using /nova volume-attach/, the control/data flow
>> between the two components assumes that the compute node consists of
>> only one host. Some changes are therefore needed to handle clusters
>> represented as a compute node. The problem was brought up for
>> discussion at the Havana summit with the Cinder team, and they were
>> of the opinion that it can be resolved by the following approach:
>>
>> 1. The nova manager will call the driver's get_volume_connector
>> method to obtain the connector information consisting of the compute
>> node's HBA information.
>>
>> 2. The driver that manages the cluster will now return the details
>> of all the hosts in the cluster. This is returned in an additional key
>> of the dict, clustered_hosts, whose value holds the connector
>> information for each host (see the sketch after this list).
>>
>> 3. /[CHANGE proposed]/ The nova manager will be changed to be
>> aware of the new key clustered_hosts, iterate through each entry,
>> and call the cinder API initialize_connection for it. Cinder drivers'
>> response to this will remain as it is today, i.e. present the volume
>> to the host and return the target information for the volume.
>>
>> 4. The nova manager will pass the collected target information
>> back to the nova driver, as a new key, to attach the volume to the
>> instance. The driver will be aware of the new key and perform the
>> volume attach to the instance.
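>>
>> A minimal sketch of the proposed flow (hypothetical names:
>> attach_volume_to_cluster, clustered_hosts and clustered_targets are
>> not existing interfaces; volume_api stands in for nova's cinder API
>> wrapper and driver for the cluster-aware virt driver):
>>
>>     def attach_volume_to_cluster(context, volume_api, driver,
>>                                  instance, volume_id, mountpoint):
>>         # Steps 1-2: the cluster driver returns one connector per
>>         # host under the proposed 'clustered_hosts' key, e.g.
>>         # [{'host': 'node1', 'initiator': 'iqn...:n1'},
>>         #  {'host': 'node2', 'initiator': 'iqn...:n2'}]
>>         connector = driver.get_volume_connector(instance)
>>
>>         # Step 3: nova loops over the hosts; each cinder call keeps
>>         # today's behaviour (present the volume, return the target).
>>         per_host_targets = []
>>         for host in connector.get('clustered_hosts', [connector]):
>>             per_host_targets.append(
>>                 volume_api.initialize_connection(context, volume_id,
>>                                                  host))
>>
>>         # Step 4: hand all target data to the driver under a new key
>>         # so it can attach the volume on every host in the cluster.
>>         connection_info = dict(per_host_targets[0])
>>         connection_info['clustered_targets'] = per_host_targets
>>         driver.attach_volume(connection_info, instance, mountpoint)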
>>
>>
>>
>> Let me know if the above approach is acceptable. Please add the
>> necessary nova core team members to review this approach so that we
>> can discuss and finalize it.
>>
>>
>> The blueprint has additional details:
>> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
>>
>>
>> The initial approach discussed at the Havana summit had cinder doing
>> the looping for each host (steps below). However, the cinder team
>> favored the solution with minimal/no change in cinder that still
>> achieves the same result with the existing interfaces/data.
>>
>>
>> 1. The nova manager will call the driver's get_volume_connector
>> method to obtain the connector information consisting of the compute
>> node's HBA information.
>>
>> 2. The driver that manages the cluster will now return the details
>> of all the hosts in the cluster. This is returned in an additional key
>> of the dict called clustered_hosts. The value will have the connector
>> information for each host.
>>
>> 3. The nova manager will call the cinder API initialize_connection.
>>
>> 4. /[CHANGE proposed initially]/ Cinder drivers will be aware of
>> the new key clustered_hosts, and cinder will perform the loop for
>> each host to
>>
>> a. present the volume to the host and collect the target
>> information for the volume, and
>>
>> b. collect the target information of all hosts and send it back to
>> nova in a new key (see the sketch after this list).
>>
>> 5. The nova manager will pass the collected target information
>> back to the nova driver, as a new key, to attach the volume to the
>> instance. The driver will be aware of the new key and perform the
>> volume attach to the instance.
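>>
>> For reference, a sketch of the cinder-side variant (hypothetical;
>> modelled on cinder's existing volume manager, with clustered_hosts
>> and clustered_targets as the proposed new keys):
>>
>>     def initialize_connection(self, context, volume_id, connector):
>>         volume = self.db.volume_get(context, volume_id)
>>         hosts = connector.get('clustered_hosts')
>>         if not hosts:
>>             # Unchanged single-host path.
>>             return self.driver.initialize_connection(volume,
>>                                                      connector)
>>         # Initially proposed change: loop inside cinder, presenting
>>         # the volume to each host, and return all target info to
>>         # nova under a new key.
>>         return {'clustered_targets':
>>                 [self.driver.initialize_connection(volume, host)
>>                  for host in hosts]}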
>>
>>
>> Thanks,
>> Kiran
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev