[openstack-dev] Interesting discussion about the multiple backend blueprint implementation

Jérôme Gallard jeronimo974 at gmail.com
Mon Feb 11 17:55:27 UTC 2013


Hi all,

I recently took an interest in the implementation of the multiple-backend blueprint.
I read an interesting discussion last Friday, which I have summarized here
for the record.
Please let me know if I misunderstood or misrepresented what was said.

Thanks a lot,
Jérôme

---

Discussion about volume_type / volume_backend (#openstack-cinder)
Friday, February 8, 2013
Participants: Winston-d_, hub_cap, DuncanT

This summary reflects my own understanding of the discussion; it should
be taken as such and of course does not commit the participants (or me)
to anything.

Feel free to add any remarks / suggestions.


### Context
In the current implementation of the patch (v9,
https://review.openstack.org/#/c/20347/ ), there is a volume_backend
table that links a volume_type to the name of the message queue on
which the corresponding backend is listening.
This lets the cinder-scheduler send a particular request
(e.g. create_volume) to the specific queue where a particular backend
is listening.
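The table lookup described above can be sketched roughly as follows. This is an illustration only, not the actual patch-v9 code: the dict stands in for the volume_backend database table, and the type and queue names are made up for the example.

```python
# Hypothetical stand-in for the volume_backend table: it maps a
# volume_type name to the message-queue name its backend listens on.
volume_backend_table = {
    "bronze_volume": "cinder-volume.lvm_iscsi",
}


def queue_for_type(volume_type):
    """Scheduler-side lookup: which queue should receive the request
    (e.g. create_volume) for this volume type?"""
    return volume_backend_table[volume_type]
```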


### Issue
Winston-d_ doesn’t think that identifying backends with volume_types
via a database table is a good idea.
He would like to propose a simpler solution that avoids the dedicated
“volume_backend” table.
In addition, he would like to propose a simpler way to run multiple
backends on the same host.


### Winston-d_ proposed solution:
** to remove the volume_backend table
Winston-d_ proposes to use the extra specs of a volume type to specify
the backend, for instance:
a volume type “bronze_volume” with the extra spec key
'volume_backend_name' set to the value 'LVM_iSCSI'.
One advantage of this solution is that the list of backends is not
visible to the end user.
It also makes it possible to add new backends online, with minimal (or
no) configuration needed to use them.
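A minimal sketch of how the scheduler could match a volume type's extra spec against the backends, under this proposal. The filter logic and the data (backend names, "host@backend" strings) are illustrative assumptions, not Cinder's actual scheduler code; only the 'volume_backend_name' key and the “bronze_volume”/'LVM_iSCSI' pairing come from the discussion above.

```python
# Volume types carry the backend choice in their extra specs
# (only the "bronze_volume" entry mirrors the example above).
volume_types = {
    "bronze_volume": {"volume_backend_name": "LVM_iSCSI"},
    "gold_volume": {"volume_backend_name": "LVM_iSCSI_fast"},
}

# Each running backend reports the backend name it serves
# (hypothetical data; "host@backend" strings are an assumption).
backends = [
    {"host": "node1@lvm1", "volume_backend_name": "LVM_iSCSI"},
    {"host": "node1@lvm2", "volume_backend_name": "LVM_iSCSI_fast"},
]


def hosts_for_type(type_name):
    """Return the backends whose reported name matches the
    volume type's 'volume_backend_name' extra spec."""
    wanted = volume_types[type_name].get("volume_backend_name")
    return [b["host"] for b in backends
            if b["volume_backend_name"] == wanted]
```

Because the mapping lives in the extra specs, adding a backend only requires starting it and creating (or reusing) a matching volume type; no dedicated table has to be updated.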

** to solve the issue of multiple message Q on the host.
Recall that a volume service subscribes to a message queue identified
by a TOPIC and a HOST. If two services subscribe with the same TOPIC
and HOST, they end up on the same message queue. The problem is that
sharing one queue among all the backends on a given host is not a
viable option (the reasons are not detailed here).
In addition, one cinder-volume manager is launched per backend, which
means that if a host holds several backends, several cinder-volume
services have to run.
The proposed solution is to change the semantics of HOST: instead of
denoting a physical machine (get_host_name()), HOST becomes just a
string identifying a backend (from the scheduler’s point of view,
these backends appear to run on different hosts, even if that is not
really the case).
To change HOST, it’s possible to pass host=NEW_NAME to service.Service.create().
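The HOST-override idea can be sketched like this. The Service class below is a simplified stand-in for cinder's service machinery (the real service.Service.create() has a different signature), and the "physical_host@backend" naming is an assumption for the example; the point is only that each backend gets its own HOST string and therefore its own queue.

```python
class Service:
    """Simplified stand-in for a cinder service; not the real API."""

    def __init__(self, host, topic):
        self.host = host
        self.topic = topic
        # The per-service queue is identified by TOPIC and HOST.
        self.queue = "%s.%s" % (topic, host)


def launch_backends(physical_host, backend_names, topic="cinder-volume"):
    """Start one service per backend, overriding HOST with a unique
    string instead of using get_host_name() for all of them."""
    return [Service(host="%s@%s" % (physical_host, name), topic=topic)
            for name in backend_names]
```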


### Examples:
** Basic case (simple):
3 LVM backends on 3 hosts (with 3 different queues).
--> Easy case: HOST is already different for each host.

** More complicated example:
3 LVM backends on the same host (with 3 different queues).
--> In that case it’s necessary to overload the HOST field; otherwise
all 3 LVM backends would end up using the same queue.
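The difference between the two examples can be made concrete by counting queue names, assuming (as an illustration) that a queue name is simply "topic.host":

```python
def queue_name(topic, host):
    """Hypothetical queue-naming convention for the sketch."""
    return "%s.%s" % (topic, host)


# Without the override, three backends on host "node1" all derive the
# same HOST from the machine name and collide on one queue.
plain = {queue_name("cinder-volume", "node1") for _ in range(3)}

# With the override, HOST is a per-backend string, so each of the
# three LVM backends gets its own queue.
overloaded = {queue_name("cinder-volume", "node1@%s" % backend)
              for backend in ("lvm1", "lvm2", "lvm3")}
```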


