[openstack-dev] [cinder] Cinder with remote LVM proposal

Marco Marino marino.mrc at gmail.com
Wed Feb 1 11:57:51 UTC 2017


Hi Erlon, hi Philipp, thank you for your answers.
I will try to explain with a real case:

@Erlon: I have 2 nodes that work as a SAN, using targetcli and DRBD for
RAID over the network. This cluster can be seen as an "iSCSI node".
Furthermore, I have 2 nodes with openstack-cinder-volume (with pacemaker).
Basically, these nodes see N iSCSI devices (exported by the iSCSI node),
and on top of those sits the cinder-volumes VG.
So, when a new volume is created in cinder, a new logical volume is
allocated in the cinder-volumes group. With my solution I could move the
cinder-volumes VG onto the iSCSI node and run (for example) the
openstack-cinder-volume service on the controller, because little compute
power is needed there.
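
To make this concrete, the backend section in cinder.conf might look
roughly like this if the feature existed (a minimal sketch; the IP and
backend name are placeholders, and today's LVM driver still assumes the
VG is local to the cinder-volume host):

    [lvm-remote]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    iscsi_helper = lioadm            # targetcli/LIO on the storage node
    iscsi_ip_address = 192.0.2.10    # the iSCSI node, not this host
    volume_backend_name = lvm-remote

The only conceptual change is that volume_group and iscsi_ip_address
would refer to the storage node instead of the local machine.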
My ideas:
- When I need to upgrade OpenStack, I'd like to keep the
drbd/kernel/targetcli packages out of the upgrade, because they can cause
problems.
- With this solution the dedicated openstack-cinder-volume node could be
removed (in small environments).
- I thought this could be a form of support, in the open source world, for
people who have built a SAN with DRBD + LVM + targetcli.
Anyway, I was only trying to propose a new idea. A rough sketch of the SSH
interface I proposed is below.
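
Here is a minimal Python sketch of how cinder-volume could drive LVM on
the storage node over SSH (purely illustrative: it uses paramiko, the
class, host and user are assumptions rather than an existing Cinder
driver, and targetcli handling plus proper error handling are omitted):

    import paramiko

    class RemoteLVM:
        """Sketch: manage logical volumes on a remote node via SSH."""

        def __init__(self, host, user="cinder", vg="cinder-volumes"):
            self.vg = vg
            self.ssh = paramiko.SSHClient()
            self.ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            self.ssh.connect(host, username=user)

        def _run(self, cmd):
            # Run one command on the storage node; raise on failure.
            _, stdout, stderr = self.ssh.exec_command(cmd)
            if stdout.channel.recv_exit_status() != 0:
                raise RuntimeError(stderr.read().decode())

        def create_volume(self, name, size_gb):
            self._run("lvcreate -L %dG -n %s %s" % (size_gb, name, self.vg))

        def delete_volume(self, name):
            self._run("lvremove -f %s/%s" % (self.vg, name))

Creating and deleting the iSCSI targets with targetcli would go through
the same _run() channel; the real work would be wiring something like
this into the existing LVM driver.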

@Philipp: thank you, but at the moment DRBD 9 has little documentation,
and the OpenStack integration really lacks guides, tutorials, examples and
performance analysis. I'm currently running DRBD 8.4.6 in production, and
without more tests and examples I won't switch to the next version. For
reference, my understanding of the driver configuration is sketched below.
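
If I read the linked guide correctly, the drbdmanage-based backend would
be configured along these lines (a sketch only; the driver class and
option names are my assumption from the current tree, so please check
them against the guide):

    [drbd]
    volume_driver = cinder.volume.drivers.drbdmanagedrv.DrbdManageDrbdDriver
    drbdmanage_redundancy = 2    # keep 2 copies, as Philipp describes

With the DRBD-transport variant of the driver, the compute nodes speak
DRBD directly instead of iSCSI.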


Thank you
Marco


2017-02-01 12:39 GMT+01:00 Philipp Marek <philipp.marek at linbit.com>:

> Hi everybody,
>
> > > Hi, I'd like to know if it is possible to use openstack-cinder-volume
> > > with a remote LVM. This could be a new feature proposal if the idea is
> > > good. More precisely, I'm thinking of a solution where
> > > openstack-cinder-volume runs on a dedicated node and LVM on another
> > > node (called "storage node"). On the storage node I need a volume
> > > group (normally named cinder-volumes) and the targetcli package, so
> > > the iscsi_ip_address in cinder.conf should be an address associated
> > > with the storage node.
> > > Advantages of this solution are: 1) When I need to upgrade OpenStack,
> > > I can leave the storage node out of the process (because it has only
> > > LVM and targetcli, or another daemon used for iSCSI targets). 2)
> > > Downtime of the cinder-volume node cannot cause problems for the
> > > iSCSI part exposed to VMs. 3) Support for the open source world in
> > > cinder: LVM is the common solution for cinder in low-budget
> > > environments, but performance is good if the storage node is
> > > powerful enough.
> > >
> > > In my idea, the "interface" between openstack-cinder-volume and LVM
> > > can be SSH. Basically we need to create/remove/manage logical volumes
> > > on a remote node, and the same for the iSCSI targets.
> > >
> > > Please let me know if this can be a valid solution.
> > What you are proposing is almost like creating an LVM storage box. I
> > haven't seen any real benefit in the advantages you listed. For 1), the
> > same problems you can have upgrading the services within one node will
> > happen if the LVM services are not on the same host. For 2), you now
> > have 2 nodes to manage instead of 1, which doubles the chances of
> > having problems. And for 3), I really didn't get the advantage related
> > to the solution you are proposing.
> >
> > If you have real deployment cases where this could help (or if there
> > are other people interested), please list them here so people can see
> > more concrete benefits of this solution.
>
> please let me suggest looking at the DRBD Cinder driver that's already
> upstream.
> Basically, it allows you to use one or more storage boxes and to export
> their storage (via LV and DRBD) to the compute nodes.
>
> If you use the DRBD protocol instead of iSCSI (and configure the DRBD
> Cinder driver to store 2 copies of the data), you'll even benefit from
> redundancy - you can take down one of the storage nodes for maintenance
> while the other keeps serving data, and then (as soon as the data is
> synchronized up again) do the maintenance on the other storage node.
>
>
> See here for more details:
>     http://www.drbd.org/en/doc/users-guide-90/ch-openstack