[Openstack-operators] [cinder] Thoughts on cinder readiness

Sean McGinnis sean.mcginnis at gmx.com
Thu Jun 1 13:49:45 UTC 2017


On Wed, May 31, 2017 at 09:06:35PM -0700, Joshua Harlow wrote:
> Erik McCormick wrote:
> >I've been running Ceph-backed Cinder since, I think, Icehouse. It's
> >really more of a function of your backend or the hypervisor than Cinder
> >itself. That being said, it's been probably my smallest OpenStack pain
> >point over the years.
> >
> >I can't imagine what sort of concurrency issues you'd run into, short of
> >a large public cloud, given that Cinder really doesn't do much once
> >provisioning a volume is complete. Maybe if you've got people taking a
> >ton of snapshots? What sort of specific issues are you concerned about?
> >
> 
> Mainly the ones that spawned articles/specs like:
> 
> https://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/
> 
> https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/cinder-volume-active-active-support.html
> 
> And a few more like those. I'm especially not going to be a big fan of
> having to (myself, or others on the GoDaddy team) go in and muck with
> volumes stuck in transitional states and so on (similar issues occur in
> Nova, and they just drain the blood out of the humans who have to go fix
> them).
> 

Hey Josh,

As those indicate, there has been an effort to add native Active/Active
HA to the Cinder services. That is mostly implemented as of Ocata, with
some additional work still needed for each backend driver to be tested
and validated, to make sure there are no unforeseen concurrency issues
when talking to the backend storage management interface from multiple
services.
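
For reference, once a driver is validated, opting in is roughly a matter
of giving every cinder-volume service in the group the same cluster name
and pointing tooz at a distributed lock manager. A sketch of what that
might look like in cinder.conf (the cluster name and ZooKeeper endpoint
are made-up examples):

    [DEFAULT]
    # all volume services that should act as one active/active
    # cluster share this name (hypothetical value)
    cluster = ceph-cluster-1

    [coordination]
    # any tooz-supported DLM backend works; zookeeper is just an example
    backend_url = zookeeper://zk1.example.com:2181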

If you are not using that HA work (which you can't really do on Liberty
anyway), there's not much risk. We added tooz in preparation for HA, but
it currently defaults to local filesystem locks for the places in the
code that require locking. So no additional configuration is needed if
you are just running a "normal" Cinder deployment.
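
To be concrete about that default, out of the box the coordination
section behaves as if you had configured something like this (path shown
for illustration):

    [coordination]
    # tooz file driver: plain local filesystem locks
    backend_url = file://$state_path

That is fine for a single cinder-volume service, and it is exactly the
setting you would swap for a distributed backend when moving to
active/active.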

There actually was one special case, now that I think of it. Both Nova
and Cinder use the os-brick library for local volume management
operations. If you are running both Nova and Cinder on the same node,
and Nova attaches are going on at the same time as Cinder is, for
instance, copying an image to a new volume, you need to make sure the
lock directory is the same for both services, since those operations
overlap in os-brick.

https://review.openstack.org/#/c/354501/

Again, this is only if running both services on the same node.
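
For example (the directory here is arbitrary; the point is that it
matches in both files on the co-located node):

    # in both nova.conf and cinder.conf on the shared node
    [oslo_concurrency]
    lock_path = /var/lib/openstack/lock

With the same lock_path, the file locks os-brick takes during attach and
image-to-volume operations are actually shared between the two services.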

Sean

> >-Erik
> >
> >On May 31, 2017 8:30 PM, "Mike Lowe" <jomlowe at iu.edu> wrote:
> >
> >    We have run ceph-backed cinder from Liberty through Newton. With the
> >    exception of a libvirt 2.x bug that should now be fixed, cinder
> >    really hasn't caused us any problems.
> >
> >    Sent from my iPad
> >
> >     > On May 31, 2017, at 6:12 PM, Joshua Harlow <harlowja at fastmail.com> wrote:
> >     >
> >     > Hi folks,
> >     >
> >     > So I was having some back and forth internally about whether
> >    cinder is ready for usage, and wanted to get other operators'
> >    thoughts on how their cinder experiences have been going, any
> >    trials and tribulations.
> >     >
> >     > For context, we are running on liberty (yes, I know; working on
> >    getting to newer versions) and folks in godaddy are starting to use
> >    more and more cinder (backed by ceph), which got me thinking about
> >    asking operators (and devs) what kind of readiness 'rating' (or
> >    whatever you would want to call it) people would give cinder in
> >    liberty.
> >     >
> >     > Some things I was thinking about were around concurrency rates,
> >    because I know that's been a common issue that the cinder developers
> >    have been working through (using tooz, and various other lock
> >    mechanisms and such).
> >     >
> >     > Have other cinder operators seen concurrent operations (or
> >    conflicting operations or ...) work better in newer releases? Is
> >    there any metric anyone has gathered on how things have gotten
> >    worse/better under scale for cinder in various releases, particularly
> >    with regard to using ceph?
> >     >
> >     > Thoughts?
> >     >
> >     > It'd be interesting to capture this (not just for my own usage),
> >    I think, because such info helps the overall user, operator, and dev
> >    community (and yes, I would expect various etherpads to have parts of
> >    this information, but it'd be nice to have a single place where
> >    operators can specify how ready they believe a project is for a
> >    given release and a given configuration, and ideally provide
> >    details/comments as to why they believe this).
> >     >
> >     > -Josh
> >     >
> >     >
> >     >
> >     >
> 
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


