[openstack-dev] [Nova][Cinder] Questions re progress

Billy Olsen billy.olsen at gmail.com
Thu Mar 19 00:03:04 UTC 2015


Specifically to the point of a Swift backend for Cinder...

From my understanding, Swift was never intended to provide block-device
abstractions the way Ceph does. That's not to say it couldn't, but it
doesn't today.

I wonder if you might be targeting the wrong audience by going to the
Cinder community for Swift-backed volume support in Cinder. Since Cinder
is not in the datapath, it cannot provide the block-level abstractions
necessary for Swift objects to be treated as block devices.

If you're really interested in this, you might want to reach out to the
Swift community to see if there is interest in adding block support. Once
some block-device abstraction is available in Swift, a driver can be
written for Cinder that exposes it.
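
Just to make the division of labor concrete, here is a rough, purely
illustrative sketch of what such a driver could look like. The method
names mirror Cinder's real VolumeDriver interface, but SwiftBlockClient
is hypothetical; it stands in for the block abstraction Swift would have
to grow first:

    from cinder.volume import driver

    class SwiftBlockDriver(driver.VolumeDriver):
        """Illustrative only; assumes a block layer Swift lacks."""

        def do_setup(self, context):
            # Hypothetical client for Swift's (nonexistent) block API.
            self.client = SwiftBlockClient(self.configuration)

        def create_volume(self, volume):
            self.client.create_device(volume['name'], volume['size'])

        def delete_volume(self, volume):
            self.client.delete_device(volume['name'])

        def initialize_connection(self, volume, connector):
            # Cinder only hands back connection info here; the actual
            # I/O would flow between the hypervisor and Swift's block
            # endpoint, never through Cinder itself.
            return {'driver_volume_type': 'swiftblock',
                    'data': self.client.connection_info(volume['name'])}

Everything interesting (the data path, replication, consistency) lives
behind that hypothetical client, and that is exactly the part that
doesn't exist.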

- Billy


On Wed, Mar 18, 2015 at 4:43 PM John Griffith <john.griffith8 at gmail.com>
wrote:

> On Wed, Mar 18, 2015 at 12:25 PM, Adam Lawson <alawson at aqorn.com> wrote:
>
>> The aim is cloud storage that isn't affected by a host failure; the major
>> players who deploy hyper-scale clouds architect them to prevent exactly
>> that. To me that's cloud 101. A physical machine goes down, data
>> disappears, the VMs using it fail, and folks scratch their heads and ask,
>> "this was in the cloud, right?" That's the signature of a service
>> failure, not a feature.
>>
> Yeah, the idea of an auto-evacuate is definitely nice, and I know there's
> progress there, just maybe not as far along as some would like. I'm far
> from a domain expert there, though, so I can't say much, other than I keep
> beating the drum that it doesn't require shared storage.
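>
> (For the curious, the mechanism itself is already exposed; with
> python-novaclient it's roughly the following, credentials and error
> handling elided, instance and host names made up:
>
>     from novaclient import client
>
>     nova = client.Client('2', USER, PASSWORD, PROJECT, AUTH_URL)
>
>     # Rebuild the instance on another host. With on_shared_storage=False
>     # the root disk is rebuilt from the image, which is why shared
>     # storage isn't strictly required; you lose ephemeral data, not
>     # the instance.
>     nova.servers.evacuate('INSTANCE_UUID', host='other-compute',
>                           on_shared_storage=False)
>
> The hard part, and the part still in progress, is deciding when it's
> safe to call this automatically.)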
>
> Also, I would argue that, depending on who you ask, cloud 101 actually
> says: "the instance puked; auto-spin up another one and get on with it."
> I'm certainly not arguing your points, just noting there are multiple
> views on this.
>>
>
>>
>> I'm just a very big proponent of cloud architecture that provides a
>> seamless abstraction between the service and the hardware. Ceph and DRBD
>> are decent enough. But tying data access to a single host by design is a
>> mistake IMHO, so I'm asking why we do things the way we do and whether
>> that's the way it's always going to be.
>>
>
> So others have/will chime in here... one thing I think is missing in the
> statement above is the "single host" part; avoiding that is actually the
> whole point of Ceph and the other vendor-driven clustered storage
> technologies out there. There's a ton to choose from at this point, open
> source as well as proprietary, and a lot of them are really, really good.
> This is also very much what DRBD aims to solve for you. You're not tying
> data access to a single host/node; that's kinda the whole point.
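>
> To make that concrete: with Ceph RBD, any node that has the cluster
> config and a keyring can open the same volume; nothing pins the data to
> one box. A minimal sketch with the standard rados/rbd Python bindings,
> pool and volume names made up:
>
>     import rados
>     import rbd
>
>     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>     cluster.connect()
>     ioctx = cluster.open_ioctx('volumes')
>
>     # The image's data is striped across OSDs cluster-wide, so any
>     # client node reads the same bytes; nothing lives "on" this host.
>     image = rbd.Image(ioctx, 'volume-0001')
>     first_4k = image.read(0, 4096)
>
>     image.close()
>     ioctx.close()
>     cluster.shutdown()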
>
> Granted, in the case of DRBD we've still got a ways to go. One thing we
> haven't even scratched the surface on much is virtual/shared IPs for
> targets, but we're getting there, albeit slowly (there are folks who are
> doing this already but haven't contributed their work back upstream). So
> in that case, yes, we still have a shortcoming: if the node that's acting
> as your target server goes down, you're kinda hosed.
>
>
>>
>> Of course this bumps into the question of whether all apps hosted in the
>> cloud should be cloud-aware or whether the cloud should have some
>> tolerance for legacy apps that are not written that way.
>>
>
> I've always felt "it depends". I think you should honestly be able to do
> both (and IMHO you currently can), but if you want to take full advantage
> of everything that's offered, in an OpenStack context at least, the best
> way to do that is to design and build with failure and dynamic
> provisioning in mind.
>
>
>>
>>
>>
>> *Adam Lawson*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>>
> Just my 2 cents, hope it's helpful.
>
> John
>
>
>>
>> On Wed, Mar 18, 2015 at 10:59 AM, Duncan Thomas <duncan.thomas at gmail.com>
>> wrote:
>>
>>> I'm not sure of any particular benefit to trying to run Cinder volumes
>>> over Swift, and I'm a little confused by the aim; you'd do better to use
>>> something closer to purpose-designed for the job if you want software
>>> fault-tolerant block storage. Ceph and DRBD are the two open-source
>>> options I know of.
>>>
>>> On 18 March 2015 at 19:40, Adam Lawson <alawson at aqorn.com> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> Got some questions about whether certain use cases have been addressed
>>>> and, if so, where things are at. A few things I find particularly
>>>> interesting:
>>>>
>>>>    - Automatic Nova evacuation for VMs using shared storage
>>>>    - Using Swift as a back-end for Cinder
>>>>
>>>> I know we discussed Nova evacuate last year, with some dialog leading
>>>> into the Paris Operator Summit. There were valid unknowns around what
>>>> would be required to constitute a host being "down", by what logic that
>>>> would be calculated, what would be required to initiate the move, and
>>>> which project should own the code to make it happen. Just wondering
>>>> where we are with that.
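>>>>
>>>> (To illustrate why "down" is the sticking point: the naive check most
>>>> external scripts use today is just the nova-compute service state, e.g.
>>>> via python-novaclient, reusing a client handle like the one sketched
>>>> earlier in the thread:
>>>>
>>>>     # 'state' flips to 'down' when nova-compute stops heartbeating,
>>>>     # but a network partition looks identical to a dead host, and
>>>>     # evacuating a host that is still writing to its disks risks
>>>>     # data corruption. That ambiguity is the unresolved question.
>>>>     for svc in nova.services.list(binary='nova-compute'):
>>>>         if svc.state == 'down' and svc.status == 'enabled':
>>>>             print('evacuation candidate: %s' % svc.host)
>>>>
>>>> Heartbeat loss alone is a heuristic, not proof of a dead host.)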
>>>>
>>>> On a separate note, Ceph can act as a back-end for Cinder; Swift
>>>> cannot. Perhaps there are performance trade-offs to consider, but I'm a
>>>> big fan of service-plane abstraction, and what I'm not a fan of is tying
>>>> data to physical hardware. The fact that this continues to be the case
>>>> with Cinder troubles me.
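>>>>
>>>> (For reference, wiring Ceph in really is just driver configuration on
>>>> the Cinder side; roughly the following in cinder.conf, with the pool
>>>> and user values being deployment-specific:
>>>>
>>>>     volume_driver = cinder.volume.drivers.rbd.RBDDriver
>>>>     rbd_pool = volumes
>>>>     rbd_ceph_conf = /etc/ceph/ceph.conf
>>>>     rbd_user = cinder
>>>>     rbd_secret_uuid = <libvirt secret uuid>
>>>>
>>>> The missing piece for Swift is everything underneath an analogous
>>>> driver: the block semantics themselves.)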
>>>>
>>>> So a question: are these being addressed somewhere, in some context? I
>>>> admittedly don't want to distract momentum on the Nova/Cinder teams, but
>>>> I am curious whether these exist in (or conflict with) our current
>>>> infrastructure blueprints.
>>>>
>>>> Mahalo,
>>>> Adam
>>>>
>>>> *Adam Lawson*
>>>>
>>>> AQORN, Inc.
>>>> 427 North Tatnall Street
>>>> Ste. 58461
>>>> Wilmington, Delaware 19801-2230
>>>> Toll-free: (844) 4-AQORN-NOW ext. 101
>>>> International: +1 302-387-4660
>>>> Direct: +1 916-246-2072
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Duncan Thomas
>>>
>>>
>>
>

