[Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

Brian Waldon brian.waldon at rackspace.com
Thu Jul 12 17:11:15 UTC 2012


We actually care a hell of a lot about compatibility. We also recognize there are times when we have to sacrifice compatibility so we can move forward at a reasonable pace.

If you think we are handling anything the wrong way, we would love to hear your suggestions. If you just want to make comments like this, I would suggest you keep them to yourself.

Have a great day!
Brian Waldon

On Jul 12, 2012, at 9:32 AM, George Reese wrote:

> This community just doesn't give a rat's ass about compatibility, does it?
> 
> -George
> 
> On Jul 11, 2012, at 10:26 AM, Vishvananda Ishaya wrote:
> 
>> Hello Everyone,
>> 
>> Now that the PPB has decided to promote Cinder to core for the Folsom
>> release, we need to decide what happens to the existing Nova Volume
>> code. As far as I can see, there are two basic strategies. I'm going
>> to give an overview of each here:
>> 
>> Option 1 -- Remove Nova Volume
>> ==============================
>> 
>> Process
>> -------
>> * Remove all nova-volume code from the nova project
>> * Leave the existing nova-volume database upgrades and tables in
>>   place for Folsom to allow for migration
>> * Provide a simple script in cinder to copy data from the nova
>>   database to the cinder database (the schemas of the tables in
>>   cinder are equivalent to the current nova tables); a rough
>>   sketch of such a script follows after this list
>> * Work with package maintainers to provide a package-based upgrade
>>   from nova-volume packages to cinder packages
>> * Remove the db tables immediately after Folsom
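>>
>> For illustration only, here is a rough sketch of what that copy
>> script could look like. This is not the official cinder migration
>> tool; the connection URLs and the table list are assumptions, and a
>> real script would read them from nova.conf / cinder.conf. It also
>> assumes the cinder tables already exist (e.g. created by
>> "cinder-manage db sync") with the same shape as the nova ones:
>>
>>   # copy_volume_tables.py -- unofficial sketch, not the real tool.
>>   from sqlalchemy import create_engine, MetaData, Table
>>
>>   NOVA_DB = 'mysql://nova:secret@dbhost/nova'        # placeholder URL
>>   CINDER_DB = 'mysql://cinder:secret@dbhost/cinder'  # placeholder URL
>>
>>   # Volume-related tables assumed to exist with matching schemas.
>>   TABLES = ['volumes', 'volume_types', 'volume_type_extra_specs',
>>             'volume_metadata', 'snapshots', 'iscsi_targets']
>>
>>   def copy_volume_tables():
>>       nova = create_engine(NOVA_DB)
>>       cinder = create_engine(CINDER_DB)
>>       for name in TABLES:
>>           # Reflect the table definition from the nova database.
>>           src = Table(name, MetaData(), autoload=True,
>>                       autoload_with=nova)
>>           rows = [dict(row) for row in nova.execute(src.select())]
>>           if not rows:
>>               continue
>>           # Bulk-insert into the identically named cinder table.
>>           dst = Table(name, MetaData(), autoload=True,
>>                       autoload_with=cinder)
>>           cinder.execute(dst.insert(), rows)
>>
>>   if __name__ == '__main__':
>>       copy_volume_tables()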
>> 
>> Disadvantages
>> -------------
>> * Forces deployments to go through the process of migrating to cinder
>>   if they want to use volumes in the Folsom release
>> 
>> Option 2 -- Deprecate Nova Volume
>> =================================
>> 
>> Process
>> -------
>> * Mark the nova-volume code deprecated but leave it in the project
>>   for the Folsom release
>> * Provide a migration path at Folsom
>> * Backport bugfixes to nova-volume throughout the G-cycle
>> * Provide a second migration path at G
>> * Package maintainers can decide when to migrate to cinder
>> 
>> Disadvantages
>> -------------
>> * Extra maintenance effort
>> * More confusion about storage in OpenStack
>> * More complicated upgrade paths need to be supported
>> 
>> Personally I think Option 1 is a much more manageable strategy because
>> the volume code doesn't get a whole lot of attention. I want to keep
>> things simple and clean with one deployment strategy. My opinion is that
>> if we choose option 2 we will be sacrificing significant feature
>> development in G in order to continue to maintain nova-volume for another
>> release.
>> 
>> But we really need to know if this is going to cause major pain to existing
>> deployments out there. If it causes a bad experience for deployers we
>> need to take our medicine and go with option 2. Keep in mind that it
>> shouldn't make any difference to end users whether cinder or nova-volume
>> is being used. The current nova-client can use either one.
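>>
>> To illustrate the end-user view, a minimal python-novaclient sketch
>> (credentials and endpoint below are placeholders); the same calls
>> should work regardless of which service answers the volume API:
>>
>>   from novaclient.v1_1 import client
>>
>>   # Placeholder credentials -- substitute real ones.
>>   nova = client.Client('demo', 'secret', 'demo-tenant',
>>                        'http://keystone.example.com:5000/v2.0/')
>>
>>   # Create a 1 GB volume and list volumes; the caller never sees
>>   # whether nova-volume or cinder handled the request.
>>   vol = nova.volumes.create(size=1, display_name='test-vol')
>>   print([v.id for v in nova.volumes.list()])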
>> 
>> Vish
>> 
>> 
>> _______________________________________________
>> Mailing list: https://launchpad.net/~openstack
>> Post to     : openstack at lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
> 
> --
> George Reese - Chief Technology Officer, enStratus
> e: george.reese at enstratus.com    Skype: nspollution    t: @GeorgeReese    p: +1.207.956.0217
> enStratus: Enterprise Cloud Management - @enStratus - http://www.enstratus.com
> To schedule a meeting with me: http://tungle.me/GeorgeReese
> 
