[openstack-dev] [sahara] Upgrade of Hadoop components inside released version
ebergenholtz at hortonworks.com
Wed Jun 25 19:41:07 UTC 2014
Please see in-line for my thoughts/opinions on the topic:
>> From: Andrew Lazarev <alazarev at mirantis.com>
>> Subject: [openstack-dev] [sahara] Upgrade of Hadoop components inside released version
>> Date: June 24, 2014 at 5:20:27 PM EDT
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
>> Reply-To: "OpenStack Development Mailing List \(not for usage questions\)" <openstack-dev at lists.openstack.org>
>> Hi Team,
>> I want to raise a topic about upgrading components within a Hadoop version that is already supported by a released Sahara plugin. The question came up because of several change requests [1] and [2]. The topic was discussed in Atlanta [3], but we didn't come to a decision.
Any future policy that is put in place must allow a plugin to move forward in terms of functionality. Each plugin, depending on its implementation, is going to have limitations, sometimes with backwards compatibility. This is not a function of Sahara proper, but possibly of Hadoop and/or the distribution that the plugin implements. Each vendor/plugin should be allowed to control what it does or does not support.
With regard to the code submissions that are being delayed by the lack of a backwards-compatibility policy, it is my opinion that they should be allowed to move forward, as there is no policy in place being challenged and/or violated. However, these code submissions serve as a good vehicle for discussing said compatibility policy.
>> All of us agreed that existing clusters must continue to work after an OpenStack upgrade. So if a user creates a cluster with Icehouse Sahara and then upgrades OpenStack, everything should continue working as before. The trickiest operation is scaling, and it dictates a list of restrictions on a new version of a component:
>> 1. the <plugin>-<version> pair supported by the plugin must not change
>> 2. if the component upgrade requires DIB involvement, then the plugin must work with both versions of the image - the old one and the new one
>> 3. a cluster with mixed nodes (created by old code and by new code) should still be operational
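A plugin enforcing restrictions 2 and 3 might validate a scale request roughly as follows. This is only a sketch; every name here (the function, the exception, the image identifiers) is invented for illustration and does not come from the actual Sahara code base:

```python
# Hypothetical sketch: before scaling, check that every image in the
# resulting cluster belongs to a set the plugin declares compatible,
# so clusters mixing old and new nodes keep working.

class IncompatibleImageError(Exception):
    pass

# The plugin would declare which image versions may coexist in one
# cluster, e.g. an old and a rebuilt DIB image for the same Hadoop.
COMPATIBLE_IMAGE_SETS = [
    {"hdp-2.0.6-v1", "hdp-2.0.6-v2"},
]

def validate_scaling(running_images, new_images):
    """Raise unless all images (running + new) fall in one compatible set."""
    combined = set(running_images) | set(new_images)
    if len(combined) == 1:
        return  # homogeneous cluster, nothing to check
    for ok_set in COMPATIBLE_IMAGE_SETS:
        if combined <= ok_set:
            return
    raise IncompatibleImageError(
        "mixed cluster with images %s is not supported" % sorted(combined))
```

The key design point is that the compatibility declaration lives in the plugin, not in Sahara proper, which matches the position taken below that each vendor controls what it supports.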
>> Given that, we should choose a policy for component upgrades. Here are several options:
>> 1. Prohibit component upgrades in released versions of a plugin. Change the plugin version even if the Hadoop version didn't change. This solves all the listed problems but is a little frustrating for users. They will need to recreate all the clusters they have and migrate data as if it were a Hadoop upgrade. They should also consider doing a Hadoop upgrade at the same time, so that they migrate only once.
Re-creating a cluster just because the version of a plugin (or Sahara) has changed is very unlikely to occur in the real world, as this could easily involve thousands of nodes and many petabytes of data. There must be a more compelling reason to recreate a cluster than that the plugin or Sahara has changed. What's more likely is that a provisioned cluster rendered incompatible with a future version of a plugin will result in an administrator making use of the 'native' management capabilities provided by the Hadoop distribution; in the case of HDP, this is Ambari. Clusters can be completely managed through Ambari, including migration, scaling, etc. It's only the VM resources that are not managed by Ambari, but managing those is a relatively simple proposition.
>> 2. Disable some operations on clusters created by a previous version. If users don't have the option to scale a cluster, there will be no problems with mixed nodes. For this option Sahara needs to know whether or not the cluster was created by this version.
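Option 2 implies recording, at creation time, which plugin version provisioned the cluster, and checking it before a scaling operation. A minimal sketch, with all names invented for illustration (this is not the real Sahara API):

```python
# Hypothetical sketch of option 2: store the plugin version that created
# the cluster, and refuse scaling when it differs from the plugin version
# currently installed.

CURRENT_PLUGIN_VERSION = "2.0"  # assumed version of the installed plugin

class OperationDisabledError(Exception):
    pass

def check_scaling_allowed(cluster_info):
    """Allow scaling only for clusters created by the current plugin version."""
    created_by = cluster_info.get("created_by_plugin_version")
    if created_by != CURRENT_PLUGIN_VERSION:
        raise OperationDisabledError(
            "cluster was created by plugin version %s; scaling is disabled "
            "after upgrade to %s" % (created_by, CURRENT_PLUGIN_VERSION))
```

A cluster created before such a field existed would have no recorded version and would also fail the check, which is the conservative behavior this option calls for.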
If for some reason a change is introduced in a plugin that renders it incompatible across either Hadoop or OpenStack versions, it should still be possible to make such a change in favor of moving the state of the art forward. Such incompatibility may be difficult (read: expensive) or impossible to avoid. The requirement should be to specify the upgrade/migration support through documentation, specifically with respect to scaling.
>> 3. Require the change author to perform all kinds of tests and prove that a mixed cluster works as well as a non-mixed one. In that case we need a list of tests sufficient to cover all the corner cases.
My opinion is that testing and backwards compatibility are ultimately the responsibility of the plugin. As such, the plugin vendor should not be restricted in terms of what it needs or must do, but should indicate its capabilities through documentation, to set expectations with customers/users.
>> Ideas are welcome.
>> [1] https://review.openstack.org/#/c/98260/
>> [2] https://review.openstack.org/#/c/87723/
>> [3] https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org