[openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

Aleksandr Didenko adidenko at mirantis.com
Tue Nov 3 09:50:42 UTC 2015


Hi,

let me try to rephrase this a bit and Bogdan will correct me if I'm wrong
or missing something.

We have a set of top-scope manifests (called Fuel puppet tasks) that we use
for OpenStack deployment. We execute those tasks with "puppet apply". Each
task is supposed to bring the target system into some desired state, so
puppet compiles a catalog and applies it. So basically, a puppet catalog is
the desired system state.

So we can compile* catalogs for all top-scope manifests in the master branch
and store those compiled* catalogs in the fuel-library repo. Then, for each
proposed patch, CI will compare the new catalogs with the stored ones and
print out the difference, if any. This will pretty much show what the
proposed patch is going to change in the system configuration.
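
For illustration, a minimal sketch of such a comparison step in Ruby (the
directory layout and file naming here are assumptions for the example, not
the actual fuel-library structure):

  # compare_catalogs.rb - naive comparison of committed vs. freshly built
  # catalog listings; prints a unified diff and fails on any change.
  # Assumes committed listings live in tests/noop/catalogs and the new ones
  # in tests/noop/catalogs_new (hypothetical paths).
  def diff_catalogs(committed_path, new_path)
    diff = `diff -u #{committed_path} #{new_path}`
    return true if diff.empty?
    puts "Deployment data changed for #{File.basename(committed_path)}:"
    puts diff
    false
  end

  results = Dir.glob('tests/noop/catalogs/*.txt').map do |committed|
    diff_catalogs(committed, committed.sub('catalogs', 'catalogs_new'))
  end
  exit(results.all? ? 0 : 1)

In non-voting mode the job could simply print the diff and ignore the exit
code.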

We have discussed such checks several times before, iirc, but we did not
have the right tools to implement them. Well, now we do :) I think this
could be quite useful even in non-voting mode.

* By compiled catalogs I don't mean actual/real puppet catalogs; I mean
sorted lists of all classes/resources with all their parameters that we
collect during puppet-rspec tests in our noop test framework, something like
standard puppet-rspec coverage. See example [0] for the networks.pp task [1].
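
Roughly, producing such a listing from an rspec-puppet catalogue could look
like the sketch below (only an illustration of the idea; the real code lives
in the noop test framework):

  # Serialize a compiled rspec-puppet catalogue into a sorted,
  # diff-friendly listing of all resources with their parameters.
  def catalog_to_text(catalogue)
    catalogue.resources.sort_by(&:to_s).map do |resource|
      sorted = resource.to_hash.sort_by { |name, _| name.to_s }
      params = sorted.map { |name, value| "  #{name} => #{value.inspect}," }
      (["#{resource} {"] + params + ['}']).join("\n")
    end.join("\n\n")
  end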

Regards,
Alex

[0] http://paste.openstack.org/show/477839/
[1]
https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp


On Mon, Nov 2, 2015 at 5:35 PM, Bogdan Dobrelya <bdobrelia at mirantis.com>
wrote:

> Here is a docs update [1] for the patch [2] - which is rather a
> framework - being discussed here.
> Note that the tool fuel_noop_tests.rb Dmitry Ilyin wrote became a Noop
> testing framework, which is Fuel-specific. But the same approach may be
> used for any set of puppet modules and composition layer manifests
> with a dataset of deployment parameters you want tracked against
> potential regressions.
>
> I believe we should think about how that Noop testing framework (and
> the deployment data checks under discussion as well) might benefit the
> puppet community.
>
> [1] https://review.openstack.org/240901
> [2] https://review.openstack.org/240015
>
> On 29.10.2015 15:24, Bogdan Dobrelya wrote:
> > Hello.
> > There are a few possible types of deployment regressions: when changing
> > a module version used from upstream (or an internal module repo), for
> > example from Liberty to Mitaka, or when changing the composition layer
> > (modular tasks in Fuel). Specifically, adding/removing/changing classes
> > and class parameters.
> >
> > An example regression for the swift deployment data is [0]: something
> > was changed unnoticed by the existing noop tests, and as a result the
> > swift data ended up being stored in the root partition.
> >
> > The suggested per-commit regressions detection [1] for deployment data
> > automatically detects whether a class in a noop catalog run has gained
> > or lost a parameter, or whether a parameter has been changed to another
> > value by the patch under test (see the sketch below). Later, this check
> > could even replace the existing noop tests and everything would be
> > checked automatically, provided that every deployment scenario,
> > represented as a YAML file [2] in Fuel, is covered by a corresponding
> > template.
> > Note: the tool [3] can help to get all deployment cases (-Y) and all
> > deployment tasks (-S) as well.
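> >
> > As an illustration only (not the framework's actual code), detecting
> > gained/lost/changed parameters for a single class boils down to
> > comparing two parameter hashes:
> >
> >   # committed and current are hashes of parameter name => value
> >   def parameter_changes(committed, current)
> >     added   = current.keys - committed.keys
> >     removed = committed.keys - current.keys
> >     changed = (committed.keys & current.keys).reject do |key|
> >       committed[key] == current[key]
> >     end
> >     { 'added' => added, 'removed' => removed, 'changed' => changed }
> >   end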
> >
> > I propose to review the patch [1], understand how it works (see the
> > tl;dr section below) and start using it ASAP. The earlier we commit the
> > "initial" data layer state, the fewer regressions will pop up.
> >
> > (tl;dr)
> > The check should be done for every modular component (aka deployment
> > task). The data generated in the noop catalog run for all classes and
> > defines of a given deployment task should be verified against its
> > "acknowledged" (committed) state. The test gate should fail if changes
> > have been found, such as a new parameter with a defined value, a removed
> > parameter, or a changed parameter value.
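> >
> > A minimal sketch of such a check as an rspec expectation (the names
> > here are illustrative, not the actual framework API; catalog_to_text
> > stands for whatever helper serializes the compiled catalogue to text):
> >
> >   it 'matches the committed deployment data' do
> >     # committed_template_path is a hypothetical path to the
> >     # acknowledged (committed) state of this deployment task
> >     committed = File.read(committed_template_path)
> >     current   = catalog_to_text(catalogue)  # this noop run's state
> >     expect(current).to eq(committed),
> >       'Deployment data changed - review the diff, rebuild the template'
> >   end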
> >
> > In order to remove a regression, a patch author will have to add (and
> > reviewers should acknowledge) the detected changes to the committed
> > state of the deployment data. This may be done manually, with a tool
> > like [3], by a pre-commit hook, or even on the CI side!
> > The regression check should show the diff between the committed state
> > and the new state proposed in a patch. The changed state should be
> > *reviewed* and accepted with the patch to become the committed one. So
> > the deployment data will evolve with *only* approved changes, and those
> > changes will be very easy to discover for each patch under review!
> > No more regressions, everyone happy.
> >
> > Examples:
> >
> > - A. A patch author removed the mpm_module parameter from the
> > composition layer (the apache modular task). The test should fail with
> > a diff like:
> >
> >       @@ -90,7 +90,7 @@
> >          manage_user            => 'true',
> >          max_keepalive_requests => '100',
> >          mod_dir                => '/etc/httpd/conf.d',
> >       -  mpm_module             => 'false',
> >       +  mpm_module             => 'prefork',
> >          name                   => 'Apache',
> >          package_ensure         => 'installed',
> >          ports_file             => '/etc/httpd/conf/ports.conf',
> >
> > It illustrates that the committed value of mpm_module was 'false',
> > but the new one came as 'prefork', likely from the apache class
> > defaults.
> > The solution:
> > Follow the failed build link and review the detected changes (a diff).
> > Acknowledge the changes and include the rebuilt templates in the patch
> > as a new revision. An example command for the tool [3] (use -h for help):
> > ./utils/jenkins/fuel_noop_tests.rb -q -b -s api-proxy/api-proxy_spec.rb
> >
> > Or edit the committed templates manually and include data changes in the
> > patch as well.
> >
> > - B. An upstream module author added a new parameter mpm_mode with a
> > default of '123'. The test should fail with a diff like:
> >
> >        @@ -90,6 +90,7 @@
> >           manage_user            => 'true',
> >           max_keepalive_requests => '100',
> >           mod_dir                => '/etc/httpd/conf.d',
> >        +  mpm_mode               => '123',
> >           mpm_module             => 'false',
> >           name                   => 'Apache',
> >           package_ensure         => 'installed',
> >
> > It illustrates that the composition layer is not consistent with the
> > upstream module's data schema, which could be a potential deployment
> > regression (a new parameter added upstream goes with its default,
> > being ignored by the composition manifest).
> > The solution is the same as for case A.
> >
> > [0] https://bugs.launchpad.net/fuel/+bug/1508482
> > [1] https://review.openstack.org/240015
> > [2]
> > https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
> > [3]
> > https://review.openstack.org/#/c/240015/7/utils/jenkins/fuel_noop_tests.rb
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>