[openstack-dev] [Fuel][Fuel-Modularization] Proposal on Decoupling Serializers from Nailgun
Vladimir Kuklin
vkuklin at mirantis.com
Fri Oct 16 16:21:19 UTC 2015
Hey, Fuelers
TL;DR This email is about how to make Fuel more flexible by decoupling the so-called serializers from Nailgun and moving them closer to Fuel Library.
* Intro
I want to bring up one of the important topics on how to make Fuel more
flexible. Some of you know that we have been discussing means of doing this
internally and now it is time to share these thoughts with all of you.
As you may know from Evgeniy Li's message [0], we are looking forward to splitting Fuel (specifically its Fuel-Web part) into a set of microservices, each one serving its own purpose such as networking configuration, partitioning, etc.
While we are working on this, it seems that we need to get rid of the so-called Nailgun serializers. They sit too close to the business logic engine and carry a lot of duplicated attributes; you cannot easily modify or extend them, and you cannot change their behaviour even when Fuel Library is capable of doing so - everything is hardcoded in Nailgun code without a clear separation between business logic and the actual generation and orchestration of deployment workflow data.
Let me give you a couple of examples:
* Case A. Replace Linux bridges with OVS bridges by default
We all know that we removed OVS as much as possible from our reference architecture due to its bugginess. Now imagine that someone magically fixed OVS and wants to use it as a provider for generic bonds and bridges. This actually means he needs to set the default provider in network_scheme for the l23network puppet module [3] to 'ovs' instead of 'lnx'. Imagine he has put this magical OVS into a package and created a plugin. To make it work, he needs to override what the network serializer sends to the nodes - and the problem is that he cannot do this without editing Nailgun code; there is no way to override the serializer (a rough sketch of what an overridable translator could look like follows below).
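Here is that sketch. The class names are hypothetical and not actual Nailgun code; the point is only that a plugin would subclass and change the default provider instead of patching a serializer:

class NetworkTranslator(object):
    """Default translator: emits 'lnx' as the L2 provider."""
    default_provider = 'lnx'

    def translate(self, node):
        # Build the network_scheme fragment consumed by the l23network
        # puppet module on the target node (simplified for illustration).
        return {
            'network_scheme': {
                'version': '1.1',
                'provider': self.default_provider,
                'transformations': [
                    {'action': 'add-br',
                     'name': 'br-mgmt',
                     'provider': self.default_provider},
                ],
            }
        }

class OVSNetworkTranslator(NetworkTranslator):
    """Plugin override: same data, but OVS bridges by default."""
    default_provider = 'ovs'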
* Case B. Make Swift Partitions Known to Fuel Library
Imagine you altered the way you partition your disks in Nailgun and created a special role for swift disks which should occupy the whole disk. In this case you should be able to get this info from the API and feed it to the swift deployment task. But it is not so easy - this data is still hardcoded in the deployment serializers, e.g. the 'mp' (mount points) field of the nodes array of hashes (see the simplified illustration below).
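Roughly what the serializer hardcodes per node today (simplified, not the exact Nailgun output; shown only to make the point):

node_entry = {
    'uid': '1',
    'fqdn': 'node-1.domain.tld',
    'role': 'primary-controller',
    # 'mp' is generated inside the deployment serializer; a swift task
    # cannot re-derive it from the volumes/partitioning API on its own.
    'mp': [
        {'point': '1', 'weight': '1'},
        {'point': '2', 'weight': '2'},
    ],
}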
* Proposed solution
In order to tackle this, I propose to extract these so-called serializers (see links [1] and [2]) and put them closer to the library. You can see that half of the code is actually duplicated between the deployment and provisioning serializers, and there is no inheritance of common code between them. If you want to introduce a new attribute and put it into astute.yaml, you need to rewrite Nailgun code. This is not very deployment/sysop/sysadmin engineer-friendly. Essentially, the proposal is to introduce a library of such 'serializers' (I would actually like to call them translators) which could leverage inheritance, polymorphism and encapsulation in a fairly standard OOP manner, but with the ability for deployment engineers to version these serializers and to let each particular task work with different sources of data exposing different API versions. A minimal sketch is shown right after this paragraph.
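A minimal sketch of what such a translator library could look like. Class and method names are purely illustrative, not an agreed API:

class BaseTranslator(object):
    # Version of the source API this translator understands.
    source_api_version = '1.0'

    def fetch(self, sources):
        """Collect raw attributes from an arbitrary set of sources."""
        raise NotImplementedError

    def translate(self, raw):
        """Convert raw attributes into the task-consumable format."""
        raise NotImplementedError


class SwiftDisksTranslator(BaseTranslator):
    """Example from Case B: expose swift disks to the swift task."""
    source_api_version = '2.0'

    def fetch(self, sources):
        # Ask the (future, hypothetical) partitioning service for
        # volumes assigned to the swift role.
        return sources['partitioning'].get_volumes(role='swift')

    def translate(self, raw):
        # Emit only what the swift deployment task actually needs.
        return {'swift_disks': [vol['device'] for vol in raw]}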
What this actually means: each task has a step called 'translation' which fetches attributes from an arbitrary set of sources and converts them into the format consumable by the deployment stage of this task. From our current architectural point of view, this will look like generating a set of YAML files that are merged by hiera, so that each puppet task can leverage the power of hiera (see the sketch below).
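In code, the 'translation' step could be as simple as this sketch; the path and names are assumptions, just to show the idea of one hiera-mergeable YAML file per task:

import yaml

def run_translation(task_name, translator, sources):
    # Fetch attributes right before the task runs and dump them where
    # hiera will pick them up and merge them for the puppet task.
    data = translator.translate(translator.fetch(sources))
    path = '/etc/hiera/tasks/%s.yaml' % task_name  # assumed location
    with open(path, 'w') as f:
        yaml.safe_dump(data, f, default_flow_style=False)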
This also means that, in the scope of our modularization initiative, each module should expose an API which those tasks access at runtime, right before the tasks are executed. If a user then changes some of the values in the databases of those modules, a rerun of such a task will lead to a different result of 'translation' and trigger actions like 'keystone_config ~> Service[keystone]' in puppet (see the sketch below).
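A tiny self-contained sketch of that flow (again, hypothetical names): the translation is re-run against the module's current data, so a changed value ends up in the regenerated hiera data, and puppet's own change detection does the rest:

class KeystoneSettingsSource(object):
    """Stand-in for a module API client."""
    def __init__(self, values):
        self._values = values

    def current(self):
        return dict(self._values)

def translate_keystone(source):
    return {'keystone': {'token_expiration': source.current()['token_expiration']}}

before = translate_keystone(KeystoneSettingsSource({'token_expiration': 3600}))
# ... the user changes the value in the module's database ...
after = translate_keystone(KeystoneSettingsSource({'token_expiration': 7200}))
assert before != after  # rerunning the task yields different hiera data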
There is a tough discussion (etherpad here: [4]) on:
1) how to handle versioning/revert capabilities
2) where to store the output produced by those 'translators'
3) which type of storage to use
Please feel free to provide your feedback on this approach and tell me where it might go wrong.
[0] http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/66563
[1]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/deployment_serializers.py
[2]
https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/provisioning_serializers.py
[3] https://github.com/xenolog/l23network
[4] https://etherpad.openstack.org/p/data-processor-per-component
--
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuklin at mirantis.com