[openstack-dev] [Fuel][Fuel-Modularization] Proposal on Decoupling Serializers from Nailgun
Vladimir Kuklin
vkuklin at mirantis.com
Thu Oct 22 11:16:56 UTC 2015
>
> Each task can use some code to transform this output to the
> representation that is actually needed for this particular task. Whenever a
> task transforms this data it can access API and do version negotiation, for
> example. Each time this transformation is performed this task can return
> the data to some storage that will save this data for sake of control and
> troubleshooting, such as, for example, user can always see which changes
> are going to be applied and decide what to do next.
>
> Also, this means that the process of data calculation itself is very
> 'lazy' or 'delayed', i. e. the data itself is calculated right at the
> beginning of deployment transaction, so that it is not locked to some
> particular details of deployment engine data processing and not prone to
> issues like 'oh, I cannot get VIP because it has not been allocated yet by
> Nailgun/oh, I cannot set it because it has already been set by Nailgun and
> there is no way to alter it'.
>
>> To me, the two paragraphs above are contradictory. If the data
>> calculations are lazy, I don't really see how one can introspect into the
>> changes that will be applied by a component at any given run. You just
>> don't have this information, and you need to calculate it anyway to see
>> which settings will be passed to a component. Maybe I got your point
>> wrong here. Please correct me if this is the case.
Oleg, I actually meant that we do it in the following stages:
1) Change stuff in any number of business logic engines you want,
configuration databases, wikipedia, 4chan, etc.
2) Schedule a transaction of deployment
3) Make the 'transformers/serializers' for each of the tasks collect all the
data and store it before execution is started
4) Allow the user to compare the differences and decide whether he actually
wants to apply this change
5) Commit the deployment - run particular tasks with the particular set of
settings which are staged and frozen (otherwise it will be impossible to
debug this stuff); a minimal sketch of stages 2-5 follows after this list
6) If there is a lack of data for some task, e.g. you need some entities to
be created during the deployment so that another task can use their output
or side effects to calculate things, then this task should not be executed
within this transaction. This means that the whole deployment should be
split into 2 transactions. I can mention an old story here - when we were
running puppet we needed to create some stuff for neutron knowing the ID
of a network that had been created by another resource 5 seconds earlier.
But we could not do this because puppet 'freezes' the input provided via
"facts" before the transaction runs. This is exactly the same use case.
So these 6 items actually mean:
1) Clear separation between the layers of the system and their functional
boundaries
2) A minimum of cross-dependencies between component data - e.g. deployment
tasks should never produce anything that is then written back into the
storage. Instead, you should have an API that provides you with the data
which is the result of a deployment run. E.g. if you need to create a user
in LDAP and you need this user's ID for some reason, your deployment task
should create this user and, instead of returning this output to the
storage, you just run another transaction in which the task that requires
this ID fetches it from LDAP (a sketch of this pattern is below).
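
To make point 2 concrete, a small sketch of the LDAP example. LdapClient
and its methods are stand-ins for whatever client would talk to the real
LDAP server, not an actual library API:

    class LdapClient(object):
        """Stand-in for the client that talks to the real LDAP server."""

        def __init__(self):
            self._users = {}

        def create_user(self, name):
            self._users[name] = 'uid-%d' % (len(self._users) + 1)

        def get_user_id(self, name):
            return self._users[name]


    ldap = LdapClient()


    # Transaction 1: the task that creates the user. It does NOT return
    # the new ID back into any deployment storage.
    def create_user_task():
        ldap.create_user('ceilometer')


    # Transaction 2: the serializer of the task that needs the ID fetches
    # it from LDAP itself (the source of truth) when this transaction is
    # staged.
    def serialize_consumer_task():
        return {'ldap_user_id': ldap.get_user_id('ceilometer')}


    create_user_task()                # committed in transaction 1
    print(serialize_consumer_task())  # staged as part of transaction 2
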
On Thu, Oct 22, 2015 at 1:25 PM, Dmitriy Shulyak <dshulyak at mirantis.com>
wrote:
>
> Hi Oleg,
>
> I want to mention that we are using a similar approach for the deployment
> engine; the difference is that we are working not with components but with
> deployment objects (these could be resources or tasks).
> Right now all the data has to be provided by the user, but we are going to
> add the concept of a managed resource, so that a resource will be able to
> request data from a 3rd-party service before execution, or via
> notification, if it is supported.
> I think this is similar to what Vladimir describes.
>
> As for the components - I see how this can be useful; for example, the
> provisioning service will require data from the networking service, but I
> think Nailgun can act as a router for such cases.
> This way we will keep the components simple and purely functional, and
> Nailgun will play the role of a client which knows how to build the
> interaction between components.
>
> So, as a summary, I think these are 2 different problems.
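
(As an aside on the 'managed resource' idea quoted above - a rough, purely
hypothetical sketch of a resource that pulls its missing inputs from a
3rd-party service right before execution; ManagedResource and
fetch_network_id are made-up names, not the actual deployment-engine API:)

    class ManagedResource(object):
        def __init__(self, name, static_inputs, managed_inputs):
            self.name = name
            self.inputs = dict(static_inputs)  # data provided by the user
            self.managed = managed_inputs      # input name -> fetch callable

        def resolve(self):
            """Pull managed inputs from their services before execution."""
            for key, fetch in self.managed.items():
                self.inputs[key] = fetch()
            return self.inputs


    def fetch_network_id():
        # Stand-in for a call to the networking service.
        return 'net-42'


    resource = ManagedResource(
        name='configure_neutron',
        static_inputs={'segmentation_type': 'vlan'},
        managed_inputs={'network_id': fetch_network_id})
    print(resource.resolve())
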
--
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuklin at mirantis.com