[openstack-dev] [Heat] A prototype for cross-vm synchronization and communication
Stan Lagun
slagun at mirantis.com
Mon Oct 21 14:03:58 UTC 2013
Hi Lakshminarayanan,
Seems like a solid plan.
I'm probably wrong here, but isn't this too tied to Chef? I believe the
solution should be equally suitable for Chef, Puppet, SaltStack, Murano, or
even plain bash script execution. It may be difficult to intercept script
reads the way it is possible with Chef's node[][]. In Murano we have a
generic agent that can integrate all such deployment platforms using a
common syntax. The agent specification can be found here:
https://wiki.openstack.org/wiki/Murano/UnifiedAgent and it may be helpful,
or at least serve as a source of design ideas.
I'm very positive about adopting such a solution in Heat. There would be a
significant amount of work to abstract all the underlying technologies (Chef,
ZooKeeper, etc.) so that they become pluggable and replaceable without
introducing hard-coded dependencies in Heat, and to bring everything to
production quality. We could collaborate on bringing such a solution into
Heat if it were accepted by Heat's core team and the community.
On Fri, Oct 18, 2013 at 10:45 PM, Lakshminaraya Renganarayana <
lrengan at us.ibm.com> wrote:
> Hi,
>
> In the last Openstack Heat meeting there was good interest in proposals
> for cross-vm synchronization and communication and I had mentioned the
> prototype I have built. I had also promised that I would post an outline of
> the prototype ... Here it is. I might have missed some details; please feel
> free to ask / comment and I would be happy to explain more.
> ---
> Goal of the prototype: Enable cross-vm synchronization and communication
> using a high-level declarative description (no wait conditions), with chef
> as the CM tool.
>
> Design rationale / choices of the prototype (note that these were made
> just for the prototype and I am not proposing them to be the choices for
> Heat/HOT):
>
> D1: No new construct in Heat template
> => use metadata sections
> D2: No extensions to core Heat engine
> => use a pre-processor that will produce a Heat template that the
> standard Heat engine can consume
> D3: Do not require chef recipes to be modified
> => use a convention of accessing inputs/outputs from chef node[][]
> => use ruby meta-programming to intercept reads/writes to node[][] and
> forward values
> D4: Use a standard distributed coordinator (don't reinvent)
> => use zookeeper as a coordinator and as a global data space for
> communication
>
> Overall, the flow is the following:
> 1. User specifies a Heat template with details about software config and
> dependences in the metadata section of resources (see step S1 below).
> 2. A pre-processor consumes this augmented Heat template and produces
> another Heat template whose user-data sections contain cloud-init scripts,
> and also sets up a zookeeper instance with enough information to coordinate
> between the resources at runtime to realize the dependences and
> synchronization (see step S2).
> 3. The generated Heat template is fed into the standard Heat engine for
> deployment. After the VMs are created the cloud-init script kicks in: it
> installs chef-solo and then starts executing the roles specified in the
> metadata section. During the execution of the recipes the coordination is
> realized (see steps S2 and S3 below).
>
> Implementation scheme:
> S1. Use the metadata section of each resource to describe (see attached
> example)
> - a list of roles
> - inputs to and outputs from each role and their mapping to resource
> attrs (any attr)
> - convention: these inputs/outputs will be through chef node attrs
> node[][]
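>
> To give a feel for the shape of such a metadata block, here is a rough,
> made-up illustration of what the pre-processor might see for one resource
> after parsing (the names are not from the actual prototype):
>
>     app_server_metadata = {
>         "roles": ["app_server"],
>         "inputs": {
>             # becomes chef node["db"]["host"]; value comes from another resource
>             "db.host": {"Fn::GetAtt": ["db_server", "PrivateIp"]},
>         },
>         "outputs": {
>             # chef node["app"]["url"] is exposed for other resources to consume
>             "app.url": "url",
>         },
>     }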
>
> S2. Dependence analysis and cloud init script generation
>
> Dependence analysis:
> - resolve every reference that can be statically resolved using Heat's
> functions (this step just uses Heat's current dependence analysis -- Thanks
> to Zane Bitter for helping me understand this)
> - flag all unresolved references as values resolved at run-time and
> communicated via the coordinator
>
> Use cloud-init in user-data sections:
> - automatically generate a script that bootstraps chef and runs
> the roles/recipes in the order specified in the metadata section
> - generate dependence info for zookeeper to coordinate at runtime
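>
> As a rough sketch of the classification step (illustrative only, not the
> actual prototype code): anything Heat can resolve statically is substituted
> into the template, everything else is marked for run-time exchange:
>
>     def classify_inputs(role_inputs, static_values):
>         # role_inputs: maps an input name to its reference key
>         # static_values: reference keys Heat's analysis resolved statically
>         resolved, runtime = {}, {}
>         for name, ref in role_inputs.items():
>             if ref in static_values:
>                 resolved[name] = static_values[ref]  # baked into the template
>             else:
>                 runtime[name] = ref  # exchanged via the coordinator on the VM
>         return resolved, runtime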
>
> S3. Coordinate synchronization and communication at run-time
> - intercept reads and writes to node[][]
> - if it is a remote read, get it from Zookeeper
> - execution will block till the value is available
> - if a write is for a value required by a remote resource, write the value
> to Zookeeper
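>
> A minimal sketch of that run-time behaviour (illustrative only, not the
> prototype's code), assuming the kazoo ZooKeeper client and a made-up
> /outputs/<resource>/<attr> path convention:
>
>     import threading
>     from kazoo.client import KazooClient
>
>     zk = KazooClient(hosts="127.0.0.1:2181")  # address is illustrative
>     zk.start()
>
>     def publish(resource, attr, value):
>         # a write needed by a remote resource: store it under the agreed path
>         zk.create("/outputs/%s/%s" % (resource, attr),
>                   value.encode(), makepath=True)
>
>     def remote_read(resource, attr):
>         # a remote read: block until the producer has published the value
>         path = "/outputs/%s/%s" % (resource, attr)
>         created = threading.Event()
>         while True:
>             if zk.exists(path, watch=lambda event: created.set()):
>                 data, _stat = zk.get(path)
>                 return data.decode()
>             created.wait()   # woken when the znode appears
>             created.clear()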
>
> The prototype is implemented in Python and Ruby is used for chef
> interception.
>
> There are alternatives for many of the choices I have made for the
> prototype:
> - zookeeper can be replaced with any other service that provides a data
> space and distributed coordination
> - chef can be replaced by any other CM tool (a little bit of design /
> convention needed for other CM tools because of the interception used in
> the prototype to catch reads/writes to node[][])
> - the whole dependence analysis can be integrated into Heat's
> dependence analyzer
> - the component construct proposed recently (by Steve Baker) for HOT/Heat
> can be used to specify much of what is specified using the metadata
> sections in this prototype.
>
> I am interested in using my experience with this prototype to contribute
> to HOT/Heat's cross-vm synchronization and communication design and code.
> I look forward to your comments.
>
> Thanks,
> LN
>
--
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
slagun at mirantis.com