[openstack-dev] [Heat] A prototype for cross-vm synchronization and communication
Lakshminaraya Renganarayana
lrengan at us.ibm.com
Fri Oct 18 18:57:43 UTC 2013
Just wanted to add a couple of clarifications:
1. The cross-vm dependences are captured via the reads/writes of attributes
in resources and in software components (described in the metadata sections).
2. These dependences are then realized via blocking reads and writes to
zookeeper, which provides the cross-vm synchronization and communicates
values between the resources.
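
For concreteness, here is a minimal sketch of what those two primitives
could look like against zookeeper. This assumes the kazoo Python client;
the znode paths and function names are illustrative only, not the
prototype's actual code.

    import time
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    def write_attr(path, value):
        # Publish an attribute value so remote resources can read it.
        zk.ensure_path(path)
        zk.set(path, value.encode("utf-8"))

    def blocking_read_attr(path, poll=1.0):
        # Block until some writer has published a value at 'path'.
        while zk.exists(path) is None:
            time.sleep(poll)
        data, _stat = zk.get(path)
        return data.decode("utf-8")
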
Thanks,
LN
Lakshminaraya Renganarayana/Watson/IBM at IBMUS wrote on 10/18/2013 02:45:01
PM:
> From: Lakshminaraya Renganarayana/Watson/IBM at IBMUS
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> Date: 10/18/2013 02:48 PM
> Subject: [openstack-dev] [Heat] A prototype for cross-vm
> synchronization and communication
>
> Hi,
>
> In the last OpenStack Heat meeting there was good interest in
> proposals for cross-vm synchronization and communication, and I
> mentioned the prototype I have built. I had also promised to post an
> outline of the prototype ... here it is. I might have missed some
> details; please feel free to ask / comment and I would be happy to
> explain more.
> ---
> Goal of the prototype: Enable cross-vm synchronization and
> communication using a high-level declarative description (no wait-
> conditions). Use chef as the CM tool.
>
> Design rationale / choices of the prototype (note that these were
> made just for the prototype and I am not proposing them to be the
> choices for Heat/HOT):
>
> D1: No new construct in Heat template
> => use metadata sections
> D2: No extensions to core Heat engine
> => use a pre-processor that will produce a Heat template that the
> standard Heat engine can consume (a rough sketch follows this list)
> D3: Do not require chef recipes to be modified
> => use a convention of accessing inputs/outputs through chef node[][]
> => use ruby meta-programming to intercept reads/writes to node[][]
> and forward values
> D4: Use a standard distributed coordinator (don't reinvent)
> => use zookeeper as a coordinator and as a global data space for
> communication
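> 
> As a rough sketch of D2, the pre-processor can be a small standalone
> pass over the template. The sketch below assumes CFN-style JSON
> templates and a stubbed-out script generator; all helper names are
> hypothetical, not the prototype's code.
> 
>     import json, sys
> 
>     def generate_cloud_init(name, roles):
>         # Stub: step S2 below generates the real bootstrap script.
>         return "#!/bin/sh\n# bootstrap chef-solo, run: %s\n" % ",".join(roles)
> 
>     def preprocess(template):
>         for name, res in template.get("Resources", {}).items():
>             roles = res.get("Metadata", {}).get("roles")
>             if not roles:
>                 continue  # no software config on this resource
>             # replace the augmented metadata with plain user-data
>             res.setdefault("Properties", {})["UserData"] = \
>                 generate_cloud_init(name, roles)
>         return template
> 
>     if __name__ == "__main__":
>         json.dump(preprocess(json.load(open(sys.argv[1]))),
>                   sys.stdout, indent=2)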
>
> Overall, the flow is the following:
> 1. User specifies a Heat template with details about software config
> and dependences in the metadata section of resources (see step S1 below).
> 2. A pre-processor consumes this augmented Heat template and
> produces another Heat template whose user-data sections carry cloud-
> init scripts; it also sets up a zookeeper instance with enough
> information to coordinate the resources at runtime and realize the
> dependences and synchronization (see step S2).
> 3. The generated Heat template is fed into the standard Heat engine
> for deployment. After the VMs are created the cloud-init script kicks
> in: it installs chef-solo and then starts executing the roles
> specified in the metadata section. During this execution of the
> recipes the coordination is realized (see steps S2 and S3 below).
>
> Implementation scheme:
> S1. Use metadata section of each resource to describe (see attached
> example; an illustrative stand-in follows this list)
> - a list of roles
> - inputs to and outputs from each role and their mapping to resource
> attrs (any attr)
> - convention: these inputs/outputs will be through chef node attrs
> node[][]
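> 
> Since the attached example was scrubbed by the archive, here is a
> hypothetical metadata section in the spirit of S1, written as the
> Python dict the pre-processor would parse; all key names are invented
> for illustration.
> 
>     metadata = {
>         "roles": ["mysql::server", "app::frontend"],
>         "inputs": {
>             # chef node attr  <-  resource attribute it maps to
>             "node[db][host]": {"resource": "DbServer", "attr": "PrivateIp"},
>         },
>         "outputs": {
>             # chef node attr  ->  attribute exported to other resources
>             "node[app][port]": {"attr": "app_port"},
>         },
>     }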
>
> S2. Dependence analysis and cloud-init script generation
>
> Dependence analysis:
> - resolve every reference that can be statically resolved using
> Heat's functions (this step just uses Heat's current dependence
> analysis -- thanks to Zane Bitter for helping me understand this)
> - flag all unresolved references as values resolved at run-time and
> communicated via the coordinator (see the sketch below)
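> 
> A sketch of that classification step, with hypothetical helper names:
> references Heat can resolve statically keep their values; the rest
> are flagged for run-time resolution through the coordinator.
> 
>     def classify_references(refs, static_resolver):
>         static, runtime = {}, []
>         for ref in refs:
>             value = static_resolver(ref)  # e.g. Heat's Ref/GetAtt resolution
>             if value is not None:
>                 static[ref] = value       # substituted into the template
>             else:
>                 runtime.append(ref)       # resolved via zookeeper at run-time
>         return static, runtime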
>
> Use cloud-init in user-data sections:
> - automatically generate a script that bootstraps chef and runs the
> roles/recipes in the order specified in the metadata section (a
> hypothetical rendering follows below)
> - generate dependence info for zookeeper to coordinate at runtime
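> 
> A hypothetical rendering of the generated user-data (the real script
> the prototype emits may differ):
> 
>     def generate_user_data(roles):
>         # run_list goes into node.json, which chef-solo consumes via -j
>         run_list = ",".join('"role[%s]"' % r for r in roles)
>         return "\n".join([
>             "#!/bin/bash",
>             "# install chef-solo via the omnibus installer (2013-era URL)",
>             "curl -L https://www.opscode.com/chef/install.sh | bash",
>             "mkdir -p /etc/chef",
>             "echo '{\"run_list\": [%s]}' > /etc/chef/node.json" % run_list,
>             "chef-solo -j /etc/chef/node.json",
>         ])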
>
> S3. Coordinate synchronization and communication at run-time
> - intercept reads and writes to node[][]
> - if it is a remote read, get it from Zookeeper
> - execution will block till the value is available
> - if a write is for a value required by a remote resource, write the
> value to Zookeeper (a Python analogue of this interception is
> sketched below)
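> 
> A Python analogue of that interception (the prototype does this with
> Ruby meta-programming on node[][]; flat keys and the coordinator
> interface here are simplifications for illustration):
> 
>     class InterceptedAttrs(dict):
>         def __init__(self, remote_reads, remote_writes, coordinator):
>             dict.__init__(self)
>             self.remote_reads = remote_reads    # keys produced elsewhere
>             self.remote_writes = remote_writes  # keys consumed elsewhere
>             self.coordinator = coordinator      # wraps the zookeeper calls
> 
>         def __getitem__(self, key):
>             if key not in self and key in self.remote_reads:
>                 # blocks until a remote writer publishes the value
>                 self[key] = self.coordinator.blocking_read(key)
>             return dict.__getitem__(self, key)
> 
>         def __setitem__(self, key, value):
>             dict.__setitem__(self, key, value)
>             if key in self.remote_writes:
>                 self.coordinator.write(key, value)  # visible to remote readers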
>
> The prototype is implemented in Python and Ruby is used for chef
> interception.
>
> There are alternatives for many of the choices I have made for the
> prototype:
> - zookeeper can be replaced with any other service that provides a
> data space and distributed coordination
> - chef can be replaced by any other CM tool (a little design /
> convention work would be needed for other CM tools because of the
> interception the prototype uses to catch reads/writes to node[][])
> - the whole dependence analysis can be integrated into Heat's
> dependence analyzer
> - the component construct proposed recently (by Steve Baker) for
> HOT/Heat can be used to specify much of what is specified using the
> metadata sections in this prototype.
>
> I am interested in using my experience with this prototype to
> contribute to HOT/Heat's cross-vm synchronization and communication
> design and code. I look forward to your comments.
>
> Thanks,
> LN