<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">
When the OpenStack project was started in 2010, we conceived of two languages that would have first-class status: Python and C++. The idea was that Python would be used for the API services, and that C++ would be used in special cases where Python was not a good fit, such as ultra-high-performance code, kernel drivers, or memory-constrained situations.
<div><br>
<div>Although the Python language preference has prevailed, we should not be allergic to the idea of an agent written in C++ if there are end-user benefits that justify it. I think that having a modular, easily extended agent with a very small resource footprint is wise. Key issues for a "base" agent are:</div>
<div><br>
</div>
<div>1) A way to sign the distributed bits so users can detect/prevent tampering.</div>
<div>2) Ways to extend the agent using flexible, well documented extension APIs. </div>
<div>3) A way to securely issue remote commands to the agent (to be serviced in accordance with registered commands).</div>
<div>4) A way to update the agent in-place, initiated by a remote signal (with an option to disable).</div>
<div><br>
</div>
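<div>To make item 3 a bit more concrete, here is a rough sketch (in Python, with purely hypothetical names, not taken from any existing agent) of what a registry of serviceable commands might look like. The agent would refuse anything that was not registered ahead of time:</div>
<div><br>
</div>
<div>
<pre>
# Illustrative sketch only; the names are hypothetical.
REGISTERED_COMMANDS = {}


def register_command(name):
    """Decorator that adds a handler to the registry under 'name'."""
    def wrapper(func):
        REGISTERED_COMMANDS[name] = func
        return func
    return wrapper


@register_command("ping")
def handle_ping(payload):
    return {"status": "alive"}


def dispatch(name, payload):
    """Service only commands that were registered ahead of time."""
    handler = REGISTERED_COMMANDS.get(name)
    if handler is None:
        raise ValueError("unknown command: %s" % name)
    return handler(payload)
</pre>
</div>
<div><br>
</div>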
<div>Whether the standard AMQP protocol is used for messaging is beside the point and should be discussed as an implementation detail. I see no reason why C++ could not be used to implement a low-footprint agent that offers the functionality outlined above. Perhaps one of the extension APIs could be a shell exec with standard IO connected to the parent process. That way you could easily extend the agent using Python, or whatever you want (existing configuration management tools, etc.).</div>
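<div><br>
</div>
<div>As a rough sketch of that shell-exec extension idea (hypothetical names, assuming a Python-based extension), it could be little more than a thin wrapper around a child process whose standard streams are piped back to the agent:</div>
<div><br>
</div>
<div>
<pre>
# Illustrative sketch only. Runs a command in a child process and returns
# its standard output/error to the caller, so the real work can be done by
# Python, a configuration management tool, or anything else in the guest.
import subprocess


def shell_exec(command, stdin_data=None, timeout=60):
    proc = subprocess.Popen(
        command,
        shell=True,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    out, err = proc.communicate(stdin_data, timeout=timeout)
    return {"exit_code": proc.returncode, "stdout": out, "stderr": err}
</pre>
</div>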
<div><br>
</div>
<div>Adrian<br>
<div><br>
<div>
<div>
<div>On Dec 19, 2013, at 7:51 AM, Dmitry Mescheryakov <<a href="mailto:dmescheryakov@mirantis.com">dmescheryakov@mirantis.com</a>> wrote:</div>
<br class="Apple-interchange-newline">
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">2013/12/19 Fox, Kevin M <span dir="ltr"><<a href="mailto:kevin.fox@pnnl.gov" target="_blank">kevin.fox@pnnl.gov</a>></span><br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
How about a different approach, then? OpenStack has thus far been very successful at providing an API and plugins for the things that cloud providers need to be able to switch out to suit their needs.<br>
<br>
There seem to be two different parts to the unified agent issue:<br>
* How to get RPC messages between the VM and the thing that needs to control it.<br>
* How to write a plugin that goes from a generic RPC mechanism to doing something useful in the VM.<br>
<br>
How about standardising what a plugin looks like (a Python API, a C++ API, etc.)? It would not have to deal with transport at all.<br>
<br>
Also standardize the API the controller uses to talk to the system, whether REST or AMQP.<br>
</blockquote>
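<div><br>
</div>
<div>(To illustrate the kind of transport-agnostic plugin contract described above, a rough Python sketch might look like the following. The names are hypothetical and not taken from Trove, Savanna, or any other project.)</div>
<div>
<pre>
# Purely illustrative: a plugin knows nothing about transport. Whatever
# receives the messages (REST, AMQP, a serial driver, ...) just calls
# execute() on the plugins it has loaded.
import abc


class AgentPlugin(abc.ABC):

    @abc.abstractmethod
    def describe(self):
        """Return the commands this plugin exposes."""

    @abc.abstractmethod
    def execute(self, command, params):
        """Run one command and return a JSON-serializable result."""
</pre>
</div>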
<div><br>
</div>
<div>
<div>I think that is what we discussed when we tried to select between Salt + oslo.messaging and a pure oslo.messaging framework for the agent. As you can see, we haven't come to an agreement so far :-) Also, Clint started a new thread to discuss what I believe you defined as the first part of the unified agent issue. For clarity, the thread I am referring to is</div>
<div><br>
</div>
<div><a href="http://lists.openstack.org/pipermail/openstack-dev/2013-December/022690.html">http://lists.openstack.org/pipermail/openstack-dev/2013-December/022690.html</a> </div>
</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Then the mechanism is an implementation detail. If Rackspace wants to do a VM serial driver, that's cool. If you want to use the network, that works too. Savanna/Trove/etc. don't have to care which mechanism is used, only the cloud provider does.</blockquote>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
It's not quite as good as one and only one implementation to rule them all, but it would allow providers to choose what's best for their situation and share as much code as possible.<br>
<br>
What do you think?<br>
<br>
Thanks,<br>
Kevin<br>
<br>
<br>
<br>
<br>
________________________________________<br>
From: Tim Simpson [<a href="mailto:tim.simpson@rackspace.com">tim.simpson@rackspace.com</a>]<br>
Sent: Wednesday, December 18, 2013 11:34 AM<br>
<div class="im">To: OpenStack Development Mailing List (not for usage questions)<br>
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent<br>
<br>
Thanks for the summary, Dmitry. I'm OK with these ideas, and while I still disagree with having a single, forced standard for RPC communication, I should probably let things pan out a bit before being too concerned.<br>
<br>
- Tim<br>
<br>
<br>
________________________________<br>
From: Dmitry Mescheryakov [<a href="mailto:dmescheryakov@mirantis.com">dmescheryakov@mirantis.com</a>]<br>
Sent: Wednesday, December 18, 2013 11:51 AM<br>
To: OpenStack Development Mailing List (not for usage questions)<br>
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent<br>
<br>
Tim,<br>
<br>
The unified agent we are proposing is based on the following ideas:<br>
* the core agent has _no_ functionality at all. It is a pure RPC mechanism with the ability to add whatever API is needed on top of it.<br>
* the API is organized into modules which can be reused across different projects.<br>
* there will be no single package: each project (Trove/Savanna/others) assembles its own agent based on the project's API needs.<br>
<br>
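(As a rough illustration of those ideas, with hypothetical names only, not actual code from any of the projects:)<br>
<pre>
# Illustrative sketch: a bare core that only routes RPC calls to whatever
# API modules a given project chooses to bundle with it.
class CoreAgent(object):
    """Pure dispatch; no built-in functionality of its own."""

    def __init__(self, modules):
        self._modules = dict((m.name, m) for m in modules)

    def handle_rpc(self, module, method, **kwargs):
        return getattr(self._modules[module], method)(**kwargs)

# Each project assembles its own agent from the modules it needs, e.g.
# CoreAgent([MySQLModule(), BackupModule()]) for a database-oriented agent.
</pre>
<br>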
I hope that covers your concerns.<br>
<br>
Dmitry<br>
<br>
<br>
</div>
2013/12/18 Tim Simpson <<a href="mailto:tim.simpson@rackspace.com">tim.simpson@rackspace.com</a>><br>
<div class="im">I've been following the Unified Agent mailing list thread for awhile now and, as someone who has written a fair amount of code for both of the two existing Trove agents, thought I should give my opinion about it. I like the idea of a unified
agent, but believe that forcing Trove to adopt this agent for use as its by default will stifle innovation and harm the project.<br>
<br>
There are reasons Trove has more than one agent currently. While everyone knows about the "Reference Agent" written in Python, Rackspace uses a different agent written in C++ because it takes up less memory. The concerns that led to the C++ agent would not be addressed by a unified agent, which, if anything, would be larger than the Reference Agent is currently.<br>
<br>
I also believe a unified agent represents the wrong approach philosophically. An agent by design needs to be lightweight, capable of doing exactly what it needs to and no more. This is especially true for a project like Trove, whose goal is not to provide overly general PaaS capabilities but simply the installation and maintenance of different datastores. Currently, the Trove daemons handle most of the logic and leave the agents themselves to do relatively little. This takes some effort, as many first iterations of Trove features put too much logic into the guest agents. However, through perseverance, the subsequent designs are usually cleaner and simpler to follow. A community-approved, "do everything" agent would endorse the wrong balance and lead to developers piling up logic on the guest side. Over time, features would become dependent on the Unified Agent, making it impossible to run or even contemplate lightweight agents.<br>
<br>
Trove's interface to agents today is fairly loose and could stand to be made stricter. However, it is flexible and works well enough. Essentially, the duck-typed interface of the trove.guestagent.api.API class is used to send messages, and the Trove conductor is used to receive them, at which point it updates the database. Because both of these components can be swapped out if necessary, the code could support the Unified Agent when it appears, as well as future agents.<br>
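<br>
(Not Trove's actual code, just a sketch of what "duck typed" means here: any object that exposes the same method names can be dropped in as the sender, regardless of which agent sits on the other end.)<br>
<pre>
# Hypothetical stand-in sender, used the same way the real API class would be.
class SwappableGuestAPI(object):

    def __init__(self, transport):
        self.transport = transport

    def restart(self, instance_id):
        # Fire-and-forget message to the guest; the conductor reports back
        # and updates the database out of band.
        self.transport.send(instance_id, "restart")
</pre>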
<br>
It would be a mistake, however, to alter Trove's standard method of communication to please the new Unified Agent. In general, we should try to keep Trove speaking to guest agents in Trove's terms alone to prevent bloat.<br>
<br>
Thanks,<br>
<br>
Tim<br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
</div>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><mailto:<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a>><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<div class="">
<div class="h5"><br>
<br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev<br>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</body>
</html>