[openstack-dev] Unified Guest Agent proposal

Fox, Kevin M kevin.fox at pnnl.gov
Fri Dec 13 19:32:01 UTC 2013

Hmm.. so if I understand right, the concern you raised is something like:
 * You start up a vm
 * You make it available to your users to ssh into
 * They could grab the machine's metadata

I hadn't thought about that use case, but that does sound like it would be a problem.

Ok, so... the problem there is that you need a secret passed to the vm, but the network trick isn't secure enough to pass it, hence the config-drive-like trick, since only root/admin can read the data.

Now, that does not exclude the possibility of using the metadata server idea in combination with the config drive to make things secure. You could use the config drive to pass a cert, and then have the metadata server require that cert in order to ensure only the vm itself can pull any additional metadata.
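A minimal sketch of that idea, assuming a per-instance secret delivered via the config drive (all names here are illustrative, not an existing OpenStack API -- a real deployment would use TLS client certs, this just shows the trust flow with an HMAC challenge):

```python
import hashlib
import hmac
import secrets

def sign_request(instance_key: bytes, nonce: bytes) -> str:
    """Guest side: prove possession of the key read from the config drive."""
    return hmac.new(instance_key, nonce, hashlib.sha256).hexdigest()

def verify_request(instance_key: bytes, nonce: bytes, signature: str) -> bool:
    """Metadata server side: only a holder of the key gets further metadata."""
    expected = hmac.new(instance_key, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Demo of the round trip:
key = secrets.token_bytes(32)    # written to the config drive at boot
nonce = secrets.token_bytes(16)  # challenge issued by the metadata server
sig = sign_request(key, nonce)
```

An unprivileged user who can reach the metadata server over the network but cannot read root-only files in the guest never sees the key, so the server can refuse to hand out anything sensitive.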

The unified guest agent could use the same cert/server to establish trust too.

Does that address the issue?

From: Alessandro Pilotti [apilotti at cloudbasesolutions.com]
Sent: Friday, December 13, 2013 10:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 13 Dec 2013, at 18:39, Clint Byrum <clint at fewbar.com> wrote:

> Excerpts from Alessandro Pilotti's message of 2013-12-13 07:13:01 -0800:
>> Hi guys,
>> This seems to become a pretty long thread with quite a lot of ideas. What do you think about setting up a meeting on IRC to talk about what direction to take?
>> IMO this has the potential of becoming a completely separated project to be hosted on stackforge or similar.
>> Generally speaking, we already use Cloudbase-Init, which besides being the de facto standard Windows "Cloud-Init type feature" (Apache 2 licensed)
>> has been recently used as a base to provide the same functionality on FreeBSD.
>> For reference: https://github.com/cloudbase/cloudbase-init and http://www.cloudbase.it/cloud-init-for-windows-instances/
>> We’re seriously considering whether we should transform Cloudbase-Init into an agent or keep it in line with the current “init only, let the guest do the rest” approach, which fits pretty
>> well with the most common deployment approaches (Heat, Puppet / Chef, Salt, etc.). Last time I spoke with Scott about this agent stuff for cloud-init, the general intention was
>> to keep the init approach as well (please correct me if I missed something in the meantime).
>> The limitations that we see, independently of which direction and tool will be adopted for the agent, are mainly in the metadata services and the way OpenStack users employ them to
>> communicate with Nova, Heat and the rest of the pack as orchestration requirements grow in complexity:
> Hi, Alessandro. Really interesting thoughts. Most of what you have
> described that is not about agent transport is what we discussed
> at the Icehouse summit under the topic of the hot-software-config
> blueprint. There is definitely a need for better workflow integration
> in Heat, and that work is happening now.

This is great news. I was aware of this effort but didn’t know that it was already at such an advanced stage. Looking forward to checking it out in the coming days!

>> 1) We need a way to post back small amounts of data (e.g. like we already do for the encrypted Windows password) for status updates,
>> so that the users know how things are going and can be properly notified in case of post-boot errors. This might be irrelevant as long as you just create a user and deploy some SSH keys,
>> but becomes very important for most orchestration templates.
> Heat already has this via wait conditions. hot-software-config will
> improve upon this. I believe once a unified guest agent protocol is
> agreed upon we will make Heat use that for wait condition signalling.
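For reference, the in-guest half of wait condition signalling is tiny: Heat hands the instance a pre-signed URL and the guest POSTs a small JSON status back to it. A rough illustrative sketch (not Heat's actual client code; the exact payload fields should be checked against the Heat docs for your release):

```python
import json
import urllib.request

def build_signal(signed_url: str, status: str = "SUCCESS",
                 reason: str = "", data: str = "") -> urllib.request.Request:
    """Build the signalling request a guest would send to its wait
    condition handle URL. Field names follow the common convention but
    are illustrative here."""
    body = json.dumps({"status": status, "reason": reason, "data": data})
    return urllib.request.Request(
        signed_url,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# In a real guest you would then do:
# urllib.request.urlopen(build_signal(handle_url_from_metadata))
```

The point being: a unified agent protocol only has to carry this small payload reliably; everything else stays on the Heat side.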
>> 2) The HTTP metadata service accessible from the guest with its magic number is IMO quite far from an optimal solution. Since every hypervisor commonly
>> used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest / host communication services, we could define a common abstraction layer which will
>> include a guest side (to be included in cloud-init, cloudbase-init, etc) and a hypervisor side, to be implemented for each hypervisor and included in the related Nova drivers.
>> This has already been proposed / implemented in various third party scenarios, but never under the OpenStack umbrella for multiple hypervisors.
>> Metadata info can at that point be retrieved and posted by the Nova driver in a secure way and proxied to / from the guest without needing to expose the metadata
>> service to the guest itself. This would also simplify Neutron, as we could get rid of the complexity of the Neutron metadata proxy.
> The neutron metadata proxy is actually relatively simple. Have a look at
> it. The basic way it works in pseudo code is:
> port = lookup_requesting_ip_port(remote_ip)
> instance_id = lookup_port_instance_id(port)
> response = forward_and_sign_request_to_nova(REQUEST, instance_id, conf.nova_metadata_ip)
> return response

Heh, I’m quite familiar with the Neutron metadata agent, as we had to patch it to get metadata POST working for the Windows password generation. :-)
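To flesh out the "sign" step in Clint's pseudocode: the proxy attaches the instance id plus an HMAC over it, computed with a shared secret, so Nova can trust the attribution without the guest being able to forge it. A sketch (header names match what the proxy sends, the lookup functions in the pseudocode are stand-ins):

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    """HMAC-SHA256 over the instance id, keyed with the proxy/Nova
    shared secret."""
    return hmac.new(shared_secret.encode("utf-8"),
                    instance_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def proxy_headers(shared_secret: str, instance_id: str) -> dict:
    """Headers the metadata proxy adds before forwarding to Nova."""
    return {
        "X-Instance-ID": instance_id,
        "X-Instance-ID-Signature": sign_instance_id(shared_secret,
                                                    instance_id),
    }
```

Nova recomputes the signature with the same secret and rejects the request on mismatch, which is what keeps the scheme simple on the Neutron side.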

IMO, metadata exposed to guests via HTTP suffers from security issues due to direct exposure to the guests (think DoS in the best case) and requires additional complexity for fault tolerance
and high availability, just to name a few issues.
Besides that, folks who embraced ConfigDrive for this or other reasons are cut off from the metadata POST option, as by definition a CD-ROM drive is read-only.

I was sure this was going to be a bit of a hot topic ;). There are IMHO valid arguments on both sides; I don’t even see it as a mandatory either/or choice,
just one additional option that has been under discussion for a while.

The design and implementation would IMO be fairly easy, with the big advantage of removing most of the complexity for deployers.
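The abstraction layer proposed in point 2 above could be as small as this (purely illustrative names, not an existing Nova interface): one interface, implemented per hypervisor channel on the host side (virtio-serial for KVM, XenStore, Hyper-V KVP, VMware guestinfo, ...) with a matching guest-side counterpart in cloud-init / cloudbase-init:

```python
import abc

class GuestHostChannel(abc.ABC):
    """Hypothetical hypervisor-agnostic guest/host transport."""

    @abc.abstractmethod
    def get_metadata(self, path: str) -> bytes:
        """Fetch metadata pushed by the Nova driver (no HTTP, no magic IP)."""

    @abc.abstractmethod
    def post_status(self, payload: bytes) -> None:
        """Post a small status update back (cf. point 1 in this thread)."""

class InMemoryChannel(GuestHostChannel):
    """Trivial stand-in used only to show the interface in action."""

    def __init__(self):
        self.store = {}    # what the Nova driver would have pushed
        self.posted = []   # what the guest has signalled back

    def get_metadata(self, path: str) -> bytes:
        return self.store[path]

    def post_status(self, payload: bytes) -> None:
        self.posted.append(payload)
```

Each Nova driver would ship one concrete implementation, and the guest side stays as dumb as the init tools are today.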

> Furthermore, if we have to embrace some complexity, I would rather do so
> inside Neutron than in an agent that users must install and make work
> on every guest OS.
> The dumber an agent is, the better it will scale and more resilient it
> will be. I would credit this principle with the success of cloud-init
> (sorry, you know I love you Scott! ;). What we're talking about now is
> having an equally dumb, but differently focused agent.

I’m not disputing the dumbness of the agent, and being the maintainer of a *-Init framework (just don’t call it an agent) I definitely agree that the real business logic, so to speak,
resides in the user_data (simple / multipart / Heat / etc.) provided by the user. A transport protocol abstraction over the guest/hypervisor channels is IMO a fairly simple and, again, “dumb” feature.


> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
