<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">2013/12/13 Alessandro Pilotti <span dir="ltr"><<a href="mailto:apilotti@cloudbasesolutions.com" target="_blank">apilotti@cloudbasesolutions.com</a>></span><br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi guys,<br>
<br>
This seems to become a pretty long thread with quite a lot of ideas. What do you think about setting up a meeting on IRC to talk about what direction to take?<br>
IMO this has the potential to become a completely separate project to be hosted on stackforge or similar.<br>
<br>
Generally speaking, we already use Cloudbase-Init, which besides being the de facto standard Windows “Cloud-Init type feature” (Apache 2 licensed)<br>
has recently been used as a base to provide the same functionality on FreeBSD.<br>
<br>
For reference: <a href="https://github.com/cloudbase/cloudbase-init" target="_blank">https://github.com/cloudbase/cloudbase-init</a> and <a href="http://www.cloudbase.it/cloud-init-for-windows-instances/" target="_blank">http://www.cloudbase.it/cloud-init-for-windows-instances/</a><br>
<br>
We’re seriously considering whether we should transform Cloudbase-Init into an agent or keep it in line with the current “init only, let the guest do the rest” approach, which fits pretty<br>
well with the most common deployment approaches (Heat, Puppet / Chef, Salt, etc.). Last time I spoke with Scott about this agent stuff for cloud-init, the general intention was<br>
to keep the init approach as well (please correct me if I missed something in the meantime).<br>
<br>
The limitations that we see, independently of which direction and tool are adopted for the agent, are mainly in the metadata services and the way OpenStack users employ them to<br>
communicate with Nova, Heat and the rest of the pack as orchestration requirements grow in complexity:<br>
<br>
1) We need a way to post back small amounts of data (as we already do for the encrypted Windows password) for status updates,<br>
so that users know how things are going and can be properly notified in case of post-boot errors. This might be irrelevant as long as you just create a user and deploy some SSH keys,<br>
but it becomes very important for most orchestration templates.<br>
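As a sketch of what such a post-back could look like from the guest side (the status endpoint, payload shape, and function names below are assumptions, modeled loosely on the existing encrypted-password mechanism; none of this is an existing API):

```python
import json
import urllib.request

# Hypothetical endpoint for guest status post-backs, modeled on the
# existing metadata URL scheme; this route does not exist today.
STATUS_URL = "http://169.254.169.254/openstack/latest/status"

def build_status_report(phase, success, detail=""):
    """Serialize a small post-boot status update as JSON."""
    return json.dumps({"phase": phase, "success": success, "detail": detail})

def post_status(payload, opener=urllib.request.urlopen, url=STATUS_URL):
    """POST the payload to the metadata service. The 'opener' parameter
    is injectable so the function can be exercised without a live service."""
    req = urllib.request.Request(
        url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with opener(req) as resp:
        return resp.status

report = build_status_report("post-boot", False, "orchestration hook failed")
```

The idea is only that the control plane receives a tiny structured blob per lifecycle phase, so orchestration templates can react to failures instead of timing out.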
<br>
2) The HTTP metadata service accessible from the guest via its magic IP address is IMO quite far from an optimal solution. Since every hypervisor commonly<br>
used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest / host communication services, we could define a common abstraction layer that would<br>
include a guest side (to be included in cloud-init, cloudbase-init, etc.) and a hypervisor side, to be implemented for each hypervisor and included in the related Nova drivers.<br>
This has already been proposed / implemented in various third party scenarios, but never under the OpenStack umbrella for multiple hypervisors.<br>
<br>
Metadata info could at that point be retrieved and posted by the Nova driver in a secure way and proxied to / from the guest without needing to expose the metadata<br>
service to the guest itself. This would also simplify Neutron, as we could get rid of the complexity of the Neutron metadata proxy.<br>
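A minimal sketch of what the guest side of such an abstraction layer might look like (all class names and the device path are illustrative assumptions, not an existing OpenStack API):

```python
import abc
import json

class GuestChannel(abc.ABC):
    """Guest-side half of a hypothetical host/guest transport abstraction;
    a matching hypervisor-side driver would live in Nova."""

    @abc.abstractmethod
    def get_metadata(self):
        """Fetch the metadata pushed down by the Nova driver."""

    @abc.abstractmethod
    def post_status(self, data):
        """Send a small status blob back to the hypervisor side."""

class VirtioSerialChannel(GuestChannel):
    """KVM example: newline-delimited JSON over a virtio-serial port.
    The device path below is a typical convention, not a guarantee."""

    def __init__(self, device="/dev/virtio-ports/org.openstack.0", transport=None):
        # 'transport' sends one JSON line and returns the reply; it is
        # injectable so the class can be tested without a real device.
        self.device = device
        self._transport = transport

    def _call(self, op, **kwargs):
        return self._transport(json.dumps(dict(op=op, **kwargs)))

    def get_metadata(self):
        return self._call("get-metadata")

    def post_status(self, data):
        return self._call("post-status", data=data)
```

Each hypervisor (XenStore, Hyper-V KVP, VMware guestinfo, etc.) would get its own `GuestChannel` subclass, while cloud-init / cloudbase-init code only against the abstract interface.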
<font">
<span class=""><font color="#888888"><br></font></span></blockquote><div><br></div><div>This idea was discussed in the thread named 'hypervisor-dependent agent'. A couple of existing agents were proposed: the Rackspace agent for Xen [1][2] and the oVirt agent for QEMU [3].</div>
<div><br></div><div>Many people prefer the idea of a hypervisor-independent agent that communicates over the network (a network agent). The main disadvantage of a hypervisor-dependent agent is obviously the number of implementations that need to be made for different hypervisors/OSes. It also needs a daemon (in fact, another agent) running on each Compute host.</div>
<div><br></div><div>IMHO these are very strong arguments for a network-based agent. If we start with a hypervisor-dependent agent, it will simply take too much time to produce enough implementations. On the other hand, these two types of agents can share some code, so if the need arises, people can write a hypervisor-dependent agent based on the network one, or one behaving the same way. AFAIK, that is how Trove is deployed at Rackspace: Trove has a network-based agent, and Rackspace replaces it with their own implementation.</div>
<div><br></div><div><br></div><div><span style="font-family:arial,sans-serif;font-size:13px">[1] </span><a href="https://github.com/rackerlabs/openstack-guest-agents-unix" target="_blank" style="font-family:arial,sans-serif;font-size:13px">https://github.com/rackerlabs/openstack-guest-agents-unix</a><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">[2] </span><a href="https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver" target="_blank" style="font-family:arial,sans-serif;font-size:13px">https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver</a><br>
</div><div>[3] <a href="https://github.com/oVirt/ovirt-guest-agent" target="_blank" style="font-family:arial,sans-serif;font-size:13px">https://github.com/oVirt/ovirt-guest-agent</a><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<span class=""><font color="#888888">
<br>
<br>
Alessandro<br>
</font></span><div class=""><div class="h5"><br>
<br>
On 13 Dec 2013, at 16:28 , Scott Moser <<a href="mailto:smoser@ubuntu.com">smoser@ubuntu.com</a>> wrote:<br>
<br>
> On Tue, 10 Dec 2013, Ian Wells wrote:<br>
><br>
>> On 10 December 2013 20:55, Clint Byrum <<a href="mailto:clint@fewbar.com">clint@fewbar.com</a>> wrote:<br>
>><br>
>>> If it is just a network API, it works the same for everybody. This<br>
>>> makes it simpler, and thus easier to scale out independently of compute<br>
>>> hosts. It is also something we already support and can very easily expand<br>
>>> by just adding a tiny bit of functionality to neutron-metadata-agent.<br>
>>><br>
>>> In fact we can even push routes via DHCP to send agent traffic through<br>
>>> a different neutron-metadata-agent, so I don't see any issue where we<br>
>>> are piling anything on top of an overstressed single resource. We can<br>
>>> have neutron route this traffic directly to the Heat API which hosts it,<br>
>>> and that can be load balanced and etc. etc. What is the exact scenario<br>
>>> you're trying to avoid?<br>
>>><br>
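The DHCP route push mentioned above might look like this in a dnsmasq configuration of the kind Neutron generates (the gateway address is hypothetical; dnsmasq's classless static route option is real):

```
# Push a classless static route (DHCP option 121) so that traffic for the
# metadata/agent endpoint leaves via a specific gateway on the tenant network.
dhcp-option=121,169.254.169.254/32,10.0.0.3
```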
>><br>
>> You may be making even this harder than it needs to be. You can create<br>
>> multiple networks and attach machines to multiple networks. Every point so<br>
>> far has been 'why don't we use <idea> as a backdoor into our VM without<br>
>> affecting the VM in any other way' - why can't that just be one more<br>
>> network interface set aside for whatever management instructions are<br>
>> appropriate? And then what needs pushing into Neutron is nothing more<br>
>> complex than strong port firewalling to prevent the slaves/minions talking<br>
>> to each other. If you absolutely must make the communication come from a<br>
><br>
> +1<br>
><br>
> tcp/ip works *really* well as a communication mechanism. I'm planning on<br>
> using it to send this email.<br>
><br>
> For controlled guests, simply don't break your networking. Anything that<br>
> could break networking can break /dev/<hypervisor-socket> also.<br>
><br>
> Fwiw, we already have an extremely functional "agent" in just about every<br>
> [linux] node in sshd. It's capable of marshalling just about anything in<br>
> and out of the node. (Note, I fully realize there are good reasons for<br>
> more specific agents; lots of them exist.)<br>
><br>
> I've really never understood "we don't want to rely on networking as a<br>
> transport".<br>
><br>
>> system agent and go to a VM, then that can be done by attaching the system<br>
>> agent to the administrative network - from within the system agent, which<br>
>> is the thing that needs this, rather than within Neutron, which doesn't<br>
>> really care how you use its networks. I prefer solutions where other tools<br>
>> don't have to make you a special case.<br>
><br>
> _______________________________________________<br>
> OpenStack-dev mailing list<br>
> <a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br>
<br>
</div></div></blockquote></div><br></div></div>