<div dir="ltr">>> What is the exact scenario you're trying to avoid?<div><br></div><div>It is DDoS attack on either transport (AMQP / ZeroMQ provider) or server (Salt / Our own self-written server). Looking at the design, it doesn't look like the attack could be somehow contained within a tenant it is coming from. </div>
<div><br></div><div>In the current OpenStack design I see only one similarly vulnerable component - metadata server. Keeping that in mind, maybe I just overestimate the threat?</div></div><div class="gmail_extra"><br><br>
<div class="gmail_quote">2013/12/10 Clint Byrum <span dir="ltr"><<a href="mailto:clint@fewbar.com" target="_blank">clint@fewbar.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Excerpts from Dmitry Mescheryakov's message of 2013-12-10 11:08:58 -0800:
> 2013/12/10 Clint Byrum <clint@fewbar.com>
>
> > Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:25:26 -0800:
> > > And one more thing,
> > >
> > > Sandy Walsh pointed to the client Rackspace developed and uses - [1], [2].
> > > Its design is somewhat different and can be expressed by the following
> > > formula:
> > >
> > > App -> Host (XenStore) <-> Guest Agent
> > >
> > > (taken from the wiki [3])
> > >
> > > It has an obvious disadvantage - it is hypervisor-dependent and currently
> > > implemented for Xen only. On the other hand, such a design should not have
> > > the shared-facility vulnerability, as the Agent accesses the server not
> > > directly but via XenStore (which AFAIU is compute-node based).
> > >
> >
> > I don't actually see any advantage to this approach. It seems to me that
> > it would be simpler to expose and manage a single network protocol than
> > it would be to expose hypervisor level communications for all hypervisors.
> >
>
> I think the Rackspace agent design could be expanded as follows:
>
> Controller (Savanna/Trove) <-> AMQP/ZeroMQ <-> Agent on Compute host <->
> XenStore <-> Guest Agent
>
> That is somewhat speculative, because if I understood it correctly the
> open-sourced code covers only the second part of the exchange:
>
> Python API / CMD interface <-> XenStore <-> Guest Agent
>
> Assuming I got it right:
> While more complex, such a design removes pressure from the AMQP/ZeroMQ
> providers: on the 'Agent on Compute host' you can easily throttle the
> number of messages emitted by a Guest. That is easy since such an agent
> runs on a compute host. In the worst case, if it happens to be abused by a
> guest, it affects only that compute host and not a whole segment of
> OpenStack.
>

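(For illustration only: a rough sketch of the kind of per-guest throttling such a compute-host agent could apply before forwarding messages to AMQP/ZeroMQ. This is not code from the Rackspace agent or any OpenStack project; the class, rates, and message fields are made up.)

    import time
    from collections import defaultdict

    class PerGuestThrottle(object):
        """Token bucket per guest: at most `rate` messages/sec, bursts up to `burst`."""

        def __init__(self, rate=10.0, burst=20.0):
            self.rate = rate
            self.burst = burst
            self.tokens = defaultdict(lambda: burst)   # current tokens per guest
            self.last = defaultdict(time.time)         # last time each guest was seen

        def allow(self, guest_id):
            now = time.time()
            elapsed = now - self.last[guest_id]
            self.last[guest_id] = now
            # refill this guest's tokens, capped at the burst size
            self.tokens[guest_id] = min(self.burst,
                                        self.tokens[guest_id] + elapsed * self.rate)
            if self.tokens[guest_id] >= 1.0:
                self.tokens[guest_id] -= 1.0
                return True
            return False   # over the limit: drop or delay this guest's message

    # hypothetical forwarding loop on the compute-host agent:
    # throttle = PerGuestThrottle(rate=5.0)
    # if throttle.allow(msg.guest_id):
    #     forward_to_amqp(msg)
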
This still requires that we also write a backend to talk to the host
for all virt drivers. It also means that any OS we haven't written an
implementation for needs to be hypervisor-aware. That sounds like a
never-ending battle.

If it is just a network API, it works the same for everybody. This
makes it simpler, and thus easier to scale out independently of compute
hosts. It is also something we already support and can very easily expand
by just adding a tiny bit of functionality to neutron-metadata-agent.

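(Side note, purely illustrative: with a plain network API the in-guest agent needs nothing hypervisor-specific; a well-known HTTP endpoint is enough. The URL and payload below are invented, not an existing interface.)

    import time
    import requests

    AGENT_ENDPOINT = 'http://169.254.169.254/agent/v1/tasks'   # hypothetical URL

    while True:
        # poll the control plane for work; the task format is made up
        task = requests.get(AGENT_ENDPOINT, timeout=10).json()
        # ... execute the task and report the result back ...
        time.sleep(30)
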
In fact we can even push routes via DHCP to send agent traffic through
a different neutron-metadata-agent, so I don't see any issue where we
are piling anything on top of an overstressed single resource. We can
have neutron route this traffic directly to the Heat API which hosts it,
and that can be load balanced, etc. What is the exact scenario
you're trying to avoid?
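
(Again purely as an illustration of the route-pushing idea: neutron can hand out extra static routes to guests via DHCP by setting host_routes on a subnet. The IDs, addresses, and credentials below are made up; this assumes python-neutronclient.)

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    # push an extra static route (DHCP option 121) so that agent traffic to a
    # well-known address goes through a dedicated next hop, e.g. a separate
    # neutron-metadata-agent or a load-balanced API endpoint
    neutron.update_subnet('SUBNET_ID', {
        'subnet': {
            'host_routes': [
                {'destination': '169.254.169.254/32', 'nexthop': '10.0.0.5'},
            ],
        },
    })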
<div class="HOEnZb"><div class="h5"><br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div>