[openstack-dev] Unified Guest Agent proposal

Dmitry Mescheryakov dmescheryakov at mirantis.com
Tue Dec 10 20:37:37 UTC 2013


>> What is the exact scenario you're trying to avoid?

It is a DDoS attack on either the transport (the AMQP / ZeroMQ provider) or
the server (Salt / our own self-written server). Looking at the design, it
does not seem like such an attack could be contained within the tenant it
originates from.

In the current OpenStack design I see only one similarly vulnerable
component - the metadata server. With that in mind, maybe I am just
overestimating the threat?


2013/12/10 Clint Byrum <clint at fewbar.com>

> Excerpts from Dmitry Mescheryakov's message of 2013-12-10 11:08:58 -0800:
> > 2013/12/10 Clint Byrum <clint at fewbar.com>
> >
> > > Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:25:26 -0800:
> > > > And one more thing,
> > > >
> > > > Sandy Walsh pointed to the client Rackspace developed and uses - [1], [2].
> > > > Its design is somewhat different and can be expressed by the following
> > > > formula:
> > > >
> > > > App -> Host (XenStore) <-> Guest Agent
> > > >
> > > > (taken from the wiki [3])
> > > >
> > > > It has an obvious disadvantage - it is hypervisor-dependent and
> > > > currently implemented for Xen only. On the other hand, such a design
> > > > should not have the shared-facility vulnerability, since the agent
> > > > accesses the server not directly but via XenStore (which AFAIU is
> > > > local to the compute node).
> > > >
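
For illustration, the guest side of such a XenStore exchange could look
roughly like the sketch below. The data/host and data/guest key layout and
the JSON message format are my assumptions from skimming the published agent
code, not a documented protocol:

    # Sketch of a guest-side agent loop over XenStore. Requires the
    # xenstore CLI shipped with the Xen guest tools. Key paths and the
    # message format are assumptions, not the actual Rackspace protocol.
    import json
    import subprocess
    import time

    def xs(*args):
        # Thin wrapper around the xenstore command-line utility.
        return subprocess.check_output(('xenstore',) + args).decode().strip()

    def handle(request):
        # Toy dispatcher; a real agent would map command names to actions.
        if request.get('name') == 'ping':
            return {'returncode': 0, 'message': 'pong'}
        return {'returncode': 1, 'message': 'unknown command'}

    while True:
        # Assumes the host creates one data/host/<id> entry per request.
        for msg_id in xs('list', 'data/host').splitlines():
            request = json.loads(xs('read', 'data/host/' + msg_id))
            reply = handle(request)
            xs('write', 'data/guest/' + msg_id, json.dumps(reply))
            xs('rm', 'data/host/' + msg_id)  # consume the request
        time.sleep(0.5)

The host side would do the mirror image through the same paths, which is
what keeps the guest off the network entirely.
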
> > >
> > > I don't actually see any advantage to this approach. It seems to me
> > > that it would be simpler to expose and manage a single network protocol
> > > than it would be to expose hypervisor-level communications for all
> > > hypervisors.
> > >
> >
> > I think the Rackspace agent design could be expanded as follows:
> >
> > Controller (Savanna/Trove) <-> AMQP/ZeroMQ <-> Agent on Compute host <-> XenStore <-> Guest Agent
> >
> > That is somewhat speculative, because if I understood it correctly, the
> > published code covers only the second half of the exchange:
> >
> > Python API / CMD interface <-> XenStore <-> Guest Agent
> >
> > Assuming I got it right:
> > While more complex, such a design takes the pressure off the AMQP/ZeroMQ
> > providers: on the 'Agent on Compute' you can easily throttle the volume
> > of messages emitted by a guest. That is easy to do because the agent runs
> > on a compute host. In the worst case, if it does get abused by a guest,
> > it affects that compute host only, not a whole segment of OpenStack.
> >
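
To make the throttling point concrete, here is a minimal sketch of a
per-guest token bucket the 'Agent on Compute' could apply before relaying
messages upstream. All names and numbers are made up for illustration:

    # Hypothetical per-guest token-bucket throttle for the compute-host
    # agent. A guest that floods only drains its own bucket; excess
    # messages are dropped before they ever reach AMQP/ZeroMQ.
    import time
    from collections import defaultdict

    RATE = 10.0    # sustained messages per second allowed per guest
    BURST = 20.0   # short-term burst capacity

    class GuestThrottle(object):
        def __init__(self):
            self.tokens = defaultdict(lambda: BURST)
            self.last = defaultdict(time.time)

        def allow(self, guest_id):
            # Refill the guest's bucket for the elapsed time, then try
            # to spend one token on the current message.
            now = time.time()
            elapsed = now - self.last[guest_id]
            self.last[guest_id] = now
            self.tokens[guest_id] = min(BURST,
                                        self.tokens[guest_id] + elapsed * RATE)
            if self.tokens[guest_id] >= 1.0:
                self.tokens[guest_id] -= 1.0
                return True
            return False  # over quota: drop instead of relaying

The agent would check throttle.allow(guest_id) before forwarding each
message; since the bucket lives on the compute host, a flood never reaches
the shared transport at all.
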
>
> This still requires that we also write a backend to talk to the host
> for all virt drivers. It also means that any OS we haven't written an
> implementation for needs to be hypervisor-aware. That sounds like a
> never ending battle.
>
> If it is just a network API, it works the same for everybody. This
> makes it simpler, and thus easier to scale out independently of compute
> hosts. It is also something we already support and can very easily expand
> by just adding a tiny bit of functionality to neutron-metadata-agent.
>
> In fact we can even push routes via DHCP to send agent traffic through
> a different neutron-metadata-agent, so I don't see any issue where we
> are piling anything on top of an overstressed single resource. We can
> have neutron route this traffic directly to the Heat API which hosts it,
> and that can be load balanced, etc. What is the exact scenario
> you're trying to avoid?
>
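
For what it's worth, the host_routes attribute on a Neutron subnet already
covers the DHCP part of this: the routes are handed to guests as classless
static routes (DHCP option 121). A rough sketch against the Neutron REST
API, with the endpoint, SUBNET_ID, token, and next-hop address all
placeholders:

    # Hedged sketch: steer metadata/agent traffic to a different next hop
    # for one subnet by updating its host_routes. All IDs, addresses, and
    # the token below are placeholders, not real values.
    import json
    import requests

    body = {'subnet': {'host_routes': [
        {'destination': '169.254.169.254/32', 'nexthop': '10.0.0.2'},
    ]}}
    resp = requests.put(
        'http://neutron.example.com:9696/v2.0/subnets/SUBNET_ID',
        headers={'X-Auth-Token': 'KEYSTONE_TOKEN',
                 'Content-Type': 'application/json'},
        data=json.dumps(body))
    resp.raise_for_status()

Guests would then pick up the route on their next lease renewal, with no
guest-side configuration needed.
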