<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">On 12/12/2013 10:24 AM, Dmitry
      Mescheryakov wrote:<br>
    </div>
    <blockquote
cite="mid:CALU-hjGzSZjoMO=+NfwsTgUQipR0T1ZOhGMB5cjPFo3y8haCug@mail.gmail.com"
      type="cite">
      <div dir="ltr">Clint, Kevin,
        <div><br>
        </div>
        <div>Thanks for reassuring me :-) I just wanted to make sure
          that having direct access from VMs to a single facility is not
          a dead end in terms of security and extensibility. And since
          it is not, I agree it is much simpler (and hence better) than
          a hypervisor-dependent design.</div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>Then, returning to the two major suggestions made:</div>
        <div> * Salt</div>
        <div> * Custom solution specific to our needs </div>
        <div><br>
        </div>
        <div>The custom solution could be built on top of
          oslo.messaging. That gives us RPC working over different
          messaging systems, which is exactly what we need: an RPC
          channel into the guest supporting various transports. What it
          lacks at the moment is security - it has neither
          authentication nor ACLs.</div>
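To make the gap concrete: oslo.messaging will dispatch whatever arrives on the topic, so an agent would have to layer its own check on top before acting. A minimal sketch of such a check in plain Python - the endpoint class, context key, and token scheme are all hypothetical, not oslo.messaging API:

```python
# Sketch of the missing authentication layer: an agent-side RPC endpoint
# that refuses calls lacking a valid per-tenant shared token. All names
# here (AgentEndpoint, agent_token) are illustrative assumptions.
import hashlib
import hmac


class AgentEndpoint:
    """RPC endpoint that checks a shared token before dispatching."""

    def __init__(self, shared_token):
        self._token = shared_token

    def _authorized(self, ctxt):
        # Constant-time comparison of the caller's token with ours.
        sent = ctxt.get("agent_token", "")
        return hmac.compare_digest(
            hashlib.sha256(sent.encode()).digest(),
            hashlib.sha256(self._token.encode()).digest(),
        )

    def restart_service(self, ctxt, name):
        if not self._authorized(ctxt):
            raise PermissionError("caller is not authorized for this agent")
        return "restarted %s" % name
```

Per-tenant ACLs would sit behind the same hook, deciding which methods a given caller may invoke.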
        <div><br>
        </div>
        <div>Salt also provides an RPC service, but it has a couple of
          disadvantages: it is tightly coupled with ZeroMQ, and it needs
          a server process to run. A single transport option (ZeroMQ) is
          a limitation we really want to avoid: OpenStack can be
          deployed with various messaging providers, and we can't limit
          the choice to a single option in the guest agent. Though this
          could change in the future, it is an obstacle to consider.</div>
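The pluggability point in miniature: an oslo.messaging-style transport URL names the provider in its scheme, so switching brokers is a configuration change rather than a code change. A plain-Python sketch of that idea - the driver table and parsing here are illustrative, not the actual oslo.messaging loader:

```python
# Illustration of transport pluggability: the URL scheme selects the
# messaging driver, so the same agent code can target RabbitMQ, Qpid,
# or ZeroMQ by configuration alone. SUPPORTED_DRIVERS is hypothetical.
from urllib.parse import urlparse

SUPPORTED_DRIVERS = {"rabbit", "qpid", "zmq"}


def pick_driver(transport_url):
    """Return the messaging driver named by the URL scheme."""
    scheme = urlparse(transport_url).scheme
    if scheme not in SUPPORTED_DRIVERS:
        raise ValueError("unsupported transport: %s" % scheme)
    return scheme
```

A ZeroMQ-only agent, by contrast, hard-codes the first branch of that table.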
        <div><br>
        </div>
        <div>Running yet another server process within OpenStack is, as
          was already pointed out, expensive. It means another server to
          deploy and take care of - +1 to overall OpenStack complexity.
          And it does not look like that could be fixed any time
          soon.</div>
        <div><br>
        </div>
        <div>For these reasons, I favor an agent based on
          oslo.messaging.</div>
        <div><br>
        </div>
      </div>
    </blockquote>
    <br>
    An agent based on oslo.messaging is a potential security attack
    vector and a possible scalability problem. We do not want the guest
    agents communicating over the same RPC servers as the rest of
    OpenStack.<br>
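One way to keep the two planes apart, if an oslo.messaging agent is pursued anyway, is to give the agents their own broker (or at minimum their own vhost and credentials) so agent traffic cannot exhaust the control-plane bus. A deployment sketch - the section, option names, and interpolation below are illustrative, not an existing OpenStack config schema:

```ini
[DEFAULT]
# Control-plane services keep using the main bus:
transport_url = rabbit://nova:***@controlbus.example.org:5672/openstack

[guest_agent]
# Guest agents talk to a separate broker with per-tenant vhosts
# (tenant interpolation shown here is hypothetical):
transport_url = rabbit://agent:***@agentbus.example.org:5672/tenant-%(tenant_id)s
```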
    <blockquote
cite="mid:CALU-hjGzSZjoMO=+NfwsTgUQipR0T1ZOhGMB5cjPFo3y8haCug@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div>Thanks,</div>
        <div><br>
        </div>
        <div>Dmitry</div>
        <div><br>
        </div>
      </div>
      <div class="gmail_extra"><br>
        <br>
        <div class="gmail_quote">
          2013/12/11 Fox, Kevin M <span dir="ltr"><<a
              moz-do-not-send="true" href="mailto:kevin.fox@pnnl.gov"
              target="_blank">kevin.fox@pnnl.gov</a>></span><br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            Yeah. It's likely that the metadata server stuff will get
            more scalable/hardened over time. If it isn't enough now,
            let's fix it rather than coming up with a new system to work
            around it.<br>
            <br>
            I like the idea of using the network, since all the
            hypervisors already have to support network drivers. They
            also already have to support talking to the metadata server.
            This keeps OpenStack out of the hypervisor-driver business.<br>
            <br>
            Kevin<br>
            <br>
            ________________________________________<br>
            From: Clint Byrum [<a moz-do-not-send="true"
              href="mailto:clint@fewbar.com">clint@fewbar.com</a>]<br>
            Sent: Tuesday, December 10, 2013 1:02 PM<br>
            To: openstack-dev<br>
            <div class="im HOEnZb">Subject: Re: [openstack-dev] Unified
              Guest Agent proposal<br>
              <br>
            </div>
            <div class="HOEnZb">
              <div class="h5">Excerpts from Dmitry Mescheryakov's
                message of 2013-12-10 12:37:37 -0800:<br>
                > >> What is the exact scenario you're trying
                to avoid?<br>
                ><br>
                > It is a DDoS attack on either the transport (AMQP /
                ZeroMQ provider) or the server<br>
                > (Salt / our own self-written server). Looking at
                the design, it doesn't<br>
                > look like the attack could be contained within
                the tenant it is<br>
                > coming from.<br>
                ><br>
                <br>
                We can push a tenant-specific route for the metadata
                server, and a tenant<br>
                specific endpoint for in-agent things. Still simpler
                than hypervisor-aware<br>
                guests. I haven't seen anybody ask for this yet, though
                I'm sure if they<br>
                run into these problems it will be the next logical
                step.<br>
                <br>
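The tenant-specific route described here could, for instance, be expressed through Neutron's per-subnet host_routes attribute, which is delivered to guests as a DHCP classless static route. A sketch of the request body - the next-hop address is purely illustrative:

```json
{
  "subnet": {
    "host_routes": [
      {
        "destination": "169.254.169.254/32",
        "nexthop": "10.0.0.2"
      }
    ]
  }
}
```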
                > In the current OpenStack design I see only one
                similarly vulnerable<br>
                > component - metadata server. Keeping that in mind,
                maybe I just<br>
                > overestimate the threat?<br>
                ><br>
                <br>
                Anything you expose to the users is "vulnerable". By
                using the localized<br>
                hypervisor scheme you're now making the compute node
                itself vulnerable.<br>
                Only now you're asking that an already complicated thing
                (nova-compute)<br>
                add another job, rate limiting.<br>
                <br>
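The rate-limiting job mentioned above is itself non-trivial. A minimal token-bucket sketch of what nova-compute would be taking on per tenant - an illustration only, not any existing OpenStack code:

```python
# Token bucket: each tenant gets a bucket refilled at a fixed rate;
# requests beyond the refill rate are rejected. The clock parameter
# exists so the behavior can be tested deterministically.
import time


class TokenBucket:
    def __init__(self, rate, capacity, clock=time.monotonic):
        self._rate = float(rate)          # tokens added per second
        self._capacity = float(capacity)  # burst ceiling
        self._tokens = float(capacity)
        self._clock = clock
        self._last = clock()

    def allow(self):
        """Consume one token if available; return whether the call may proceed."""
        now = self._clock()
        self._tokens = min(self._capacity,
                           self._tokens + (now - self._last) * self._rate)
        self._last = now
        if self._tokens >= 1.0:
            self._tokens -= 1.0
            return True
        return False
```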
                _______________________________________________<br>
                OpenStack-dev mailing list<br>
                <a moz-do-not-send="true"
                  href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
                <a moz-do-not-send="true"
                  href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev"
                  target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
                <br>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
      <br>
    </blockquote>
    <br>
  </body>
</html>