<div dir="ltr"><div>Hello, </div><div><br></div><div>First, I realized that there was no meeting today, was there a problem, or change in schedule? </div><div><br></div><div>I would like to know if anyone has tested/Reviewed on the prototype that we implemented Telles and I, what did you think? I would like to know if there is any other functionality that can help. </div>
<div><br></div><div>Thank you in advance.</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-02-26 1:14 GMT-03:00 Adam Young <span dir="ltr"><<a href="mailto:ayoung@redhat.com" target="_blank">ayoung@redhat.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    <div>On 02/20/2014 05:18 PM, Vishvananda
      Ishaya wrote:<br>
    </div>
    <blockquote type="cite">
      
      <br>
      <div>
        <div>On Feb 19, 2014, at 5:58 PM, Adam Young <<a href="mailto:ayoung@redhat.com" target="_blank">ayoung@redhat.com</a>>
          wrote:</div>
        <br>
        <blockquote type="cite">
          
          <div bgcolor="#FFFFFF" text="#000000">
            <div>On 02/18/2014 02:28 PM,
              Vishvananda Ishaya wrote:<br>
            </div>
            <blockquote type="cite">
              
              <br>
              <div>
                <div>On Feb 18, 2014, at 11:04 AM, Adam Young <<a href="mailto:ayoung@redhat.com" target="_blank">ayoung@redhat.com</a>>

                  wrote:</div>
                <br>
                <blockquote type="cite">
                  
                  <div bgcolor="#FFFFFF" text="#000000">
                    <div>On 02/18/2014 12:53 PM,
                      Telles Nobrega wrote:<br>
                    </div>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div class="gmail_default" style="font-family:tahoma,sans-serif">Hello
                          everyone,</div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif"><br>
                        </div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif"> Me and
                          Raildo were responsible to implement
                          Hierarchical Projects in Keystone.</div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif"><br>
                        </div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif"> Here is
                          our first prototype: <a href="https://github.com/tellesnobrega/keystone_hierarchical_projects" target="_blank">https://github.com/tellesnobrega/keystone_hierarchical_projects</a></div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif"> <br>
                        </div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif">We want
                          to have it tested with Vishy's implementation
                          this week.<br>
                        </div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif"><br>
                        </div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif">Here is
                          a  guide on how to test the implementation:<br>
                        </div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif"><br>
                        </div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif"> 1.
                          Start a devstack using the keystone code;</div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif">2.
                          Create a new project using the following body:</div>
                        <div class="gmail_default"><span style="font-family:tahoma,sans-serif">    </span><font face="tahoma, sans-serif">{</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">    "project": {</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">        "description":
                            "test_project",</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">        "domain_id": "default",</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">        "parent_project_id":
                            "$parent_project_id",</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">        "enabled": true,</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">        "name": "test_project"</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">    }</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">}</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif"><br>
                          </font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">3. Give an user a role in the
                            project;</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">4. Get a token for
                            "test_project" and check that the hierarchy
                            is there like the following:</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">    </font><span>{</span></div>
                        <pre style="margin-top:0px;margin-bottom:0px;padding:5px 0px;font-family:'Bitstream Vera Sans Mono',monospace;font-size:13px">    "token": {
        "methods": [
            "password"
        ],
        "roles": [
            {
                "id": "c60f0d7461354749ae8ac8bace3e35c5",
                "name": "admin"
            }
        ],
        "expires_at": "2014-02-18T15:52:03.499433Z",
        "project": {
            "hierarchical_ids": "<div class="gmail_default" style="font-family:tahoma,sans-serif;display:inline">openstack.</div>8a4ebcf44ebc47e0b98d3d5780c1f71a.de2a7135b01344cd82a02117c005ce47",</pre>

                      </div>
                    </blockquote>
                    <br>
                    These should be names, not Ids.  There is going to
                    be a need to move projecst around inside the
                    hierarchy, and the ID stays the same.  Lets get this
                    right up front.<br>
                  </div>
                </blockquote>
                <div><br>
                </div>
                Can you give more detail here? I can see arguments for
                both ways of doing this but continuing to use ids for
                ownership is an easier choice. Here is my thinking:</div>
              <div><br>
              </div>
              <div>1. all of the projects use ids for ownership
                currently so it is a smaller change</div>
            </blockquote>
            That does not change.  It is the hierarchy that is labeled
            by name.<br>
          </div>
        </blockquote>
        <div><br>
        </div>
        The issue is that we are storing the hierarchy of ownership in
        nova. We can either store the hierarchy by id or by name. Note
        that we are not adding a new field for this hierarchy but using
        the existing ownership field (which is called project_id in
        nova). My point is that if we use ids, then this field would be
        backwards compatible. If we decide to use name instead (which
        has some advantages for display purposes), then we would need
        some kind of db sync migration which modifies all of the fields
        from id -> name.<br>
        <blockquote type="cite">
          <div bgcolor="#FFFFFF" text="#000000"> <br>
            <blockquote type="cite">
              <div>2. renaming a project in keystone would not
                invalidate the ownership hierarchy (Note that moving a
                project around would invalidate the hierarchy in both
                cases)</div>
              <div><br>
              </div>
            </blockquote>
            Renaming would not change anything. <br>
            <br>
            I would say the rule should be this:  Ids are basically
            uuids, and are immutable.  Names a mutable.  Each project
            has a parent Id.  A project can either be referenced
            directly by ID, oir hierarchically by name.  In addition,
            you can navigate to a project by traversing the set of ids,
            but you need to know where you are going.  THus the array <br>
            <br>
            ['abcd1234',fedd3213','3e3e3e3e'] would be a way to find a
            project, but the project ID for the lead node would still be
            just '3e3e3e3e’.<br>
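A minimal sketch of that traversal, assuming a hypothetical in-memory parent-to-children map rather than the prototype's actual API: each id in the array must be a child of the one before it, and the leaf's own id is still the project's id.

```python
# Sketch: navigate to a project by traversing a list of ids.
# `children` (parent id -> set of child ids) is a hypothetical stand-in.
def resolve_by_id_path(path, children):
    for parent, child in zip(path, path[1:]):
        if child not in children.get(parent, set()):
            raise LookupError("%s is not a child of %s" % (child, parent))
    return path[-1]  # the leaf id alone still identifies the project

children = {"abcd1234": {"fedd3213"}, "fedd3213": {"3e3e3e3e"}}
print(resolve_by_id_path(["abcd1234", "fedd3213", "3e3e3e3e"], children))
# -> 3e3e3e3e
```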
          </div>
        </blockquote>
        <div><br>
        </div>
        As I mention above, all of this makes sense inside of keystone,
        but doesn’t address the problem of how we are storing the
        hierarchy on the nova side. The owner field in nova can be:</div>
      <div><br>
      </div>
      <div>1) abcd1234.fedd3213.3e3e3e3e</div>
      <div><br>
      </div>
      <div>or it can be:</div>
      <div><br>
      </div>
      <div>2) orga.proja.suba</div>
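To make the trade-off concrete, here is a toy model (names, ids, and the storage layout are all made up for illustration) showing that the id encoding survives a rename while the name encoding does not:

```python
# Toy model of the two encodings for nova's owner field; renaming a
# project changes encoding 2 (names) but not encoding 1 (ids).
projects = {  # id -> (name, parent_id); all values are illustrative
    "abcd1234": ("orga", None),
    "fedd3213": ("proja", "abcd1234"),
    "3e3e3e3e": ("suba", "fedd3213"),
}

def owner_field(pid, by_name=False):
    chain = []
    while pid is not None:
        name, parent = projects[pid]
        chain.append(name if by_name else pid)
        pid = parent
    return ".".join(reversed(chain))

print(owner_field("3e3e3e3e"))                 # abcd1234.fedd3213.3e3e3e3e
print(owner_field("3e3e3e3e", by_name=True))   # orga.proja.suba
projects["fedd3213"] = ("proja2", "abcd1234")  # rename the middle project
print(owner_field("3e3e3e3e", by_name=True))   # orga.proja2.suba: changed
```

Moving a project (changing a parent_id) would change both encodings, which matches the point made in the thread.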
    </blockquote>
    <br>
    Owner should be separate from project.  But that is an aside.  I
    think you are mixing two ideas together.  Let's sit down at the
    summit to clear this up, but the IDs should not be hierarchical, the
    names should, and if you mess with that, it is going to be, well, a
    mess....<br>
    <br>
    We have a lot going on getting ready for Icehouse 3, and I don't
    want to be rushed on this, as we will have to live with it for a
    long time.  Nova is not the only consumer of projects, and we need
    to make something that works across the board.<br>
    <br>
    <br>
    <blockquote type="cite">
      <div><br>
      </div>
      <div>To explicitly state the tradeoffs</div>
      <div><br>
      </div>
      <div>1 is backwards compatible +</div>
    </blockquote>
    We are actually doing something like this for domain users:
    userid@@domainid, where both are UUIDs (or possibly userid comes out
    of LDAP), but the hierarchy there is only two levels.  It is necessary
    there because one part is assigned by keystone (the domain id) and one
    part by LDAP or the remote IdP.<br>
    <blockquote type="cite">
      <div>1 doesn’t need to be updated if a project is renamed +</div>
    </blockquote>
    But it does need to be redone if the project gets moved in the
    hierarchy, and we have a pre-existing feature request for that.<br>
    <br>
    <blockquote type="cite">
      <div>1 is not user friendly (need to map ids to names to display
        to the user) -</div>
    </blockquote>
    You need to walk the tree to generate the "good" name.  But that can
    also be used to navigate.  Path names like URLs are unsurprising.
    Hierarchical IDs are not.<br>
    <blockquote type="cite">
      <div><br>
      </div>
      <div>both need to be updated if a project is moved in the
        hierarchy</div>
    </blockquote>
    Not if the project only knows its local name.<br>
    <br>
    Owner can continue to be the short id.  You only need the map to
    translate for readability.  It's like SQL: use a view to
    denormalize.<br>
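The "use a view to denormalize" idea can be sketched with SQLite's recursive CTEs: the owner column stays a short id, and a view derives the readable dotted path on demand. The schema and rows here are illustrative, not the actual keystone or nova tables:

```python
# Sketch: keep the short id in storage, derive the readable dotted
# path through a recursive view (illustrative schema and data only).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE project (id TEXT PRIMARY KEY, name TEXT, parent_id TEXT)")
db.executemany("INSERT INTO project VALUES (?, ?, ?)", [
    ("abcd1234", "orga", None),
    ("fedd3213", "proja", "abcd1234"),
    ("3e3e3e3e", "suba", "fedd3213"),
])
db.execute("""
    CREATE VIEW project_path AS
    WITH RECURSIVE walk(id, path) AS (
        SELECT id, name FROM project WHERE parent_id IS NULL
        UNION ALL
        SELECT p.id, walk.path || '.' || p.name
        FROM project AS p JOIN walk ON p.parent_id = walk.id
    )
    SELECT id, path FROM walk
""")
row = db.execute("SELECT path FROM project_path WHERE id = '3e3e3e3e'").fetchone()
print(row[0])  # orga.proja.suba
```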
    <blockquote type="cite">
      <div><br>
      </div>
      <div>Vish</div>
      <div><br>
        <blockquote type="cite">
          <div bgcolor="#FFFFFF" text="#000000"> <br>
            <br>
            <blockquote type="cite">
              <div>OTOH, the advantage of names is that it makes displaying
                the ownership much easier on the service side.</div>
              <div><br>
              </div>
              <div>Vish</div>
              <div><br>
              </div>
              <div>
                <blockquote type="cite">
                  <div bgcolor="#FFFFFF" text="#000000"> <br>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <pre style="margin-top:0px;margin-bottom:0px;padding:5px 0px;font-family:'Bitstream Vera Sans Mono',monospace;font-size:13px">            "hierarchy": "test1",
            "domain": {
                "id": "default",
                "name": "Default"
            },
            "id": "de2a7135b01344cd82a02117c005ce47",
            "name": "test1"
        },
        "extras": {},
        "user": {
            "domain": {
                "id": "default",
                "name": "Default"
            },
            "id": "895864161f1e4beaae42d9392ec105c8",
            "name": "admin"
        },
        "issued_at": "2014-02-18T14:52:03.499478Z"
    }
}</pre>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif"><br>
                          </font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif">Openstack is the root project of
                            the tree, it can be seen also when getting a
                            token for the admin project or other default
                            project in Devstack.</font></div>
                        <div class="gmail_default"><font face="tahoma,
                            sans-serif"><br>
                          </font></div>
                        <div class="gmail_default"><span style="font-family:tahoma,sans-serif">Hope
                            to hear your feedbacks soon.</span><font face="tahoma, sans-serif"><br>
                          </font></div>
                        <div class="gmail_default"><br>
                        </div>
                        <div class="gmail_default" style="font-family:tahoma,sans-serif">Thanks</div>
                      </div>
                      <div class="gmail_extra"><br>
                        <br>
                        <div class="gmail_quote">On Mon, Feb 17, 2014 at
                          6:09 AM, Vinod Kumar Boppanna <span dir="ltr"><<a href="mailto:vinod.kumar.boppanna@cern.ch" target="_blank">vinod.kumar.boppanna@cern.ch</a>></span>
                          wrote:<br>
                          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear Vish,<br>
                            <br>
                            I will change the concept of parsing roles
                            up to the leaf node to parsing the roles
                            up to level 1. But I have a small doubt
                            that I want to confirm with you before making
                            this change.<br>
                            <br>
                            Say there are 10 levels in the hierarchy and
                            the user is being authenticated at level 9.
                            Should I check the roles starting from level 9
                            up to level 1? Of course, the difference here
                            (compared to what I put in the wiki page) is
                            that only the roles at each level, if
                            different, need to be added to the scope;
                            there is no need to add the project name and
                            role individually. Is this OK, considering
                            that the deeper in the hierarchy the user is
                            authenticated, the more time is needed to
                            parse up to level 1?<br>
                            <br>
                            I will wait for your response and then
                            modify the POC accordingly.<br>
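The walk described above might look like the sketch below; the parent map and role assignments are purely hypothetical stand-ins for the POC's data:

```python
# Sketch: authenticate at a deep level and collect roles walking up to
# level 1, adding a level's roles to the scope only if not already present.
def inherited_roles(project_id, parents, assignments):
    roles, pid = [], project_id
    while pid is not None:
        for role in assignments.get(pid, []):
            if role not in roles:
                roles.append(role)
        pid = parents.get(pid)   # one step up the hierarchy per iteration
    return roles

parents = {"lvl3": "lvl2", "lvl2": "lvl1"}           # lvl1 is the top
assignments = {"lvl3": ["member"], "lvl2": ["member", "admin"]}
print(inherited_roles("lvl3", parents, assignments))  # ['member', 'admin']
```

The cost is linear in the depth at which the user authenticates, which matches the performance concern raised above.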
                            <div><br>
                              Thanks & Regards,<br>
                              Vinod Kumar Boppanna<br>
                            </div>
                            ________________________________________<br>
                            From: <a href="mailto:openstack-dev-request@lists.openstack.org" target="_blank">openstack-dev-request@lists.openstack.org</a>
                            [<a href="mailto:openstack-dev-request@lists.openstack.org" target="_blank">openstack-dev-request@lists.openstack.org</a>]<br>
                            Sent: 16 February 2014 22:21<br>
                            To: <a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a><br>
                            Subject: OpenStack-dev Digest, Vol 22, Issue
                            45<br>
                            <div><br>
                              Send OpenStack-dev mailing list
                              submissions to<br>
                                      <a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a><br>
                              <br>
                              To subscribe or unsubscribe via the World
                              Wide Web, visit<br>
                                      <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
                              or, via email, send a message with subject
                              or body 'help' to<br>
                                      <a href="mailto:openstack-dev-request@lists.openstack.org" target="_blank">openstack-dev-request@lists.openstack.org</a><br>
                              <br>
                              You can reach the person managing the list
                              at<br>
                                      <a href="mailto:openstack-dev-owner@lists.openstack.org" target="_blank">openstack-dev-owner@lists.openstack.org</a><br>
                              <br>
                              When replying, please edit your Subject
                              line so it is more specific<br>
                              than "Re: Contents of OpenStack-dev
                              digest..."<br>
                              <br>
                              <br>
                              Today's Topics:<br>
                              <br>
                            </div>
                               1. Re: [Nova][VMWare] VMwareVCDriver
                            related to resize/cold<br>
                                  migration (Gary Kotton)<br>
                               2. [Neutron]Do you think tanent_id should
                            be verified (Dong Liu)<br>
                               3. Re: [Nova][VMWare] VMwareVCDriver
                            related to resize/cold<br>
                                  migration (Jay Lau)<br>
                               4. [neutron][policy] Using network
                            services with network<br>
                                  policies (Mohammad Banikazemi)<br>
                               5. Re: [Nova][VMWare] VMwareVCDriver
                            related to resize/cold<br>
                                  migration (Jay Lau)<br>
                               6. Re: [keystone] role of Domain in VPC
                            definition (Harshad Nakil)<br>
                               7. Re: VPC Proposal (Harshad Nakil)<br>
                               8. Re: VPC Proposal (Allamaraju, Subbu)<br>
                               9. Re: VPC Proposal (Harshad Nakil)<br>
                              10. Re: VPC Proposal (Martin, JC)<br>
                              11. Re: [keystone] role of Domain in VPC
                            definition<br>
                                  (Allamaraju, Subbu)<br>
                              12. Re: [keystone] role of Domain in VPC
                            definition (Harshad Nakil)<br>
                              13. Re: [keystone] role of Domain in VPC
                            definition<br>
                                  (Allamaraju, Subbu)<br>
                              14. Re: [OpenStack-Infra] [TripleO]
                            promoting devtest_seed and<br>
                                  devtest_undercloud to voting, +
                            experimental queue for<br>
                                  nova/neutron etc. (Robert Collins)<br>
                              15. Re: [OpenStack-Infra] [TripleO]
                            promoting devtest_seed and<br>
                                  devtest_undercloud to voting, +
                            experimental queue for<br>
                                  nova/neutron etc. (Robert Collins)<br>
                              16. Re: [keystone] role of Domain in VPC
                            definition (Ravi Chunduru)<br>
                              17. Re: VPC Proposal (Ravi Chunduru)<br>
                              18. Re: OpenStack-dev Digest, Vol 22,
                            Issue 39 (Vishvananda Ishaya)<br>
                              19. Re: heat run_tests.sh fails with one
                            huge line    of      output<br>
                                  (Mike Spreitzer)<br>
                            <br>
                            <br>
----------------------------------------------------------------------<br>
                            <br>
                            Message: 1<br>
                            Date: Sun, 16 Feb 2014 05:40:05 -0800<br>
                            From: Gary Kotton <<a href="mailto:gkotton@vmware.com" target="_blank">gkotton@vmware.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] [Nova][VMWare]
                            VMwareVCDriver related to<br>
                                    resize/cold migration<br>
                            Message-ID: <<a href="mailto:CF268BE4.465C7%25gkotton@vmware.com" target="_blank">CF268BE4.465C7%gkotton@vmware.com</a>><br>
                            <div>Content-Type: text/plain;
                              charset="us-ascii"<br>
                              <br>
                              Hi,<br>
                            </div>
                            There are two issues here.<br>
                            The first is a bug fix that is in review:<br>
                            - <a href="https://review.openstack.org/#/c/69209/" target="_blank">https://review.openstack.org/#/c/69209/</a>
                            (this is where they have the same
                            configuration)<br>
                            The second is WIP:<br>
                            - <a href="https://review.openstack.org/#/c/69262/" target="_blank">https://review.openstack.org/#/c/69262/</a>
                            (we need to restore)<br>
                            Thanks<br>
                            Gary<br>
                            <br>
                            From: Jay Lau <<a href="mailto:jay.lau.513@gmail.com" target="_blank">jay.lau.513@gmail.com</a><mailto:<a href="mailto:jay.lau.513@gmail.com" target="_blank">jay.lau.513@gmail.com</a>>><br>

                            Reply-To: "OpenStack Development Mailing
                            List (not for usage questions)" <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a><mailto:<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>>><br>

                            Date: Sunday, February 16, 2014 6:39 AM<br>
                            To: OpenStack Development Mailing List <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a><mailto:<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>>><br>

                            Subject: [openstack-dev] [Nova][VMWare]
                            VMwareVCDriver related to resize/cold
                            migration<br>
                            <br>
                            Hey,<br>
                            <br>
                            I have one question related with OpenStack
                            vmwareapi.VMwareVCDriver resize/cold
                            migration.<br>
                            <br>
                            The following is my configuration:<br>
                            <br>
                             DC<br>
                                |<br>
                                |----Cluster1<br>
                                |          |<br>
                                |          |----9.111.249.56<br>
                                |<br>
                                |----Cluster2<br>
                                           |<br>
                                           |----9.111.249.49<br>
                            <br>
                            Scenario 1:<br>
                            I started two nova computes managing the two
                            clusters:<br>
                            1) nova-compute1.conf<br>
                            cluster_name=Cluster1<br>
                            <br>
                            2) nova-compute2.conf<br>
                            cluster_name=Cluster2<br>
                            <br>
                            3) Start up two nova computes on host1 and
                            host2 separately<br>
                            4) Create one VM instance and the VM
                            instance was booted on Cluster2 node
                             9.111.249.49<br>
                            | OS-EXT-SRV-ATTR:host                 |
                            host2 |<br>
                            | OS-EXT-SRV-ATTR:hypervisor_hostname  |
                            domain-c16(Cluster2)                        
                                        |<br>
                            5) Cold migrate the VM instance<br>
                            6) After migration finished, the VM goes to
                            VERIFY_RESIZE status, and "nova show"
                            indicates that the VM now located on
                            host1:Cluster1<br>
                            | OS-EXT-SRV-ATTR:host                 |
                            host1 |<br>
                            | OS-EXT-SRV-ATTR:hypervisor_hostname  |
                            domain-c12(Cluster1)                        
                                        |<br>
                            7) But the vSphere client indicates that the
                            VM was still running on Cluster2<br>
                            8) Trying to confirm the resize fails. The
                            root cause is that the nova compute on host2
                            has no knowledge of
                            domain-c12(Cluster1)<br>
                            <br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp   File
                            "/usr/lib/python2.6/site-packages/nova/compute/manager.py",
                            line 2810, in do_confirm_resize<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp    
                            migration=migration)<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp   File
                            "/usr/lib/python2.6/site-packages/nova/compute/manager.py",
                            line 2836, in _confirm_resize<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp    
                            network_info)<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp   File
                            "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py",
                            line 420, in confirm_migration<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp     _vmops =
self._get_vmops_for_compute_node(instance['node'])<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp   File
                            "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py",
                            line 523, in _get_vmops_for_compute_node<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp     resource
                            = self._get_resource_for_node(nodename)<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp   File
                            "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py",
                            line 515, in _get_resource_for_node<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp     raise
                            exception.NotFound(msg)<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp NotFound:
                            NV-3AB798A The resource domain-c12(Cluster1)
                            does not exist<br>
                            2014-02-16 07:10:17.166 12720 TRACE
                            nova.openstack.common.rpc.amqp<br>
                            <br>
                            <br>
                            Scenario 2:<br>
                            <br>
                            1) Started two nova computes managing the two
                            clusters, but the two computes have the same
                            nova conf.<br>
                            1) nova-compute1.conf<br>
                            cluster_name=Cluster1<br>
                            cluster_name=Cluster2<br>
                            <br>
                            2) nova-compute2.conf<br>
                            cluster_name=Cluster1<br>
                            cluster_name=Cluster2<br>
                            <br>
                            3) Then create and resize/cold migrate a VM;
                            it always succeeds.<br>
                            <br>
                            <br>
                            Questions:<br>
                            For multi-cluster management, does VMware require that all nova computes have the same cluster configuration for resize/cold migration to succeed?<br>
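                            The failure in Scenario 1 can be modeled with a small sketch (illustrative only, not the actual nova code; the class and mapping below are hypothetical simplifications): each compute builds its node list from the cluster_name entries in its own nova.conf, so confirm_resize fails when the source node's cluster is not in the confirming compute's list.<br>

```python
class NotFound(Exception):
    pass


class VCDriverSketch:
    """Toy model of VMwareVCDriver node lookup (illustrative only)."""

    def __init__(self, nodes):
        # nodes: mapping of nodename -> cluster, built from the
        # cluster_name entries in this compute's nova.conf.
        self._nodes = nodes

    def _get_resource_for_node(self, nodename):
        if nodename not in self._nodes:
            raise NotFound("The resource %s does not exist" % nodename)
        return self._nodes[nodename]


# Scenario 1: compute2 only knows Cluster2, so confirming a resize whose
# source node is domain-c12(Cluster1) raises NotFound, as in the trace.
compute2 = VCDriverSketch({"domain-c16(Cluster2)": "Cluster2"})
try:
    compute2._get_resource_for_node("domain-c12(Cluster1)")
except NotFound:
    failed = True

# Scenario 2: every compute lists both clusters, so the lookup succeeds
# no matter which compute handles the confirm step.
compute_both = VCDriverSketch({
    "domain-c12(Cluster1)": "Cluster1",
    "domain-c16(Cluster2)": "Cluster2",
})
resource = compute_both._get_resource_for_node("domain-c12(Cluster1)")
```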
                            <br>
                            --<br>
                            Thanks,<br>
                            <br>
                            Jay<br>
                            <div>-------------- next part
                              --------------<br>
                              An HTML attachment was scrubbed...<br>
                            </div>
                            URL: <<a href="http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/0b71a846/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/0b71a846/attachment-0001.html</a>><br>

                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 2<br>
                            Date: Sun, 16 Feb 2014 22:52:01 +0800<br>
                            From: Dong Liu <<a href="mailto:willowd878@gmail.com" target="_blank">willowd878@gmail.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: [openstack-dev] [Neutron] Do you
                            think tenant_id should be<br>
                                    verified<br>
                            Message-ID: <<a href="mailto:26565D39-5372-48A5-8299-34DDE6C3394D@gmail.com" target="_blank">26565D39-5372-48A5-8299-34DDE6C3394D@gmail.com</a>><br>
                            Content-Type: text/plain; charset=us-ascii<br>
                            <br>
                            Hi stackers:<br>
                            <br>
                            I found that when creating networks, subnets,
                            and other resources, the attribute tenant_id<br>
                            can be set by the admin tenant. But we do not
                            verify whether that tenant_id actually exists in
                            keystone.<br>
                            <br>
                            I know that we can use neutron without
                            keystone, but do you think tenant_id should<br>
                            be verified when we use neutron with
                            keystone?<br>
                            <br>
                            Thanks<br>
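                            One way to do such verification (a minimal sketch; the lookup is stubbed here, and in a real deployment it would be backed by a keystone query via python-keystoneclient) is to reject an admin-supplied tenant_id that keystone does not know:<br>

```python
def validate_tenant_id(tenant_id, tenant_exists):
    """Reject a resource creation whose admin-supplied tenant_id is not
    a real tenant.  `tenant_exists` is a hypothetical callable that
    would be backed by a keystone lookup in a real deployment."""
    if not tenant_exists(tenant_id):
        raise ValueError("tenant %s not found in keystone" % tenant_id)
    return tenant_id


# Stub standing in for a keystone tenant query.
known_tenants = {"tenant-a", "admin-tenant"}

ok = validate_tenant_id("admin-tenant", known_tenants.__contains__)
try:
    validate_tenant_id("no-such-tenant", known_tenants.__contains__)
except ValueError:
    rejected = True  # creation with a bogus tenant_id is refused
```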
                            <br>
                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 3<br>
                            Date: Sun, 16 Feb 2014 23:01:17 +0800<br>
                            From: Jay Lau <<a href="mailto:jay.lau.513@gmail.com" target="_blank">jay.lau.513@gmail.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] [Nova][VMWare]
                            VMwareVCDriver related to<br>
                                    resize/cold migration<br>
                            Message-ID:<br>
                                    <<a href="mailto:CAFyztAFqTUqTZzzW6BkH6-9_kye9ZGm8yhZe3hMUoW1xFfQM7A@mail.gmail.com" target="_blank">CAFyztAFqTUqTZzzW6BkH6-9_kye9ZGm8yhZe3hMUoW1xFfQM7A@mail.gmail.com</a>><br>
                            Content-Type: text/plain;
                            charset="iso-8859-1"<br>
                            <br>
                            Thanks Gary, clear now. ;-)<br>
                            <br>
                            <br>
                            2014-02-16 21:40 GMT+08:00 Gary Kotton <<a href="mailto:gkotton@vmware.com" target="_blank">gkotton@vmware.com</a>>:<br>
                            <br>
                            > Hi,<br>
                            > There are two issues here.<br>
                            > The first is a bug fix that is in
                            review:<br>
                            > - <a href="https://review.openstack.org/#/c/69209/" target="_blank">https://review.openstack.org/#/c/69209/</a>
                            (this is where they have the<br>
                            > same configuration)<br>
                            > The second is WIP:<br>
                            > - <a href="https://review.openstack.org/#/c/69262/" target="_blank">https://review.openstack.org/#/c/69262/</a>
                            (we need to restore)<br>
                            > Thanks<br>
                            > Gary<br>
                            ><br>
                            <div>><br>
                              >
                              _______________________________________________<br>
                              > OpenStack-dev mailing list<br>
                              > <a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.org</a><br>
                              > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
                              ><br>
                              ><br>
                              <br>
                              <br>
                            </div>
                            --<br>
                            Thanks,<br>
                            <br>
                            Jay<br>
                            <div>-------------- next part
                              --------------<br>
                              An HTML attachment was scrubbed...<br>
                            </div>
                            URL: <<a href="http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/a5a2ed40/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/a5a2ed40/attachment-0001.html</a>><br>

                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 4<br>
                            Date: Sun, 16 Feb 2014 10:27:41 -0500<br>
                            From: Mohammad Banikazemi <<a href="mailto:mb@us.ibm.com" target="_blank">mb@us.ibm.com</a>><br>
                            To: "OpenStack Development Mailing List
                            \(not for usage questions\)"<br>
                                    <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            Subject: [openstack-dev] [neutron][policy]
                            Using network services with<br>
                                    network policies<br>
                            Message-ID:<br>
                                    <<a href="mailto:OF456914EA.334156E1-ON85257C81.0051DB09-85257C81.0054EF2C@us.ibm.com" target="_blank">OF456914EA.334156E1-ON85257C81.0051DB09-85257C81.0054EF2C@us.ibm.com</a>><br>

                            Content-Type: text/plain; charset="us-ascii"<br>
                            <br>
                            <br>
                            During the last IRC call we started talking
                            about network services and how<br>
                            they can be integrated into the Group Policy
                            framework.<br>
                            <br>
                            In particular, with the "redirect" action we
                            need to think about how we can<br>
                            specify the network services we want to
                            redirect traffic to/from. There<br>
                            has been substantial work in the area of
                            service chaining and service<br>
                            insertion, and at the last summit "advanced
                            services" in VMs were discussed.<br>
                            I think the first step for us is to find out
                            the status of those efforts<br>
                            and then see how we can use them. Here are a
                            few questions that come to<br>
                            mind.<br>
                            1- What is the status of service chaining,
                            service insertion and advanced<br>
                            services work?<br>
                            2- How could we use a service chain? Would
                            simply referring to it in the<br>
                            action be enough? Are there considerations
                            wrt creating a service chain<br>
                            and/or a service VM for use with the Group
                            Policy framework that need to be<br>
                            taken into account?<br>
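                            As a purely illustrative strawman for question 2 (no such API is defined yet; every field name below is hypothetical), a "redirect" action might simply carry the id of a previously created service chain, so the policy rule never describes the chain itself:<br>

```python
# Hypothetical shape of a Group Policy "redirect" action; none of these
# field names are settled -- this is only to anchor the discussion.
redirect_action = {
    "name": "redirect-to-inspection",
    "action_type": "redirect",
    # Reference an existing service chain by id rather than describing
    # the chain inline; chain lifecycle stays with the services API.
    "action_value": "svc-chain-0001",
}
```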
                            <br>
                            Let's start the discussion on the ML before
                            taking it to the next call.<br>
                            <br>
                            Thanks,<br>
                            <br>
                            Mohammad<br>
                            <div>-------------- next part
                              --------------<br>
                              An HTML attachment was scrubbed...<br>
                            </div>
                            URL: <<a href="http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/efff7427/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/efff7427/attachment-0001.html</a>><br>

                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 5<br>
                            Date: Sun, 16 Feb 2014 23:29:49 +0800<br>
                            From: Jay Lau <<a href="mailto:jay.lau.513@gmail.com" target="_blank">jay.lau.513@gmail.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] [Nova][VMWare]
                            VMwareVCDriver related to<br>
                                    resize/cold migration<br>
                            Message-ID:<br>
                                    <<a href="mailto:CAFyztAFCc1NH5nz00Dii3dhL3AN8RjPLb3D65aFMRGfyQiJGKA@mail.gmail.com" target="_blank">CAFyztAFCc1NH5nz00Dii3dhL3AN8RjPLb3D65aFMRGfyQiJGKA@mail.gmail.com</a>><br>
                            Content-Type: text/plain;
                            charset="iso-8859-1"<br>
                            <br>
                            Hi Gary,<br>
                            <br>
                            One more question: when using the VCDriver, I
                            can use it in the following two<br>
                            ways:<br>
                            1) start up many nova computes, all managing
                            the same vCenter<br>
                            clusters.<br>
                            2) start up many nova computes, each managing
                            different<br>
                            vCenter clusters.<br>
                            <br>
                            Do we have a best practice for the above two
                            scenarios, or can you<br>
                            please provide some best practices for the
                            VCDriver? I did not find much info<br>
                            in the admin guide.<br>
                            <br>
                            Thanks,<br>
                            <br>
                            Jay<br>
                            <br>
                            <br>
                            <br>
                            --<br>
                            Thanks,<br>
                            <br>
                            Jay<br>
                            <div>-------------- next part
                              --------------<br>
                              An HTML attachment was scrubbed...<br>
                            </div>
                            URL: <<a href="http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/e7da9e73/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/e7da9e73/attachment-0001.html</a>><br>

                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 6<br>
                            Date: Sun, 16 Feb 2014 08:01:14 -0800<br>
                            From: Harshad Nakil <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] [keystone] role
                            of Domain in VPC<br>
                                    definition<br>
                            Message-ID:
                            <-4426752061342328447@unknownmsgid><br>
                            Content-Type: text/plain;
                            charset="iso-8859-1"<br>
                            <br>
                            Yes, [1] can be done without [2] and [3].<br>
                            As you are well aware [2] is now merged with
                            group policy discussions.<br>
                            IMHO an all-or-nothing approach will not get
                            us anywhere.<br>
                            By the time we line up all our ducks in a row,
                            new features/ideas/blueprints<br>
                            will keep emerging.<br>
                            <br>
                            Regards<br>
                            -Harshad<br>
                            <br>
                            <br>
                            On Feb 16, 2014, at 2:30 AM, Salvatore
                            Orlando <<a href="mailto:sorlando@nicira.com" target="_blank">sorlando@nicira.com</a>>


                            wrote:<br>
                            <br>
                            It seems this work item is made of several
                            blueprints, some of which are<br>
                            not yet approved. This is true at least for
                            the Neutron blueprint regarding<br>
                            policy extensions.<br>
                            <br>
                            Since I first looked at this spec I've been
                            wondering why nova has been<br>
                            selected as an endpoint for network
                            operations rather than Neutron, but<br>
                            this is probably a design/implementation
                            detail, whereas JC here is looking at<br>
                            the general approach.<br>
                            <br>
                            Nevertheless, my only point here is that it
                            seems that features like this<br>
                            need an "all-or-none" approval.<br>
                            For instance, could the VPC feature be
                            considered functional if blueprint<br>
                            [1] is implemented, but not [2] and [3]?<br>
                            <br>
                            Salvatore<br>
                            <br>
                            [1] <a href="https://blueprints.launchpad.net/nova/+spec/aws-vpc-support" target="_blank">https://blueprints.launchpad.net/nova/+spec/aws-vpc-support</a><br>
                            [2]<br>
                            <a href="https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron" target="_blank">https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron</a><br>
                            [3]<br>
                            <a href="https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy" target="_blank">https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy</a><br>
                            <br>
                            <br>
                            On 11 February 2014 21:45, Martin, JC <<a href="mailto:jch.martin@gmail.com" target="_blank">jch.martin@gmail.com</a>>


                            wrote:<br>
                            <br>
                            > Ravi,<br>
                            ><br>
                            > It seems that the following Blueprint<br>
                            > <a href="https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support" target="_blank">https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support</a><br>
                            ><br>
                            > has been approved.<br>
                            ><br>
                            > However, I cannot find a discussion
                            with regard to the merit of using<br>
                            > project vs. domain, or other mechanism
                            for the implementation.<br>
                            ><br>
                            > I have an issue with this approach, as
                            it prevents tenants within the same<br>
                            > domain that share the same VPC from
                            having projects.<br>
                            ><br>
                            > As an example, if you are a large
                            organization on AWS, it is likely that<br>
                            > you have a large VPC that will be shared
                            by multiple projects. With this<br>
                            > proposal, we lose that capability,
                            unless I missed something.<br>
                            ><br>
                            > JC<br>
                            ><br>
                            > On Dec 19, 2013, at 6:10 PM, Ravi
                            Chunduru <<a href="mailto:ravivsn@gmail.com" target="_blank">ravivsn@gmail.com</a>>


                            wrote:<br>
                            ><br>
                            > > Hi,<br>
                            > >   We had some internal discussions
                            on the role of Domains and VPCs. I would<br>
                            > like to expand on and understand community
                            thinking on Keystone domains and<br>
                            > VPCs.<br>
                            > ><br>
                            > > Is VPC equivalent to Keystone
                            Domain?<br>
                            > ><br>
                            > > If so, as a public cloud provider
                            - I create a Keystone domain and give<br>
                            > it to an organization which wants a
                            virtual private cloud.<br>
                            > ><br>
                            > > Now the question is: if that
                            organization wants department-wise<br>
                            > allocation of resources, it is becoming
                            difficult to visualize with existing<br>
                            > v3 keystone constructs.<br>
                            > ><br>
                            > > Currently, it looks like each
                            department of an organization cannot have<br>
                            > its own resource management within
                            the organization's VPC (LDAP-based<br>
                            > user management, network management,
                            dedicating computes, etc.). For us, an<br>
                            > Openstack Project does not match the
                            requirements of a department of an<br>
                            > organization.<br>
                            > ><br>
                            > > I hope you guessed what we wanted
                            - Domain must have VPCs and VPC to<br>
                            > have projects.<br>
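[Editorial sketch: the hierarchy Ravi describes (Domain owning VPCs, each VPC owning its own projects) can be illustrated as nested scopes. This is a hypothetical model with illustrative names only; none of these classes exist in Keystone v3.]

```python
# Hypothetical sketch of the proposed scoping hierarchy:
# a Domain owns VPCs, and each VPC owns its own Projects,
# so each department keeps its own project inside the org's VPC.
# Class and field names are illustrative, not Keystone API.
from dataclasses import dataclass, field


@dataclass
class Project:
    name: str


@dataclass
class VPC:
    name: str
    projects: list = field(default_factory=list)


@dataclass
class Domain:
    name: str
    vpcs: list = field(default_factory=list)


org = Domain("acme.example")
vpc = VPC("prod-vpc")
vpc.projects.append(Project("billing-dept"))
vpc.projects.append(Project("engineering-dept"))
org.vpcs.append(vpc)

# Departments are projects scoped under the organization's VPC,
# not directly under the domain.
print([p.name for p in org.vpcs[0].projects])
```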
                            > ><br>
                            > > I would like to know how the community
                            sees the VPC model in Openstack.<br>
                            > ><br>
                            > > Thanks,<br>
                            > > -Ravi.<br>
                            <div>> ><br>
                              > ><br>
                            </div>

                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 7<br>
                            Date: Sun, 16 Feb 2014 08:47:19 -0800<br>
                            From: Harshad Nakil <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] VPC Proposal<br>
                            Message-ID:<br>
                                    <<a href="mailto:CAL7PBMchfaSkX8amUAEe8X_fs9OM6ZLGJx_fNB2SUCJWPaGNFA@mail.gmail.com" target="_blank">CAL7PBMchfaSkX8amUAEe8X_fs9OM6ZLGJx_fNB2SUCJWPaGNFA@mail.gmail.com</a>><br>
                            Content-Type: text/plain;
                            charset="iso-8859-1"<br>
                            <br>
                            Comments Inline<br>
                            <br>
                            Regards<br>
                            -Harshad<br>
                            <br>
                            <br>
                            On Sat, Feb 15, 2014 at 11:39 PM,
                            Allamaraju, Subbu <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>>


                            wrote:<br>
                            <br>
                            > Harshad,<br>
                            ><br>
                            > Curious to know if there is a broad
                            interest in an AWS compatible API in<br>
                            > the community?<br>
                            <br>
                            <br>
                            We started looking at this as some of our
                            customers/partners were interested<br>
                            in getting AWS API compatibility. We have had
                            this blueprint and code review<br>
                            pending for a long time now. We will know
                            based on this thread whether the<br>
                            community is interested. But I assumed that
                            the community was interested, as the<br>
                            blueprint was approved and the code review
                            has had no -1(s) for a long time now.<br>
                            <br>
                            <br>
                            > To clarify, an incremental path
                            from an AWS compatible API to an<br>
                            > OpenStack model is not clear.<br>
                            ><br>
                            <br>
                            In my mind an AWS compatible API does not
                            need a new openstack model. As more<br>
                            discussion happens on JC's proposal and the
                            implementation becomes clear, we will<br>
                            know how incremental the path is. But at a
                            high level there are two major<br>
                            differences:<br>
                            1. A new first class object will be introduced
                            which affects all components.<br>
                            2. More than one project can be supported
                            within a VPC.<br>
                            But it does not change the AWS API(s). So even
                            in JC's model, if you want the AWS<br>
                            API then we will have to keep the VPC to project
                            mapping 1:1, since the API<br>
                            will not take both a VPC ID and a project ID.<br>
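[Editorial sketch: the 1:1 constraint argued here follows from the API shape, since an AWS-style request carries only a VPC ID, the backing project must be derivable from the VPC alone. A minimal hypothetical illustration, not the blueprint's code; names are invented:]

```python
# Hypothetical sketch: an AWS-compatible layer receives only a VPC ID
# in each request, so it must resolve the OpenStack project from the
# VPC alone -- which forces the VPC-to-project mapping to be 1:1.
vpc_to_project = {
    "vpc-0a1b2c3d": "project-alpha",  # enforced 1:1 mapping
}


def project_for_vpc(vpc_id: str) -> str:
    """Resolve the single project backing a VPC.

    The AWS-style request gives us vpc_id but no project ID, so the
    project must be uniquely recoverable from the VPC.
    """
    try:
        return vpc_to_project[vpc_id]
    except KeyError:
        raise LookupError(f"unknown VPC {vpc_id!r}")


print(project_for_vpc("vpc-0a1b2c3d"))
```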
                            <br>
                            More users who want to migrate from AWS, and
                            IaaS providers who want to compete<br>
                            with AWS, should be interested in this
                            compatibility.<br>
                            <br>
                            There also seems to be a terminology issue
                            here. What is the definition of "VPC"?<br>
                            If we assume what AWS implements is "VPC",<br>
                            then what JC is proposing is a "VOS" or "VDC"
                            (virtual openstack or virtual DC),<br>
                            as all or most of the current openstack features
                            are available to the user in this<br>
                            new abstraction. I actually like this new
                            abstraction.<br>
                            <br>
                            <br>
                            > Subbu<br>
                            ><br>
                            > On Feb 15, 2014, at 10:04 PM, Harshad
                            Nakil <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>><br>
                            > wrote:<br>
                            ><br>
                            > ><br>
                            > > I agree with the problem as defined by
                            you; it will require more fundamental<br>
                            > changes.<br>
                            > > Meanwhile, many users will benefit
                            from AWS VPC api compatibility.<br>
                            <div>><br>
                              ><br>
                            </div>

                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 8<br>
                            Date: Sun, 16 Feb 2014 09:04:36 -0800<br>
                            From: "Allamaraju, Subbu" <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>><br>
                            To: Harshad Nakil <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>><br>
                            Cc: "OpenStack Development Mailing List
                            \(not for usage questions\)"<br>
                                    <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            Subject: Re: [openstack-dev] VPC Proposal<br>
                            Message-ID: <<a href="mailto:641D4BA6-DFB2-4D3E-8D67-48F711ADC1B5@subbu.org" target="_blank">641D4BA6-DFB2-4D3E-8D67-48F711ADC1B5@subbu.org</a>><br>
                            Content-Type: text/plain; charset=iso-8859-1<br>
                            <br>
                            Harshad,<br>
                            <br>
                            Thanks for clarifying.<br>
                            <br>
                            > We started looking at this as some of our
                            customers/partners were interested in
                            getting AWS API compatibility. We have had this
                            blueprint and code review pending for a long
                            time now. We will know based on this thread
                            whether the community is interested. But I
                            assumed that the community was interested, as the
                            blueprint was approved and the code review has had
                            no -1(s) for a long time now.<br>
                            <br>
                            Makes sense. I would leave it to others on
                            this list to chime in if there is sufficient
                            interest or not.<br>
                            <br>
                            > To clarify, an incremental path
                            from an AWS compatible API to an OpenStack
                            model is not clear.<br>
                            ><br>
                            > In my mind an AWS compatible API does not
                            need a new openstack model. As more discussion
                            happens on JC's proposal and the implementation
                            becomes clear, we will know how incremental
                            the path is. But at a high level there are two
                            major differences:<br>
                            > 1. A new first class object will be
                            introduced which affects all components.<br>
                            > 2. More than one project can be
                            supported within a VPC.<br>
                            > But it does not change the AWS API(s). So
                            even in JC's model, if you want the AWS API then
                            we will have to keep the VPC to project mapping
                            1:1, since the API will not take both a VPC ID
                            and a project ID.<br>
                            ><br>
                            > More users who want to migrate from AWS,
                            and IaaS providers who want to compete with
                            AWS, should be interested in this compatibility.<br>
                            <br>
                            IMHO that's a tough sell. Though an AWS
                            compatible API does not need an OpenStack
                            abstraction, we would end up with two
                            independent ways of doing similar things.
                            That would be OpenStack repeating itself!<br>
                            <br>
                            Subbu<br>
                            <br>
                            <br>
                            <br>
                            <br>
                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 9<br>
                            Date: Sun, 16 Feb 2014 09:12:54 -0800<br>
                            From: Harshad Nakil <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>><br>
                            To: "Allamaraju, Subbu" <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>><br>
                            Cc: "OpenStack Development Mailing List
                            \(not for usage questions\)"<br>
                                    <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            Subject: Re: [openstack-dev] VPC Proposal<br>
                            Message-ID:
                            <516707826958554641@unknownmsgid><br>
                            Content-Type: text/plain; charset=ISO-8859-1<br>
                            <br>
                            IMHO I don't see two implementations, since
                            right now we have only<br>
                            one. As a community, if we decide to add new
                            abstractions then we will<br>
                            have to change software in every component
                            where the new abstraction<br>
                            makes a difference. That's a normal software
                            development process.<br>
                            Regards<br>
                            -Harshad<br>
                            <br>
                            <br>
                            > On Feb 16, 2014, at 9:03 AM,
                            "Allamaraju, Subbu" <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>>


                            wrote:<br>
                            ><br>
                            > Harshad,<br>
                            ><br>
                            > Thanks for clarifying.<br>
                            ><br>
                            >> We started looking at this as some of
                            our customers/partners were interested in
                            getting AWS API compatibility. We have had this
                            blueprint and code review pending for a long
                            time now. We will know based on this thread
                            whether the community is interested. But I
                            assumed that the community was interested, as the
                            blueprint was approved and the code review has had
                            no -1(s) for a long time now.<br>
                            ><br>
                            > Makes sense. I would leave it to others
                            on this list to chime in if there is
                            sufficient interest or not.<br>
                            ><br>
                            >> To clarify, an incremental
                            path from an AWS compatible API to an
                            OpenStack model is not clear.<br>
                            >><br>
                            >> In my mind an AWS compatible API does
                            not need a new openstack model. As more
                            discussion happens on JC's proposal and the
                            implementation becomes clear, we will know
                            how incremental the path is. But at a high
                            level there are two major differences:<br>
                            >> 1. A new first class object will be
                            introduced which affects all components.<br>
                            >> 2. More than one project can be
                            supported within a VPC.<br>
                            >> But it does not change the AWS API(s).
                            So even in JC's model, if you want the AWS API
                            then we will have to keep the VPC to project
                            mapping 1:1, since the API will not take
                            both a VPC ID and a project ID.<br>
                            >><br>
                            >> More users who want to migrate from
                            AWS, and IaaS providers who want to compete with
                            AWS, should be interested in this
                            compatibility.<br>
                            ><br>
                            > IMHO that's a tough sell. Though an AWS
                            compatible API does not need an OpenStack
                            abstraction, we would end up with two
                            independent ways of doing similar things.
                            That would be OpenStack repeating itself!<br>
                            ><br>
                            > Subbu<br>
                            ><br>
                            ><br>
                            <br>
                            <br>
                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 10<br>
                            Date: Sun, 16 Feb 2014 09:25:02 -0800<br>
                            From: "Martin, JC" <<a href="mailto:jch.martin@gmail.com" target="_blank">jch.martin@gmail.com</a>><br>
                            To: "OpenStack Development Mailing List
                            \(not for usage questions\)"<br>
                                    <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            Subject: Re: [openstack-dev] VPC Proposal<br>
                            Message-ID: <<a href="mailto:B1A58385-DC10-48EF-AA8E-90176F576A40@gmail.com" target="_blank">B1A58385-DC10-48EF-AA8E-90176F576A40@gmail.com</a>><br>
                            Content-Type: text/plain; charset=us-ascii<br>
                            <br>
                            Harshad,<br>
                            <br>
                            I tried to find some discussion around this
                            blueprint.<br>
                            Could you provide us with some notes or
                            threads?<br>
                            <br>
                            Also, about the code review you mention:
                            which one are you talking about?<br>
                            <a href="https://review.openstack.org/#/c/40071/" target="_blank">https://review.openstack.org/#/c/40071/</a><br>
                            <a href="https://review.openstack.org/#/c/49470/" target="_blank">https://review.openstack.org/#/c/49470/</a><br>
                            <a href="https://review.openstack.org/#/c/53171" target="_blank">https://review.openstack.org/#/c/53171</a><br>
                            <br>
                            because they are all abandoned.<br>
                            <br>
                            Could you point me to the code, and update
                            the BP because it seems that the links are
                            not correct.<br>
                            <br>
                            Thanks,<br>
                            <br>
                            JC<br>
                            On Feb 16, 2014, at 9:04 AM, "Allamaraju,
                            Subbu" <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>>


                            wrote:<br>
                            <br>
                            > Harshad,<br>
                            ><br>
                            > Thanks for clarifying.<br>
                            ><br>
                            >> We started looking at this as some of
                            our customers/partners were interested in
                            getting AWS API compatibility. We have had this
                            blueprint and code review pending for a long
                            time now. We will know based on this thread
                            whether the community is interested. But I
                            assumed that the community was interested, as the
                            blueprint was approved and the code review has had
                            no -1(s) for a long time now.<br>
                            ><br>
                            > Makes sense. I would leave it to others
                            on this list to chime in if there is
                            sufficient interest or not.<br>
                            ><br>
                            >> To clarify, an incremental
                            path from an AWS compatible API to an
                            OpenStack model is not clear.<br>
                            >><br>
                            >> In my mind an AWS-compatible API does
                            not need a new OpenStack model. As more
                            discussion happens on JC's proposal and the
                            implementation becomes clear, we will know
                            how incremental the path is. But at a high
                            level there are two major differences:<br>
                            >> 1. A new first-class object will be
                            introduced which affects all components.<br>
                            >> 2. More than one project can be
                            supported within a VPC.<br>
                            >> But it does not change the AWS API(s).
                            So even in JC's model, if you want the AWS API
                            then we will have to keep the VPC-to-project
                            mapping 1:1, since the API will not take
                            both a VPC ID and a project ID.<br>
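                            The 1:1 constraint described above can be sketched in a few lines. This is a hypothetical illustration only (`VpcRegistry` and its methods are invented for the example, not OpenStack code):

```python
# Hypothetical sketch of the 1:1 constraint: AWS-style calls identify
# resources by VPC ID alone, so the backing project must be resolvable
# from the VPC ID, which forces a one-to-one binding.
class VpcRegistry:
    def __init__(self):
        self._project_by_vpc = {}

    def bind(self, vpc_id, project_id):
        # One project per VPC: sharing a VPC across projects would need
        # a second identifier that the AWS API does not carry.
        if vpc_id in self._project_by_vpc:
            raise ValueError(f"{vpc_id} is already bound to a project")
        self._project_by_vpc[vpc_id] = project_id

    def project_for(self, vpc_id):
        # An AWS-style request carries only vpc_id, so this lookup is
        # the only way to route it to a project.
        return self._project_by_vpc[vpc_id]

registry = VpcRegistry()
registry.bind("vpc-123", "project-a")
```

Any API that accepted both a VPC ID and a project ID could relax this binding, which is exactly the tension the thread is discussing.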
                            >><br>
                            >> More users who want to migrate from
                            AWS, and IaaS providers who want to compete
                            with AWS, should be interested in this
                            compatibility.<br>
                            ><br>
                            > IMHO that's a tough sell. Though an
                            AWS-compatible API does not need an OpenStack
                            abstraction, we would end up with two
                            independent ways of doing similar things.
                            That would be OpenStack repeating itself!<br>
                            ><br>
                            > Subbu<br>
                            <div>><br>
                              ><br>
                              ><br>
                              >
                              _______________________________________________<br>
                              > OpenStack-dev mailing list<br>
                              > <a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.org</a><br>
                              > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
                              <br>
                              <br>
                              <br>
                              <br>
                            </div>
                            ------------------------------<br>
                            <br>
                            Message: 11<br>
                            Date: Sun, 16 Feb 2014 09:49:17 -0800<br>
                            From: "Allamaraju, Subbu" <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] [keystone] role
                            of Domain in VPC<br>
                                    definition<br>
                            Message-ID: <<a href="mailto:1756EFC4-ABAF-4377-B44A-219F34C3ABFA@subbu.org" target="_blank">1756EFC4-ABAF-4377-B44A-219F34C3ABFA@subbu.org</a>><br>
                            Content-Type: text/plain; charset=iso-8859-1<br>
                            <br>
                            Harshad,<br>
                            <br>
                            But the key question that Ravi brought up
                            remains. A project is a very small
                            administrative container to manage policies
                            and resources for VPCs. We've been
                            experimenting with VPCs on OpenStack (with
                            some mods) at work for nearly a year, and
                            came across cases where hundreds or thousands
                            of apps in an equal number of projects needed
                            to share resources and policies, and the
                            project-to-VPC mapping did not cut it.<br>
                            <br>
                            I was wondering if there was prior
                            discussion around the mapping of AWS VPC
                            model to OpenStack concepts like projects
                            and domains. Thanks for any pointers.<br>
                            <br>
                            Subbu<br>
                            <br>
                            On Feb 16, 2014, at 8:01 AM, Harshad Nakil
                            <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>>


                            wrote:<br>
                            <br>
                            > Yes, [1] can be done without [2] and
                            [3].<br>
                            > As you are well aware [2] is now merged
                            with group policy discussions.<br>
                            > IMHO an all-or-nothing approach will not
                            get us anywhere.<br>
                            > By the time we line up all our ducks in a
                            row, new features/ideas/blueprints will keep
                            emerging.<br>
                            ><br>
                            > Regards<br>
                            > -Harshad<br>
                            ><br>
                            ><br>
                            > On Feb 16, 2014, at 2:30 AM, Salvatore
                            Orlando <<a href="mailto:sorlando@nicira.com" target="_blank">sorlando@nicira.com</a>>


                            wrote:<br>
                            ><br>
                            >> It seems this work item is made of
                            several blueprints, some of which are not
                            yet approved. This is true at least for the
                            Neutron blueprint regarding policy
                            extensions.<br>
                            >><br>
                            >> Since I first looked at this spec
                            I've been wondering why Nova has been
                            selected as an endpoint for network
                            operations rather than Neutron, but this is
                            probably a design/implementation detail,
                            whereas JC here is looking at the general
                            approach.<br>
                            >><br>
                            >> Nevertheless, my only point here is
                            that it seems that features like this need
                            an "all-or-none" approval.<br>
                            >> For instance, could the VPC feature
                            be considered functional if blueprint [1] is
                            implemented, but not [2] and [3]?<br>
                            >><br>
                            >> Salvatore<br>
                            >><br>
                            >> [1] <a href="https://blueprints.launchpad.net/nova/+spec/aws-vpc-support" target="_blank">https://blueprints.launchpad.net/nova/+spec/aws-vpc-support</a><br>
                            >> [2] <a href="https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron" target="_blank">https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron</a><br>

                            >> [3] <a href="https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy" target="_blank">https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy</a><br>

                            >><br>
                            >><br>
                            >> On 11 February 2014 21:45, Martin,
                            JC <<a href="mailto:jch.martin@gmail.com" target="_blank">jch.martin@gmail.com</a>>


                            wrote:<br>
                            >> Ravi,<br>
                            >><br>
                            >> It seems that the following
                            Blueprint<br>
                            >> <a href="https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support" target="_blank">https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support</a><br>
                            >><br>
                            >> has been approved.<br>
                            >><br>
                            >> However, I cannot find a discussion
                            with regard to the merit of using project
                            vs. domain, or other mechanism for the
                            implementation.<br>
                            >><br>
                            >> I have an issue with this approach,
                            as it prevents tenants within the same
                            domain sharing the same VPC from having
                            projects.<br>
                            >><br>
                            >> As an example, if you are a large
                            organization on AWS, it is likely that you
                            have a large VPC that will be shared by
                            multiple projects. With this proposal, we
                            lose that capability, unless I missed
                            something.<br>
                            >><br>
                            >> JC<br>
                            >><br>
                            >> On Dec 19, 2013, at 6:10 PM, Ravi
                            Chunduru <<a href="mailto:ravivsn@gmail.com" target="_blank">ravivsn@gmail.com</a>>


                            wrote:<br>
                            >><br>
                            >> > Hi,<br>
                            >> >   We had some internal
                            discussions on the role of Domains and VPCs. I
                            would like to expand on this and understand the
                            community's thinking on Keystone domains and
                            VPCs.<br>
                            >> ><br>
                            >> > Is VPC equivalent to Keystone
                            Domain?<br>
                            >> ><br>
                            >> > If so, as a public cloud
                            provider - I create a Keystone domain and
                            give it to an organization which wants a
                            virtual private cloud.<br>
                            >> ><br>
                            >> > Now the question is: if that
                            organization wants department-wise
                            allocation of resources, it becomes
                            difficult to visualize with existing v3
                            Keystone constructs.<br>
                            >> ><br>
                            >> > Currently, it looks like each
                            department of an organization cannot have
                            its own resource management within the
                            organization's VPC (LDAP-based user
                            management, network management, dedicated
                            computes, etc.). For us, an OpenStack project
                            does not match the requirements of a
                            department of an organization.<br>
                            >> ><br>
                            >> > I hope you guessed what we
                            want: a Domain must have VPCs, and a VPC must
                            have projects.<br>
                            >> ><br>
                            >> > I would like to know how the
                            community sees the VPC model in OpenStack.<br>
                            >> ><br>
                            >> > Thanks,<br>
                            >> > -Ravi.<br>
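                            A minimal sketch of the containment Ravi asks for, with illustrative names only (this is not the Keystone v3 data model):

```python
# Illustrative-only sketch: a Domain owns VPCs, and each VPC owns
# Projects, so each department gets its own project inside the
# organization's shared VPC.
from dataclasses import dataclass, field


@dataclass
class Project:
    name: str


@dataclass
class Vpc:
    name: str
    projects: list = field(default_factory=list)


@dataclass
class Domain:
    name: str
    vpcs: list = field(default_factory=list)


# One domain per organization, one VPC shared by several departments.
org = Domain("org-domain")
shared_vpc = Vpc("org-vpc")
shared_vpc.projects.append(Project("dept-finance"))
shared_vpc.projects.append(Project("dept-engineering"))
org.vpcs.append(shared_vpc)
```

Under the blueprint's project-per-VPC mapping, the middle level of this hierarchy collapses into a single project, which is the limitation the thread keeps returning to.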
                            <div>>> ><br>
                              <br>
                            </div>
                            ------------------------------<br>
                            <br>
                            Message: 12<br>
                            Date: Sun, 16 Feb 2014 10:15:11 -0800<br>
                            From: Harshad Nakil <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] [keystone] role
                            of Domain in VPC<br>
                                    definition<br>
                            Message-ID:
                            <4920517322402852354@unknownmsgid><br>
                            Content-Type: text/plain; charset=ISO-8859-1<br>
                            <br>
                            As I said, I am not disagreeing with you or
                            Ravi or JC. I also agree that the<br>
                            OpenStack VPC implementation will benefit
                            from these proposals.<br>
                            What I am saying is that AWS VPC API
                            compatibility does not require them at<br>
                            this point, which is what our blueprint is
                            all about. We are not<br>
                            defining THE "VPC".<br>
                            Let me ask you: what changes in the AWS API
                            when you go to the other model?<br>
                            One argument is that you want multiple
                            projects in a VPC. That's great. But I<br>
                            don't understand how I would specify that if
                            my code was written to use the<br>
                            AWS API.<br>
                            For the argument that you want multiple
                            external networks per VPC, I don't know<br>
                            how to specify that using the AWS API.<br>
                            So the list goes on.<br>
                            <br>
                            Maybe I am missing something. If you don't
                            want AWS compatibility,<br>
                            then that's a different issue altogether,
                            and it should be discussed as<br>
                            such.<br>
                            <br>
                            Regards<br>
                            -Harshad<br>
                            <br>
                            <br>
                            > On Feb 16, 2014, at 9:51 AM,
                            "Allamaraju, Subbu" <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>>


                            wrote:<br>
                            ><br>
                            > Harshad,<br>
                            ><br>
                            > But the key question that Ravi brought
                            up remains. A project is a very small
                            administrative container to manage policies
                            and resources for VPCs. We've been
                            experimenting with VPCs on OpenStack (with
                            some mods) at work for nearly a year, and
                            came across cases where hundreds or thousands
                            of apps in an equal number of projects needed
                            to share resources and policies, and the
                            project-to-VPC mapping did not cut it.<br>
                            ><br>
                            > I was wondering if there was prior
                            discussion around the mapping of AWS VPC
                            model to OpenStack concepts like projects
                            and domains. Thanks for any pointers.<br>
                            ><br>
                            > Subbu<br>
                            ><br>
                            >> On Feb 16, 2014, at 8:01 AM,
                            Harshad Nakil <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>>


                            wrote:<br>
                            >><br>
                            >> Yes, [1] can be done without [2]
                            and [3].<br>
                            >> As you are well aware [2] is now
                            merged with group policy discussions.<br>
                            >> IMHO an all-or-nothing approach will
                            not get us anywhere.<br>
                            >> By the time we line up all our
                            ducks in a row, new features/ideas/blueprints
                            will keep emerging.<br>
                            >><br>
                            >> Regards<br>
                            >> -Harshad<br>
                            >><br>
                            >><br>
                            >>> On Feb 16, 2014, at 2:30 AM,
                            Salvatore Orlando <<a href="mailto:sorlando@nicira.com" target="_blank">sorlando@nicira.com</a>>


                            wrote:<br>
                            >>><br>
                            >>> It seems this work item is made
                            of several blueprints, some of which are not
                            yet approved. This is true at least for the
                            Neutron blueprint regarding policy
                            extensions.<br>
                            >>><br>
                            >>> Since I first looked at this
                            spec I've been wondering why Nova has been
                            selected as an endpoint for network
                            operations rather than Neutron, but this is
                            probably a design/implementation detail,
                            whereas JC here is looking at the general
                            approach.<br>
                            >>><br>
                            >>> Nevertheless, my only point
                            here is that it seems that features like
                            this need an "all-or-none" approval.<br>
                            >>> For instance, could the VPC
                            feature be considered functional if
                            blueprint [1] is implemented, but not [2]
                            and [3]?<br>
                            >>><br>
                            >>> Salvatore<br>
                            >>><br>
                            >>> [1] <a href="https://blueprints.launchpad.net/nova/+spec/aws-vpc-support" target="_blank">https://blueprints.launchpad.net/nova/+spec/aws-vpc-support</a><br>
                            >>> [2] <a href="https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron" target="_blank">https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron</a><br>

                            >>> [3] <a href="https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy" target="_blank">https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy</a><br>

                            >>><br>
                            >>><br>
                            >>> On 11 February 2014 21:45,
                            Martin, JC <<a href="mailto:jch.martin@gmail.com" target="_blank">jch.martin@gmail.com</a>>


                            wrote:<br>
                            >>> Ravi,<br>
                            >>><br>
                            >>> It seems that the following
                            Blueprint<br>
                            >>> <a href="https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support" target="_blank">https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support</a><br>
                            >>><br>
                            >>> has been approved.<br>
                            >>><br>
                            >>> However, I cannot find a
                            discussion with regard to the merit of using
                            project vs. domain, or other mechanism for
                            the implementation.<br>
                            >>><br>
                            >>> I have an issue with this
                            approach, as it prevents tenants within the
                            same domain sharing the same VPC from having
                            projects.<br>
                            >>><br>
                            >>> As an example, if you are a
                            large organization on AWS, it is likely that
                            you have a large VPC that will be shared by
                            multiple projects. With this proposal, we
                            lose that capability, unless I missed
                            something.<br>
                            >>><br>
                            >>> JC<br>
                            >>><br>
                            >>>> On Dec 19, 2013, at 6:10
                            PM, Ravi Chunduru <<a href="mailto:ravivsn@gmail.com" target="_blank">ravivsn@gmail.com</a>>


                            wrote:<br>
                            >>>><br>
                            >>>> Hi,<br>
                            >>>>  We had some internal
                            discussions on the role of Domains and VPCs. I
                            would like to expand on this and understand the
                            community's thinking on Keystone domains and
                            VPCs.<br>
                            >>>><br>
                            >>>> Is VPC equivalent to
                            Keystone Domain?<br>
                            >>>><br>
                            >>>> If so, as a public cloud
                            provider - I create a Keystone domain and
                            give it to an organization which wants a
                            virtual private cloud.<br>
                            >>>><br>
                            >>>> Now the question is: if that
                            organization wants department-wise
                            allocation of resources, it becomes
                            difficult to visualize with existing v3
                            Keystone constructs.<br>
                            >>>><br>
                            >>>> Currently, it looks like
                            each department of an organization cannot
                            have its own resource management within
                            the organization's VPC (LDAP-based user
                            management, network management, dedicated
                            computes, etc.). For us, an OpenStack project
                            does not match the requirements of a
                            department of an organization.<br>
                            >>>><br>
                            >>>> I hope you guessed what we
                            want: a Domain must have VPCs, and a VPC must
                            have projects.<br>
                            >>>><br>
                            >>>> I would like to know how the
                            community sees the VPC model in OpenStack.<br>
                            >>>><br>
                            >>>> Thanks,<br>
                            >>>> -Ravi.<br>
                            <div>>>>><br>
                              <br>
                            </div>
                            ------------------------------<br>
                            <br>
                            Message: 13<br>
                            Date: Sun, 16 Feb 2014 10:31:42 -0800<br>
                            From: "Allamaraju, Subbu" <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] [keystone] role
                            of Domain in VPC<br>
                                    definition<br>
                            Message-ID: <<a href="mailto:7CD9E46E-FC0A-431B-836F-9BD02B0E417A@subbu.org" target="_blank">7CD9E46E-FC0A-431B-836F-9BD02B0E417A@subbu.org</a>><br>
                            Content-Type: text/plain; charset=us-ascii<br>
                            <br>
                            Harshad,<br>
                            <br>
                            This is great. At least there is consensus
                            on what it is and what it is not. I would
                            leave it to others to discuss the merits of
                            an AWS-compatible VPC API for Icehouse.<br>
                            <br>
                            Perhaps this is a good topic to discuss at
                            the Juno design summit.<br>
                            <br>
                            Subbu<br>
                            <br>
                            On Feb 16, 2014, at 10:15 AM, Harshad Nakil
                            <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>>


                            wrote:<br>
                            <br>
                            > As I said, I am not disagreeing with you
                            or Ravi or JC. I also agree that the<br>
                            > OpenStack VPC implementation will
                            benefit from these proposals.<br>
                            > What I am saying is that AWS VPC API
                            compatibility does not require them at<br>
                            > this point, which is what our
                            blueprint is all about. We are not<br>
                            > defining THE "VPC".<br>
                            <br>
                            <br>
                            <br>
                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 14<br>
                            Date: Mon, 17 Feb 2014 08:20:09 +1300<br>
                            From: Robert Collins <<a href="mailto:robertc@robertcollins.net" target="_blank">robertc@robertcollins.net</a>><br>
                            To: Sean Dague <<a href="mailto:sean@dague.net" target="_blank">sean@dague.net</a>><br>
                            Cc: "OpenStack Development Mailing List
                            \(not for usage questions\)"<br>
                                    <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>>,<br>
                                    "<<a href="mailto:openstack-infra@lists.openstack.org" target="_blank">openstack-infra@lists.openstack.org</a>>"<br>
                                    <<a href="mailto:openstack-infra@lists.openstack.org" target="_blank">openstack-infra@lists.openstack.org</a>><br>
                            Subject: Re: [openstack-dev]
                            [OpenStack-Infra] [TripleO] promoting<br>
                                    devtest_seed and devtest_undercloud
                            to voting, + experimental queue<br>
                                    for nova/neutron etc.<br>
                            Message-ID:<br>
                                   
                            <CAJ3HoZ1LC1WqayW3o3RaPxfLC0G-Lb9zxHKftPDW=<a href="mailto:t8wnubCtQ@mail.gmail.com" target="_blank">t8wnubCtQ@mail.gmail.com</a>><br>
                            Content-Type: text/plain; charset=ISO-8859-1<br>
                            <br>
                            On 15 February 2014 09:58, Sean Dague <<a href="mailto:sean@dague.net" target="_blank">sean@dague.net</a>>


                            wrote:<br>
                            <br>
                            >> Lastly, I'm going to propose a
                            merge to infra/config to put our<br>
                            >> undercloud story (which exercises
                            the seed's ability to deploy via<br>
                            >> heat with bare metal) as a check
                            experimental job on our dependencies<br>
                            >> (keystone, glance, nova, neutron) -
                            if thats ok with those projects?<br>
                            >><br>
                            >> -Rob<br>
                            >><br>
                            ><br>
                            > My biggest concern with adding this to
                            check experimental, is the<br>
                            > experimental results aren't published
                            back until all the experimental<br>
                            > jobs are done.<br>
                            <br>
                            If we add a new pipeline - <a href="https://review.openstack.org/#/c/73863/" target="_blank">https://review.openstack.org/#/c/73863/</a>
                            -<br>
                            then we can avoid that.<br>
                            <br>
                            > We've seen really substantial delays,
                            plus a 5 day complete outage a<br>
                            > week ago, on the tripleo cloud. I'd
                            like to see that much more proven<br>
                            > before it starts to impact core
                            projects, even in experimental.<br>
                            <br>
                            I believe that with a new pipeline it won't
                            impact core projects at all.<br>
                            <br>
                            The outage, FWIW, was because I deleted the
                            entire cloud, at the same<br>
                            time that we had a firedrill with some other
                            upstream-of-us issue (I<br>
                            forget the exact one). The multi-region
                            setup we're aiming for should<br>
                            mitigate that substantially :)<br>
                            <div><br>
                              <br>
                              -Rob<br>
                              <br>
                              <br>
                              --<br>
                              Robert Collins <<a href="mailto:rbtcollins@hp.com" target="_blank">rbtcollins@hp.com</a>><br>
                              Distinguished Technologist<br>
                              HP Converged Cloud<br>
                              <br>
                              <br>
                              <br>
                              ------------------------------<br>
                              <br>
                            </div>
                            Message: 15<br>
                            Date: Mon, 17 Feb 2014 08:25:04 +1300<br>
                            From: Robert Collins <<a href="mailto:robertc@robertcollins.net" target="_blank">robertc@robertcollins.net</a>><br>
                            To: "James E. Blair" <<a href="mailto:jeblair@openstack.org" target="_blank">jeblair@openstack.org</a>><br>
                            Cc: "OpenStack Development Mailing List
                            \(not for usage questions\)"<br>
                                    <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>>,<br>
                                    "<<a href="mailto:openstack-infra@lists.openstack.org" target="_blank">openstack-infra@lists.openstack.org</a>>"<br>
                                    <<a href="mailto:openstack-infra@lists.openstack.org" target="_blank">openstack-infra@lists.openstack.org</a>><br>
                            Subject: Re: [openstack-dev]
                            [OpenStack-Infra] [TripleO] promoting<br>
                                    devtest_seed and devtest_undercloud
                            to voting, + experimental queue<br>
                                    for nova/neutron etc.<br>
                            Message-ID:<br>
                                    <<a href="mailto:CAJ3HoZ0me0xfeGArVSqLkC0SPpJwaTeK%2BhNYoePDdh_2FR_K9w@mail.gmail.com" target="_blank">CAJ3HoZ0me0xfeGArVSqLkC0SPpJwaTeK+hNYoePDdh_2FR_K9w@mail.gmail.com</a>><br>

                            Content-Type: text/plain; charset=ISO-8859-1<br>
                            <br>
                            On 15 February 2014 12:21, James E. Blair
                            <<a href="mailto:jeblair@openstack.org" target="_blank">jeblair@openstack.org</a>>


                            wrote:<br>
                            <br>
                            > You won't end up with -1's everywhere,
                            you'll end up with jobs stuck in<br>
                            > the queue indefinitely, as we saw when
                            the tripleo cloud failed<br>
                            > recently.  What's worse is that now
                            that positive check results are<br>
                            > required for enqueuing into the gate,
                            you will also not be able to merge<br>
                            > anything.<br>
                            <br>
                            Ok. So the cost of voting [just in tripleo]
                            would be that a) [tripleo]<br>
                            infrastructure failures and b) breakage from
                            other projects - both<br>
                            things that can cause checks to fail, would
                            stall all tripleo landings<br>
                            until rectified, or until voting is turned
                            off via a change to config<br>
                            which makes this infra's problem.<br>
                            <br>
                            Hmm - so from a tripleo perspective, I think
                            we're ok with this -<br>
                            having a clear indication that 'this is ok'
                            is probably more important<br>
                            to us right now than the more opaque thing
                            we have now where we have<br>
                            to expand every jenkins comment to be sure.<br>
                            <br>
                            But- will infra be ok, if we end up having a
                            firedrill 'please make<br>
                            this nonvoting' change to propose?<br>
                            <div><br>
                              -Rob<br>
                              <br>
                              --<br>
                              Robert Collins <<a href="mailto:rbtcollins@hp.com" target="_blank">rbtcollins@hp.com</a>><br>
                              Distinguished Technologist<br>
                              HP Converged Cloud<br>
                              <br>
                              <br>
                              <br>
                              ------------------------------<br>
                              <br>
                            </div>
                            Message: 16<br>
                            Date: Sun, 16 Feb 2014 11:38:57 -0800<br>
                            From: Ravi Chunduru <<a href="mailto:ravivsn@gmail.com" target="_blank">ravivsn@gmail.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] [keystone] role
                            of Domain in VPC<br>
                                    definition<br>
                            Message-ID:<br>
                                   
                            <CAEgw6yuopjDfeF2vmAXtjiiA+Fz14=tbZcKV+m3eviLb=<a href="mailto:Xf5tQ@mail.gmail.com" target="_blank">Xf5tQ@mail.gmail.com</a>><br>
                            Content-Type: text/plain; charset="utf-8"<br>
                            <br>
                            I agree with JC that we need to pause and
                            discuss the VPC model within<br>
                            openstack before considering AWS
                            compatibility. As Subbu said, we need this<br>
                            discussion at the Juno summit to get consensus.<br>
                            <br>
                            Thanks,<br>
                            -Ravi.<br>
                            <br>
                            <br>
                            On Sun, Feb 16, 2014 at 10:31 AM,
                            Allamaraju, Subbu <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>>


                            wrote:<br>
                            <br>
                            > Harshad,<br>
                            ><br>
                            > This is great. At least there is
                            consensus on what it is and what it is<br>
                            > not. I would leave it to others to
                            discuss merits of an AWS compat VPC<br>
                            > API for Icehouse.<br>
                            ><br>
                            > Perhaps this is a good topic to discuss
                            at the Juno design summit.<br>
                            ><br>
                            > Subbu<br>
                            ><br>
                            > On Feb 16, 2014, at 10:15 AM, Harshad
                            Nakil <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>><br>
                            > wrote:<br>
                            ><br>
                            > > As said, I am not disagreeing with
                            you or Ravi or JC. I also agree that<br>
                            > > the Openstack VPC implementation will
                            benefit from these proposals.<br>
                            > > What I am saying is that AWS VPC API
                            compatibility is not required at<br>
                            > > this point, which is what our
                            blueprint is all about. We are not<br>
                            > > defining THE "VPC".<br>
                            <div>><br>
                              ><br>
                              >
                              _______________________________________________<br>
                              > OpenStack-dev mailing list<br>
                              > <a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.org</a><br>
                              > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
                              ><br>
                              <br>
                              <br>
                              <br>
                            </div>
                            --<br>
                            Ravi<br>
                            <div>-------------- next part
                              --------------<br>
                              An HTML attachment was scrubbed...<br>
                            </div>
                            URL: <<a href="http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2ef6cc51/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2ef6cc51/attachment-0001.html</a>><br>

                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 17<br>
                            Date: Sun, 16 Feb 2014 11:54:54 -0800<br>
                            From: Ravi Chunduru <<a href="mailto:ravivsn@gmail.com" target="_blank">ravivsn@gmail.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] VPC Proposal<br>
                            Message-ID:<br>
                                    <<a href="mailto:CAEgw6ysbaY6-8w_VOme5mU1k29v0dy42mvuRkTsTR7XXKw6CMg@mail.gmail.com" target="_blank">CAEgw6ysbaY6-8w_VOme5mU1k29v0dy42mvuRkTsTR7XXKw6CMg@mail.gmail.com</a>><br>
                            Content-Type: text/plain; charset="utf-8"<br>
                            <br>
                            IMO, VPC means having a managed set of
                            resources not just limited to<br>
                            networks but also projects.<br>
                            I feel it's not about incrementally starting
                            with AWS compatibility, but<br>
                            doing it right with AWS compatibility taken
                            into consideration.<br>
                            <br>
                            Thanks,<br>
                            -Ravi.<br>
                            <br>
                            <br>
                            On Sun, Feb 16, 2014 at 8:47 AM, Harshad
                            Nakil<br>
                            <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>>wrote:<br>
                            <br>
                            > Comments Inline<br>
                            ><br>
                            > Regards<br>
                            > -Harshad<br>
                            ><br>
                            ><br>
                            > On Sat, Feb 15, 2014 at 11:39 PM,
                            Allamaraju, Subbu <<a href="mailto:subbu@subbu.org" target="_blank">subbu@subbu.org</a>>wrote:<br>
                            ><br>
                            >> Harshad,<br>
                            >><br>
                            >> Curious to know if there is a broad
                            interest in an AWS compatible API in<br>
                            >> the community?<br>
                            ><br>
                            ><br>
                            > We started looking at this as some of our
                            customers/partners were interested<br>
                            > in getting AWS API compatibility. We have
                            this blueprint and code review<br>
                            > pending for a long time now. We will know
                            based on this thread whether the<br>
                            > community is interested. But I assumed
                            that the community was interested, as the<br>
                            > blueprint was approved and the code review
                            has had no -1(s) for a long time now.<br>
                            ><br>
                            ><br>
                            >> To clarify, a clear incremental
                            path from an AWS compatible API to an<br>
                            >> OpenStack model is not clear.<br>
                            >><br>
                            ><br>
                            > In my mind an AWS compatible API does not
                            need a new openstack model. As more<br>
                            > discussion happens on JC's proposal and
                            the implementation becomes clearer, we will<br>
                            > know how incremental the path is. But
                            at a high level there are two major<br>
                            > differences:<br>
                            > 1. A new first-class object will be
                            introduced which affects all components.<br>
                            > 2. More than one project can be
                            supported within a VPC.<br>
                            > But it does not change the AWS API(s). So
                            even in JC's model, if you want the AWS<br>
                            > API then we will have to keep the VPC to
                            project mapping 1:1, since the API<br>
                            > will not take both a VPC ID and a project
                            ID.<br>
                            ><br>
                            ><br>
                            <br>
                            <br>
                            <br>
                            > More users who want to migrate from AWS,
                            and IaaS providers who want to compete<br>
                            > with AWS, should be interested in this
                            compatibility.<br>
                            ><br>
                            > There also seems to be a terminology
                            issue here. What is the definition of "VPC"?<br>
                            > If we assume what AWS implements is
                            "VPC",<br>
                            > then what JC is proposing is "VOS" or
                            "VDC" (virtual openstack or virtual DC),<br>
                            > as all or most of the current openstack
                            features are available to the user in this<br>
                            > new abstraction. I actually like this
                            new abstraction.<br>
                            ><br>
                            ><br>
                            >> Subbu<br>
                            >><br>
                            >> On Feb 15, 2014, at 10:04 PM,
                            Harshad Nakil <<a href="mailto:hnakil@contrailsystems.com" target="_blank">hnakil@contrailsystems.com</a>><br>
                            >> wrote:<br>
                            >><br>
                            >> ><br>
                            >> > I agree with the problem as
                            defined by you, and it will require more<br>
                            >> fundamental changes.<br>
                            >> > Meanwhile, many users will
                            benefit from AWS VPC API compatibility.<br>
                            <div>>><br>
                              >><br>
                              >>
                              _______________________________________________<br>
                              >> OpenStack-dev mailing list<br>
                              >> <a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.org</a><br>
                              >> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
                              >><br>
                              ><br>
                              ><br>
                              >
                              _______________________________________________<br>
                              > OpenStack-dev mailing list<br>
                              > <a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.org</a><br>
                              > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
                              ><br>
                              ><br>
                              <br>
                              <br>
                            </div>
                            --<br>
                            Ravi<br>
                            <div>-------------- next part
                              --------------<br>
                              An HTML attachment was scrubbed...<br>
                            </div>
                            URL: <<a href="http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/745d6d7d/attachment-0001.html" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/745d6d7d/attachment-0001.html</a>><br>

                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 18<br>
                            Date: Sun, 16 Feb 2014 12:08:15 -0800<br>
                            From: Vishvananda Ishaya <<a href="mailto:vishvananda@gmail.com" target="_blank">vishvananda@gmail.com</a>><br>
                            <div>To: "OpenStack Development
                              Mailing List (not for usage questions)"<br>
                                      <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            Subject: Re: [openstack-dev] OpenStack-dev
                            Digest, Vol 22, Issue 39<br>
                            Message-ID: <<a href="mailto:91C14EC4-02F8-4DFC-9145-08BE2DA249AD@gmail.com" target="_blank">91C14EC4-02F8-4DFC-9145-08BE2DA249AD@gmail.com</a>><br>
                            Content-Type: text/plain;
                            charset="windows-1252"<br>
                            <br>
                            <br>
                            On Feb 15, 2014, at 4:36 AM, Vinod Kumar
                            Boppanna <<a href="mailto:vinod.kumar.boppanna@cern.ch" target="_blank">vinod.kumar.boppanna@cern.ch</a>>


                            wrote:<br>
                            <br>
                            ><br>
                            > Dear Vish,<br>
                            ><br>
                            > I completely agree with you. It's like a
                            trade-off between getting re-authenticated
                            (when, in a hierarchy, a user has different
                            roles at different levels) and parsing the
                            entire hierarchy down to the leaf and including
                            all the roles the user has at each level in
                            the scope.<br>
                            ><br>
                            > I am ok with any one (both has some
                            advantages and dis-advantages).<br>
                            ><br>
                            > But one point I didn't understand: why
                            should we parse the tree above the level
                            where the user gets authenticated (as you
                            specified in the reply)? If a user is
                            authenticated at level 3, do we mean
                            that the roles at level 2 and level 1
                            should also be passed?<br>
                            > Why is this needed? I only see that either
                            we pass only the role at the level where the user
                            is authenticated, or we pass the roles
                            from that level down to the leaf.<br>
                            <br>
                            <br>
                            This is needed because in my proposed model
                            roles are inherited down the hierarchy. That
                            means if you authenticate against
                            ProjA.ProjA2 and you have a role like
                            "netadmin" in ProjA, you will also have it
                            in ProjA2. So it is necessary to walk up the
                            tree to find the full list of roles.<br>
                            <br>
                            Vish<br>
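                            The inheritance walk described above can be
                            sketched roughly as follows. This is only an
                            illustration with invented names and data
                            (ASSIGNMENTS, effective_roles), not Keystone
                            code: roles granted on a parent project are
                            inherited by every child, so resolving the
                            effective roles for ProjA.ProjA2 means
                            walking the dotted hierarchy from the root
                            down and unioning the assignments at each
                            level.<br>
<pre><code>```python
# Hypothetical role assignments keyed by dotted project path (made-up data).
ASSIGNMENTS = {
    "ProjA": {"netadmin"},
    "ProjA.ProjA2": {"member"},
}

def effective_roles(project_path: str) -> set[str]:
    """Union the roles assigned at project_path and every ancestor."""
    roles: set[str] = set()
    parts = project_path.split(".")
    # Each prefix of the path ("ProjA", then "ProjA.ProjA2", ...) is an
    # ancestor whose role assignments are inherited downward.
    for i in range(1, len(parts) + 1):
        roles |= ASSIGNMENTS.get(".".join(parts[:i]), set())
    return roles

# A user with "netadmin" on ProjA also has it on ProjA.ProjA2:
print(sorted(effective_roles("ProjA.ProjA2")))  # ['member', 'netadmin']
```</code></pre>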
                            <br>
                            ><br>
                            > Regards,<br>
                            > Vinod Kumar Boppanna<br>
                            >
                            ________________________________________<br>
                            > Message: 21<br>
                            > Date: Fri, 14 Feb 2014 10:13:59 -0800<br>
                            > From: Vishvananda Ishaya <<a href="mailto:vishvananda@gmail.com" target="_blank">vishvananda@gmail.com</a>><br>
                            <div>> To: "OpenStack
                              Development Mailing List (not for usage
                              questions)"<br>
                              >        <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            </div>
                            <div>> Subject: Re:
                              [openstack-dev] Hierarchicical
                              Multitenancy Discussion<br>
                            </div>
                            > Message-ID: <<a href="mailto:4508B18F-458B-4A3E-BA66-22F9FA47EAC0@gmail.com" target="_blank">4508B18F-458B-4A3E-BA66-22F9FA47EAC0@gmail.com</a>><br>
                            > Content-Type: text/plain;
                            charset="windows-1252"<br>
                            ><br>
                            > Hi Vinod!<br>
                            ><br>
                            > I think you can simplify the roles in
                            the hierarchical model by only passing the
                            roles for the authenticated project and
                            above. All roles are then inherited down.
                            This means it isn't necessary to pass a
                            scope along with each role. The scope is
                            just passed once with the token and the
                            project-admin role (for example) would be
                            checking to see that the user has the
                            project-admin role and that the project_id
                            prefix matches.<br>
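                            A minimal sketch of the prefix check just
                            described, with invented names
                            (is_project_admin, the token dict shape);
                            this is not Keystone's actual policy code.
                            The scope travels once with the token, and
                            the check verifies the role plus that the
                            target project sits at or below the token's
                            scope in the dotted hierarchy.<br>
<pre><code>```python
def is_project_admin(token: dict, target_project_id: str) -> bool:
    """True if the token grants project-admin at or below its scope."""
    scope = token["project_id"]
    # Role must be present, and the target must equal the scoped project
    # or be one of its descendants (dotted-prefix match).
    return (
        "project-admin" in token["roles"]
        and (target_project_id == scope
             or target_project_id.startswith(scope + "."))
    )

token = {"project_id": "ProjA", "roles": ["project-admin"]}
print(is_project_admin(token, "ProjA.ProjA2"))  # True: inherited down
print(is_project_admin(token, "ProjB"))         # False: outside the prefix
```</code></pre>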
                            ><br>
                            > There is only one case that this
                            doesn't handle, and that is when the user
                            has one role (say member) in ProjA and
                            project-admin in ProjA2. If the user is
                            authenticated to ProjA, he can't do
                            project-adminy stuff for ProjA2 without
                            reauthenticating. I think this is a
                            reasonable sacrifice considering how much
                            easier it would be to just pass the parent
                            roles instead of going through all of the
                            children.<br>
                            ><br>
                            > Vish<br>
                            ><br>
                            <div>>
                              _______________________________________________<br>
                              > OpenStack-dev mailing list<br>
                              > <a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.org</a><br>
                              > <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
                              <br>
                            </div>
                            <div>-------------- next part
                              --------------<br>
                              A non-text attachment was scrubbed...<br>
                              Name: signature.asc<br>
                              Type: application/pgp-signature<br>
                            </div>
                            <div>Size: 455 bytes<br>
                              Desc: Message signed with OpenPGP using
                              GPGMail<br>
                            </div>
                            URL: <<a href="http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/7c320704/attachment-0001.pgp" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/7c320704/attachment-0001.pgp</a>><br>

                            <br>
                            ------------------------------<br>
                            <br>
                            Message: 19<br>
                            Date: Sun, 16 Feb 2014 16:20:52 -0500<br>
                            From: Mike Spreitzer <<a href="mailto:mspreitz@us.ibm.com" target="_blank">mspreitz@us.ibm.com</a>><br>
                            To: "OpenStack Development Mailing List
                            (not for usage questions)"<br>
                                    <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
                            Subject: Re: [openstack-dev] heat
                            run_tests.sh fails with one huge line of output<br>
                            Message-ID:<br>
                                    <<a href="mailto:OF81356D12.13A4D038-ON85257C81.0073FA5A-85257C81.00754456@us.ibm.com" target="_blank">OF81356D12.13A4D038-ON85257C81.0073FA5A-85257C81.00754456@us.ibm.com</a>><br>

                            Content-Type: text/plain; charset="us-ascii"<br>
                            <br>
                            Kevin, I changed no code; it was a fresh
                            DevStack install.<br>
                            <br>
                            Robert Collins <<a href="mailto:robertc@robertcollins.net" target="_blank">robertc@robertcollins.net</a>> wrote on 02/16/2014 05:33:59 AM:<br>
                            > A) [fixed in testrepository trunk] the
                            output from subunit.run<br>
                            > discover .... --list is being shown
                            verbatim when an error happens,<br>
                            > rather than being machine processed and
                            the test listings elided.<br>
                            ><br>
                            > To use trunk - in your venv:<br>
                            > bzr branch lp:testrepository<br>
                            > pip install testrepository<br>
                            ><br>
                            > B) If you look at the end of that wall
                            of text you'll see 'Failed<br>
                            > imports' in there, and the names after
                            that are modules that failed<br>
                            > to import - for each of those if you
                            try to import it in python,<br>
                            > you'll find the cause, and there's
                            likely just one cause.<br>
                            <br>
                            Thanks Robert, I tried following your leads
                            but got nowhere; perhaps I<br>
                            need a few more clues.<br>
                            <br>
                            I am not familiar with bzr (nor baz), and it
                            wasn't obvious to me how to<br>
                            fit that into my workflow --- which was:<br>
                            (1) install DevStack<br>
                            (2) install libmysqlclient-dev<br>
                            (3) install flake8<br>
                            (4) cd /opt/stack/heat<br>
                            (5) ./run_tests.sh<br>
                            <br>
                            I guessed that your (A) would apply if I use
                            a venv and go between (1) the<br>
                            `python tools/install_venv.py` inside
                            run_tests.sh and (2) the invocation<br>
                            inside run_tests.sh of its run_tests
                            function.  So I manually invoked<br>
                            `python tools/install_venv.py`, then entered
                            that venv, then issued your<br>
                            commands of (A) (discovered I needed to
                            install bzr and did so), then<br>
                            exited that venv, then invoked heat's
                            `run_tests -V -u` to use the venv<br>
                            thus constructed.  It still produced one
                            huge line of output.  Here I<br>
                            attach a typescript of that:<br>
                            <br>
                            <br>
                            <br>
                            You will see that the huge line still ends
                            with something about import<br>
                            error, and now lists one additional package
                            ---<br>
                            heat.tests.test_neutron_firewalld.  I then
                            tried your (B), testing manual<br>
                            imports.   All worked except for the last,
                            which failed because there is<br>
                            indeed no such thing (why is there a
                              spurious 'd' at the end of the<br>
                            package name?).  Here is a typescript of
                            that:<br>
                            <br>
                            <br>
                            <br>
                            Thanks,<br>
                            Mike<br>
                            -------------- next part --------------<br>
                            An embedded and charset-unspecified text was
                            scrubbed...<br>
                            Name: testlog.txt<br>
                            URL: <<a href="http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2f7188ae/attachment.txt" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2f7188ae/attachment.txt</a>><br>

                            -------------- next part --------------<br>
                            An embedded and charset-unspecified text was
                            scrubbed...<br>
                            Name: testlog2.txt<br>
                            URL: <<a href="http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2f7188ae/attachment-0001.txt" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2f7188ae/attachment-0001.txt</a>><br>
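                            <br>
                            Robert's point (B), importing each module
                            named after "Failed imports" by hand to
                            surface the real error, can be sketched as a
                            small script. This is a minimal sketch
                            assuming only the standard library;
                            `check_import` is a hypothetical helper, and
                            the heat module name is the one from the
                            thread:<br>

```python
import importlib

def check_import(name):
    """Try to import `name`; return None on success, or the error text on failure."""
    try:
        importlib.import_module(name)
        return None
    except Exception as exc:  # any exception raised during import is the "cause"
        return str(exc)

# A module that exists imports cleanly (returns None)...
assert check_import("json") is None
# ...while the misspelled name from the thread (stray trailing 'd')
# reports why the test loader's discovery fails.
print(check_import("heat.tests.test_neutron_firewalld"))
```

                            Running this inside the same venv that
                            `run_tests.sh -V -u` uses would show the
                            underlying ImportError instead of the one
                            huge unprocessed listing.<br>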

                            <br>
                            ------------------------------<br>
                            <div><br>
                            </div>
                            End of OpenStack-dev Digest, Vol 22, Issue
                            45<br>
*********************************************<br>
                            <div>
                              <div><br>
                              </div>
                            </div>
                          </blockquote>
                        </div>
                        <br>
                        <br clear="all">
                        <div><br>
                        </div>
                        -- <br>
                        <div dir="ltr">------------------------------------------<br>
                          Telles Mota Vidal Nobrega
                          <div>BSc in Computer Science at UFCG<br>
                            Software Engineer at Pulsar OpenStack Project
                            - HP/LSD-UFCG</div>
                        </div>
                      </div>
                      <br>
                    </blockquote>
                    <br>
                  </div>
                </blockquote>
              </div>
              <br>
              <br>
            </blockquote>
            <br>
          </div>
        </blockquote>
      </div>
      <br>
      <br>
    </blockquote>
    <br>
  </div>

<br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr">Raildo Mascena<br>BSc in Computer Science - UFCG<br><div>Developer at the Distributed Systems Laboratory (LSD) - UFCG<br></div></div>

</div>