[openstack-dev] Hierarchicical Multitenancy Discussion

Adam Young ayoung at redhat.com
Wed Feb 26 04:14:38 UTC 2014


On 02/20/2014 05:18 PM, Vishvananda Ishaya wrote:
>
> On Feb 19, 2014, at 5:58 PM, Adam Young <ayoung at redhat.com> wrote:
>
>> On 02/18/2014 02:28 PM, Vishvananda Ishaya wrote:
>>>
>>> On Feb 18, 2014, at 11:04 AM, Adam Young <ayoung at redhat.com> wrote:
>>>
>>>> On 02/18/2014 12:53 PM, Telles Nobrega wrote:
>>>>> Hello everyone,
>>>>>
>>>>> Raildo and I were responsible for implementing Hierarchical Projects
>>>>> in Keystone.
>>>>>
>>>>> Here is our first prototype: 
>>>>> https://github.com/tellesnobrega/keystone_hierarchical_projects
>>>>>
>>>>> We want to have it tested with Vishy's implementation this week.
>>>>>
>>>>> Here is a guide on how to test the implementation:
>>>>>
>>>>> 1. Start a devstack using the keystone code;
>>>>> 2. Create a new project using the following body:
>>>>> {
>>>>>     "project": {
>>>>>         "description": "test_project",
>>>>>         "domain_id": "default",
>>>>>         "parent_project_id": "$parent_project_id",
>>>>>         "enabled": true,
>>>>>         "name": "test_project"
>>>>>     }
>>>>> }
>>>>>
>>>>> 3. Give a user a role in the project;
>>>>> 4. Get a token for "test_project" and check that the hierarchy is 
>>>>> there like the following:
>>>>> {
>>>>>      "token": {
>>>>>          "methods": [
>>>>>              "password"
>>>>>          ],
>>>>>          "roles": [
>>>>>              {
>>>>>                  "id": "c60f0d7461354749ae8ac8bace3e35c5",
>>>>>                  "name": "admin"
>>>>>              }
>>>>>          ],
>>>>>          "expires_at": "2014-02-18T15:52:03.499433Z",
>>>>>          "project": {
>>>>>              "hierarchical_ids": "
>>>>> openstack.
>>>>> 8a4ebcf44ebc47e0b98d3d5780c1f71a.de2a7135b01344cd82a02117c005ce47",
>>>>
>>>> These should be names, not IDs.  There is going to be a need to
>>>> move projects around inside the hierarchy, and the ID stays the
>>>> same.  Let's get this right up front.
>>>
>>> Can you give more detail here? I can see arguments for both ways of 
>>> doing this but continuing to use ids for ownership is an easier 
>>> choice. Here is my thinking:
>>>
>>> 1. all of the projects use ids for ownership currently so it is a 
>>> smaller change
>> That does not change.  It is the hierarchy that is labeled by name.
>
> The issue is that we are storing the hierarchy of ownership in nova. 
> We can either store the hierarchy by id or by name. Note that we are 
> not adding a new field for this hierarchy but using the existing 
> ownership field (which is called project_id in nova). My point is that 
> if we use ids, then this field would be backwards compatible. If we 
> decide to use name instead (which has some advantages for display 
> purposes), then we would need some kind of db sync migration which 
> modifies all of the fields from id -> name.
>>
>>> 2. renaming a project in keystone would not invalidate the ownership 
>>> hierarchy (Note that moving a project around would invalidate the 
>>> hierarchy in both cases)
>>>
>> Renaming would not change anything.
>>
>> I would say the rule should be this:  IDs are basically UUIDs, and
>> are immutable.  Names are mutable.  Each project has a parent ID.  A
>> project can either be referenced directly by ID, or hierarchically
>> by name.  In addition, you can navigate to a project by traversing
>> the set of IDs, but you need to know where you are going.  Thus the
>> array
>>
>> ['abcd1234', 'fedd3213', '3e3e3e3e'] would be a way to find a project,
>> but the project ID for the leaf node would still be just '3e3e3e3e'.
>
> As I mention above, all of this makes sense inside of keystone, but 
> doesn't address the problem of how we are storing the hierarchy on the 
> nova side. The owner field in nova can be:
>
> 1) abcd1234.fedd3213.3e3e3e3e
>
> or it can be:
>
> 2) orga.proja.suba

Owner should be separate from project.  But that is an aside.  I think 
you are mixing two ideas together.  Let's sit down at the summit to clear 
this up, but the IDs should not be hierarchical, the names should, and 
if you mess with that, it is going to be, well, a mess....
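The rule above (immutable UUID-style IDs, mutable names, hierarchy expressed through names) can be sketched as a toy model. This is not Keystone code; the classes, helpers, and sample names are invented for illustration:

```python
import uuid

class Project:
    """Toy project node: immutable ID, mutable name, named children."""
    def __init__(self, name, parent=None):
        self.id = uuid.uuid4().hex   # immutable; survives renames and moves
        self.name = name             # mutable display name
        self.parent = parent
        self.children = {}
        if parent is not None:
            parent.children[name] = self

def by_path(root, path):
    """Resolve a project hierarchically by name, e.g. 'orga/proja'."""
    node = root
    for part in path.split('/'):
        node = node.children[part]
    return node

def rename(project, new_name):
    """Renaming touches only the name; the ID never changes."""
    if project.parent is not None:
        del project.parent.children[project.name]
        project.parent.children[new_name] = project
    project.name = new_name

root = Project('openstack')
orga = Project('orga', root)
proja = Project('proja', orga)

leaf = by_path(root, 'orga/proja')   # hierarchical lookup by name
old_id = proja.id
rename(orga, 'neworg')               # path changes...
assert by_path(root, 'neworg/proja').id == old_id  # ...ID stays stable
```

The point the sketch makes: a rename invalidates stored *name paths* but never stored *IDs*, which is why ownership records keyed by ID survive renames.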

We have a lot going on getting ready for Icehouse 3, and I don't want to 
be rushed on this, as we will have to live with it for a long time.  
Nova is not the only consumer of projects, and we need to make something 
that works across the board.


>
> To explicitly state the tradeoffs:
>
> 1 is backwards compatible +
We are actually doing something like this for domain users: 
userid@@domainid, where both are UUIDs (or possibly the userid comes out 
of LDAP), but the hierarchy there is only two levels.  It is necessary 
there because one part is assigned by keystone (the domain ID) and one 
part by LDAP or the remote IdP.
> 1 doesn't need to be updated if a project is renamed +
But it does need to be redone if the project gets moved in the 
hierarchy, and we have a pre-existing feature request for that.

> 1 is not user friendly (need to map ids to names to display to the user) -
You need to walk the tree to generate the "good" name.  But that can 
also be used to navigate.  Path names like URLs are unsurprising. 
Hierarchical IDs are not.
>
> both need to be updated if a project is moved in the hierarchy
Not if the project only knows its local name.

Owner can continue to be the short ID.  You only need to map it to 
translate for readability.  It's like SQL: use a view to denormalize.
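The SQL-view analogy can be made concrete with sqlite3: store only (id, name, parent_id), keep the short ID as the ownership field, and derive the readable path on demand with a recursive view. A rough sketch; the table layout and sample IDs are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project (id TEXT PRIMARY KEY, name TEXT, parent_id TEXT);
INSERT INTO project VALUES
  ('abcd1234', 'orga',  NULL),
  ('fedd3213', 'proja', 'abcd1234'),
  ('3e3e3e3e', 'suba',  'fedd3213');

-- Denormalizing view: walk parent links to build a path name per ID.
CREATE VIEW project_path AS
WITH RECURSIVE walk(id, path) AS (
    SELECT id, name FROM project WHERE parent_id IS NULL
    UNION ALL
    SELECT p.id, walk.path || '/' || p.name
    FROM project p JOIN walk ON p.parent_id = walk.id
)
SELECT id, path FROM walk;
""")

# Ownership stays the short id; readability comes from the view.
path = conn.execute(
    "SELECT path FROM project_path WHERE id = ?", ('3e3e3e3e',)
).fetchone()[0]
print(path)  # orga/proja/suba
```

Renaming or moving a project only updates one row in `project`; nothing keyed by ID elsewhere has to change, which is exactly the denormalization argument.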
>
> Vish
>
>>
>>
>>> OTOH, the advantage of names is that it makes displaying the ownership 
>>> much easier on the service side.
>>>
>>> Vish
>>>
>>>>
>>>>>              "hierarchy": "test1",
>>>>>              "domain": {
>>>>>                  "id": "default",
>>>>>                  "name": "Default"
>>>>>              },
>>>>>              "id": "de2a7135b01344cd82a02117c005ce47",
>>>>>              "name": "test1"
>>>>>          },
>>>>>          "extras": {},
>>>>>          "user": {
>>>>>              "domain": {
>>>>>                  "id": "default",
>>>>>                  "name": "Default"
>>>>>              },
>>>>>              "id": "895864161f1e4beaae42d9392ec105c8",
>>>>>              "name": "admin"
>>>>>          },
>>>>>          "issued_at": "2014-02-18T14:52:03.499478Z"
>>>>>      }
>>>>> }
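The request body used in step 2 of the guide above can be assembled with a small helper like the following (a sketch; the helper name is invented, `parent_project_id` is the field the prototype uses, and `$parent_project_id` stays a placeholder to be filled in by the tester):

```python
import json

def build_project_body(name, domain_id, parent_project_id,
                       description=None, enabled=True):
    """Assemble the Keystone v3 project-creation body from step 2."""
    return {
        "project": {
            "description": description or name,
            "domain_id": domain_id,
            "parent_project_id": parent_project_id,
            "enabled": enabled,
            "name": name,
        }
    }

body = build_project_body("test_project", "default", "$parent_project_id")
print(json.dumps(body, indent=4))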
>>>>>
>>>>> OpenStack is the root project of the tree; this can also be seen 
>>>>> when getting a token for the admin project or another default 
>>>>> project in Devstack.
>>>>>
>>>>> Hope to hear your feedback soon.
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>> On Mon, Feb 17, 2014 at 6:09 AM, Vinod Kumar Boppanna 
>>>>> <vinod.kumar.boppanna at cern.ch> wrote:
>>>>>
>>>>>     Dear Vish,
>>>>>
>>>>>     I will change the concept of parsing roles up to the leaf node to
>>>>>     parsing the roles up to level 1. But I have a
>>>>>     small doubt and I want to confirm it with you before making this
>>>>>     change.
>>>>>
>>>>>     If there are, let's say, 10 levels in the hierarchy and the user
>>>>>     is getting authenticated at level 9, should I check the roles
>>>>>     starting from level 9 up to level 1? Of course, the difference
>>>>>     here is (compared to what I put in the wiki page) that only the
>>>>>     roles at each level (if different) need to be added to the scope,
>>>>>     with no need to add the project name and role individually.
>>>>>     Is this ok, considering the fact that the deeper in the
>>>>>     hierarchy the user is getting authenticated, the more time is
>>>>>     needed to parse up to level 1?
>>>>>
>>>>>     I will wait for your response and then modify the POC accordingly.
>>>>>
>>>>>     Thanks & Regards,
>>>>>     Vinod Kumar Boppanna
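The traversal described above — collecting roles from the authentication level back up to level 1, adding each role only once — might look like this in outline. A toy sketch: the `Node` class, `parent` links, and the `grants` mapping are hypothetical stand-ins, not Keystone's data model:

```python
def roles_up_to_root(project, assignments):
    """Walk from the scoped project up to level 1, merging role names.

    `assignments` maps project -> set of role names granted there.
    Cost is linear in the depth, which is the concern for deep trees.
    """
    seen = []
    node = project
    while node is not None:
        for role in sorted(assignments.get(node, ())):
            if role not in seen:       # add each role only once
                seen.append(role)
        node = node.parent
    return seen

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

level1 = Node("level1")
level2 = Node("level2", level1)
level3 = Node("level3", level2)

grants = {level3: {"member"}, level2: {"member", "admin"}, level1: {"admin"}}
print(roles_up_to_root(level3, grants))  # ['member', 'admin']
```

Authenticating at level 3 walks three nodes; authenticating at level 9 of a 10-level tree would walk nine — the linear cost Vinod is asking about.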
>>>>>     ________________________________________
>>>>>     From: openstack-dev-request at lists.openstack.org
>>>>>     Sent: 16 February 2014 22:21
>>>>>     To: openstack-dev at lists.openstack.org
>>>>>     Subject: OpenStack-dev Digest, Vol 22, Issue 45
>>>>>
>>>>>
>>>>>     Today's Topics:
>>>>>
>>>>>        1. Re: [Nova][VMWare] VMwareVCDriver related to resize/cold
>>>>>           migration (Gary Kotton)
>>>>>        2. [Neutron]Do you think tenant_id should be verified (Dong
>>>>>     Liu)
>>>>>        3. Re: [Nova][VMWare] VMwareVCDriver related to resize/cold
>>>>>           migration (Jay Lau)
>>>>>        4. [neutron][policy] Using network services with network
>>>>>           policies (Mohammad Banikazemi)
>>>>>        5. Re: [Nova][VMWare] VMwareVCDriver related to resize/cold
>>>>>           migration (Jay Lau)
>>>>>        6. Re: [keystone] role of Domain in VPC definition (Harshad
>>>>>     Nakil)
>>>>>        7. Re: VPC Proposal (Harshad Nakil)
>>>>>        8. Re: VPC Proposal (Allamaraju, Subbu)
>>>>>        9. Re: VPC Proposal (Harshad Nakil)
>>>>>       10. Re: VPC Proposal (Martin, JC)
>>>>>       11. Re: [keystone] role of Domain in VPC definition
>>>>>           (Allamaraju, Subbu)
>>>>>       12. Re: [keystone] role of Domain in VPC definition (Harshad
>>>>>     Nakil)
>>>>>       13. Re: [keystone] role of Domain in VPC definition
>>>>>           (Allamaraju, Subbu)
>>>>>       14. Re: [OpenStack-Infra] [TripleO] promoting devtest_seed and
>>>>>           devtest_undercloud to voting, + experimental queue for
>>>>>           nova/neutron etc. (Robert Collins)
>>>>>       15. Re: [OpenStack-Infra] [TripleO] promoting devtest_seed and
>>>>>           devtest_undercloud to voting, + experimental queue for
>>>>>           nova/neutron etc. (Robert Collins)
>>>>>       16. Re: [keystone] role of Domain in VPC definition (Ravi
>>>>>     Chunduru)
>>>>>       17. Re: VPC Proposal (Ravi Chunduru)
>>>>>       18. Re: OpenStack-dev Digest, Vol 22, Issue 39 (Vishvananda
>>>>>     Ishaya)
>>>>>       19. Re: heat run_tests.sh fails with one huge line of output
>>>>>           (Mike Spreitzer)
>>>>>
>>>>>
>>>>>     ----------------------------------------------------------------------
>>>>>
>>>>>     Message: 1
>>>>>     Date: Sun, 16 Feb 2014 05:40:05 -0800
>>>>>     From: Gary Kotton <gkotton at vmware.com>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org>
>>>>>     Subject: Re: [openstack-dev] [Nova][VMWare] VMwareVCDriver
>>>>>     related to
>>>>>             resize/cold migration
>>>>>     Message-ID: <CF268BE4.465C7%gkotton at vmware.com>
>>>>>     Content-Type: text/plain; charset="us-ascii"
>>>>>
>>>>>     Hi,
>>>>>     There are two issues here.
>>>>>     The first is a bug fix that is in review:
>>>>>     - https://review.openstack.org/#/c/69209/ (this is where they
>>>>>     have the same configuration)
>>>>>     The second is WIP:
>>>>>     - https://review.openstack.org/#/c/69262/ (we need to restore)
>>>>>     Thanks
>>>>>     Gary
>>>>>
>>>>>     From: Jay Lau <jay.lau.513 at gmail.com>
>>>>>     Reply-To: "OpenStack Development Mailing List (not for usage
>>>>>     questions)" <openstack-dev at lists.openstack.org>
>>>>>     Date: Sunday, February 16, 2014 6:39 AM
>>>>>     To: OpenStack Development Mailing List
>>>>>     <openstack-dev at lists.openstack.org>
>>>>>     Subject: [openstack-dev] [Nova][VMWare] VMwareVCDriver related
>>>>>     to resize/cold migration
>>>>>
>>>>>     Hey,
>>>>>
>>>>>     I have one question related with OpenStack
>>>>>     vmwareapi.VMwareVCDriver resize/cold migration.
>>>>>
>>>>>     The following is my configuration:
>>>>>
>>>>>      DC
>>>>>         |
>>>>>         |----Cluster1
>>>>>         |          |
>>>>>         |          |----9.111.249.56
>>>>>         |
>>>>>         |----Cluster2
>>>>>                    |
>>>>>                    |----9.111.249.49
>>>>>
>>>>>     Scenario 1:
>>>>>     I started two nova computes to manage the two clusters:
>>>>>     1) nova-compute1.conf
>>>>>     cluster_name=Cluster1
>>>>>
>>>>>     2) nova-compute2.conf
>>>>>     cluster_name=Cluster2
>>>>>
>>>>>     3) Start up two nova computes on host1 and host2 separately
>>>>>     4) Create one VM instance; the VM instance was booted on the
>>>>>     Cluster2 node 9.111.249.49
>>>>>     | OS-EXT-SRV-ATTR:host                 | host2 |
>>>>>     | OS-EXT-SRV-ATTR:hypervisor_hostname  | domain-c16(Cluster2)
>>>>>                 |
>>>>>     5) Cold migrate the VM instance
>>>>>     6) After the migration finished, the VM goes to VERIFY_RESIZE
>>>>>     status, and "nova show" indicates that the VM is now located on
>>>>>     host1:Cluster1
>>>>>     | OS-EXT-SRV-ATTR:host                 | host1 |
>>>>>     | OS-EXT-SRV-ATTR:hypervisor_hostname  | domain-c12(Cluster1)
>>>>>                 |
>>>>>     7) But the vSphere client indicates that the VM is still
>>>>>     running on Cluster2
>>>>>     8) Trying to confirm the resize fails. The root
>>>>>     cause is that the nova compute on host2 has no knowledge of
>>>>>     domain-c12(Cluster1)
>>>>>
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp   File
>>>>>     "/usr/lib/python2.6/site-packages/nova/compute/manager.py",
>>>>>     line 2810, in do_confirm_resize
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp migration=migration)
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp   File
>>>>>     "/usr/lib/python2.6/site-packages/nova/compute/manager.py",
>>>>>     line 2836, in _confirm_resize
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp network_info)
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp   File
>>>>>     "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py",
>>>>>     line 420, in confirm_migration
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp     _vmops =
>>>>>     self._get_vmops_for_compute_node(instance['node'])
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp   File
>>>>>     "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py",
>>>>>     line 523, in _get_vmops_for_compute_node
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp     resource =
>>>>>     self._get_resource_for_node(nodename)
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp   File
>>>>>     "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py",
>>>>>     line 515, in _get_resource_for_node
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp     raise exception.NotFound(msg)
>>>>>     2014-02-16 07:10:17.166 12720 TRACE
>>>>>     nova.openstack.common.rpc.amqp NotFound: NV-3AB798A The
>>>>>     resource domain-c12(Cluster1) does not exist
>>>>>     2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
>>>>>
>>>>>
>>>>>     Scenario 2:
>>>>>
>>>>>     1) Started two nova computes to manage the two clusters, but the
>>>>>     two computes have the same nova conf.
>>>>>     1) nova-compute1.conf
>>>>>     cluster_name=Cluster1
>>>>>     cluster_name=Cluster2
>>>>>
>>>>>     2) nova-compute2.conf
>>>>>     cluster_name=Cluster1
>>>>>     cluster_name=Cluster2
>>>>>
>>>>>     3) Then create and resize/cold migrate a VM; it always
>>>>>     succeeds.
>>>>>
>>>>>
>>>>>     Questions:
>>>>>     For multi-cluster management, does VMware require that all nova
>>>>>     computes have the same cluster configuration to make sure
>>>>>     resize/cold migration can succeed?
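The failure in scenario 1 comes down to a per-compute map from nodename to cluster: each nova-compute only builds entries for the clusters in its own conf, so a confirm landing on the other host cannot find the node. A minimal sketch of that lookup, with invented names rather than the driver's actual code:

```python
class NotFound(Exception):
    pass

def get_vmops_for_node(node_map, nodename):
    """Stand-in for the driver's per-node lookup that raised in the trace."""
    if nodename not in node_map:
        raise NotFound("The resource %s does not exist" % nodename)
    return node_map[nodename]

# Scenario 1: each compute's map covers only its own cluster_name entries.
host1_nodes = {"domain-c12(Cluster1)": "Cluster1"}
host2_nodes = {"domain-c16(Cluster2)": "Cluster2"}

# host2 is asked to confirm a resize targeting a node only host1 manages:
try:
    get_vmops_for_node(host2_nodes, "domain-c12(Cluster1)")
except NotFound as exc:
    print(exc)  # The resource domain-c12(Cluster1) does not exist

# Scenario 2: identical conf on both hosts -> both maps cover both nodes.
shared = {**host1_nodes, **host2_nodes}
assert get_vmops_for_node(shared, "domain-c12(Cluster1)") == "Cluster1"
```

This matches the observed behavior: identical `cluster_name` lists on every compute make the lookup succeed everywhere, at the cost of every compute claiming every cluster.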
>>>>>
>>>>>     --
>>>>>     Thanks,
>>>>>
>>>>>     Jay
>>>>>     -------------- next part --------------
>>>>>     An HTML attachment was scrubbed...
>>>>>     URL:
>>>>>     <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/0b71a846/attachment-0001.html>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 2
>>>>>     Date: Sun, 16 Feb 2014 22:52:01 +0800
>>>>>     From: Dong Liu <willowd878 at gmail.com>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org>
>>>>>     Subject: [openstack-dev] [Neutron]Do you think tenant_id should be
>>>>>             verified
>>>>>     Message-ID: <26565D39-5372-48A5-8299-34DDE6C3394D at gmail.com>
>>>>>     Content-Type: text/plain; charset=us-ascii
>>>>>
>>>>>     Hi stackers:
>>>>>
>>>>>     I found that when creating networks, subnets, and other resources,
>>>>>     the attribute tenant_id
>>>>>     can be set by the admin tenant. But we do not verify whether the
>>>>>     tenant_id actually exists in keystone.
>>>>>
>>>>>     I know that we can use neutron without keystone, but do you
>>>>>     think tenant_id should
>>>>>     be verified when we use neutron with keystone?
>>>>>
>>>>>     thanks
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 3
>>>>>     Date: Sun, 16 Feb 2014 23:01:17 +0800
>>>>>     From: Jay Lau <jay.lau.513 at gmail.com>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org>
>>>>>     Subject: Re: [openstack-dev] [Nova][VMWare] VMwareVCDriver
>>>>>     related to
>>>>>             resize/cold migration
>>>>>     Message-ID:
>>>>>             <CAFyztAFqTUqTZzzW6BkH6-9_kye9ZGm8yhZe3hMUoW1xFfQM7A at mail.gmail.com>
>>>>>     Content-Type: text/plain; charset="iso-8859-1"
>>>>>
>>>>>     Thanks Gary, clear now. ;-)
>>>>>
>>>>>
>>>>>     2014-02-16 21:40 GMT+08:00 Gary Kotton <gkotton at vmware.com
>>>>>     <mailto:gkotton at vmware.com>>:
>>>>>
>>>>>
>>>>>     --
>>>>>     Thanks,
>>>>>
>>>>>     Jay
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 4
>>>>>     Date: Sun, 16 Feb 2014 10:27:41 -0500
>>>>>     From: Mohammad Banikazemi <mb at us.ibm.com>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org>
>>>>>     Subject: [openstack-dev] [neutron][policy] Using network
>>>>>     services with
>>>>>             network policies
>>>>>     Message-ID:
>>>>>             <OF456914EA.334156E1-ON85257C81.0051DB09-85257C81.0054EF2C at us.ibm.com>
>>>>>     Content-Type: text/plain; charset="us-ascii"
>>>>>
>>>>>
>>>>>     During the last IRC call we started talking about network
>>>>>     services and how
>>>>>     they can be integrated into the group Policy framework.
>>>>>
>>>>>     In particular, with the "redirect" action we need to think how
>>>>>     we can
>>>>>     specify the network services we want to redirect the traffic
>>>>>     to/from. There
>>>>>     has been a substantial work in the area of service chaining
>>>>>     and service
>>>>>     insertion and in the last summit "advanced service" in VMs
>>>>>     were discussed.
>>>>>     I think the first step for us is to find out the status of
>>>>>     those efforts
>>>>>     and then see how we can use them. Here are a few questions
>>>>>     that come to
>>>>>     mind.
>>>>>     1- What is the status of service chaining, service insertion
>>>>>     and advanced
>>>>>     services work?
>>>>>     2- How could we use a service chain? Would simply referring to
>>>>>     it in the
>>>>>     action be enough? Are there considerations wrt creating a
>>>>>     service chain
>>>>>     and/or a service VM for use with the Group Policy framework
>>>>>     that need to be
>>>>>     taken into account?
>>>>>
>>>>>     Let's start the discussion on the ML before taking it to the
>>>>>     next call.
>>>>>
>>>>>     Thanks,
>>>>>
>>>>>     Mohammad
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 5
>>>>>     Date: Sun, 16 Feb 2014 23:29:49 +0800
>>>>>     From: Jay Lau <jay.lau.513 at gmail.com>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org>
>>>>>     Subject: Re: [openstack-dev] [Nova][VMWare] VMwareVCDriver
>>>>>     related to
>>>>>             resize/cold migration
>>>>>     Message-ID:
>>>>>             <CAFyztAFCc1NH5nz00Dii3dhL3AN8RjPLb3D65aFMRGfyQiJGKA at mail.gmail.com>
>>>>>     Content-Type: text/plain; charset="iso-8859-1"
>>>>>
>>>>>     Hi Gary,
>>>>>
>>>>>     One more question, when using VCDriver, I can use it in the
>>>>>     following two
>>>>>     ways:
>>>>>     1) start up many nova computes where those nova computes manage
>>>>>     the same vcenter clusters.
>>>>>     2) start up many nova computes where those nova computes manage
>>>>>     different vcenter clusters.
>>>>>
>>>>>     Do we have a best practice for the above two scenarios, or can you
>>>>>     please provide some best practices for the VCDriver? I did not get
>>>>>     much info from the admin guide.
>>>>>
>>>>>     Thanks,
>>>>>
>>>>>     Jay
>>>>>
>>>>>
>>>>>     2014-02-16 23:01 GMT+08:00 Jay Lau <jay.lau.513 at gmail.com>:
>>>>>
>>>>>     > Thanks Gary, clear now. ;-)
>>>>>     >
>>>>>     >
>>>>>     > 2014-02-16 21:40 GMT+08:00 Gary Kotton <gkotton at vmware.com
>>>>>     <mailto:gkotton at vmware.com>>:
>>>>>     >
>>>>>     >> Hi,
>>>>>     >> There are two issues here.
>>>>>     >> The first is a bug fix that is in review:
>>>>>     >> - https://review.openstack.org/#/c/69209/ (this is where
>>>>>     they have the
>>>>>     >> same configuration)
>>>>>     >> The second is WIP:
>>>>>     >> - https://review.openstack.org/#/c/69262/ (we need to restore)
>>>>>     >> Thanks
>>>>>     >> Gary
>>>>>     >>
>>>>>     >> From: Jay Lau <jay.lau.513 at gmail.com
>>>>>     <mailto:jay.lau.513 at gmail.com>>
>>>>>     >> Reply-To: "OpenStack Development Mailing List (not for
>>>>>     usage questions)"
>>>>>     >> <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     >> Date: Sunday, February 16, 2014 6:39 AM
>>>>>     >> To: OpenStack Development Mailing List
>>>>>     <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>
>>>>>     >> >
>>>>>     >> Subject: [openstack-dev] [Nova][VMWare] VMwareVCDriver
>>>>>     related to
>>>>>     >> resize/cold migration
>>>>>     >>
>>>>>     >> Hey,
>>>>>     >>
>>>>>     >> I have one question related with OpenStack
>>>>>     vmwareapi.VMwareVCDriver
>>>>>     >> resize/cold migration.
>>>>>     >>
>>>>>     >> The following is my configuration:
>>>>>     >>
>>>>>     >>  DC
>>>>>     >>     |
>>>>>     >>     |----Cluster1
>>>>>     >>     |          |
>>>>>     >>     |          |----9.111.249.56
>>>>>     >>     |
>>>>>     >>     |----Cluster2
>>>>>     >>                |
>>>>>     >>                |----9.111.249.49
>>>>>     >>
>>>>>     >> *Scenario 1:*
>>>>>     >> I started two nova computes manage the two clusters:
>>>>>     >> 1) nova-compute1.conf
>>>>>     >> cluster_name=Cluster1
>>>>>     >>
>>>>>     >> 2) nova-compute2.conf
>>>>>     >> cluster_name=Cluster2
>>>>>     >>
>>>>>     >> 3) Start up two nova computes on host1 and host2 separately
>>>>>     >> 4) Create one VM instance and the VM instance was booted on
>>>>>     Cluster2
>>>>>     >> node  9.111.249.49
>>>>>     >> | OS-EXT-SRV-ATTR:host     | host2 |
>>>>>     >> | OS-EXT-SRV-ATTR:hypervisor_hostname  |
>>>>>     >> domain-c16(Cluster2)                       |
>>>>>     >> 5) Cold migrate the VM instance
>>>>>     >> 6) After the migration finished, the VM goes to VERIFY_RESIZE
>>>>>     status, and
>>>>>     >> "nova show" indicates that the VM is now located on host1:Cluster1
>>>>>     >> | OS-EXT-SRV-ATTR:host     | host1 |
>>>>>     >> | OS-EXT-SRV-ATTR:hypervisor_hostname  |
>>>>>     >> domain-c12(Cluster1)                       |
>>>>>     >> 7) But the vSphere client indicates that the VM was
>>>>>     still running on
>>>>>     >> Cluster2
>>>>>     >> 8) Trying to confirm the resize fails. The
>>>>>     root cause is
>>>>>     >> that the nova compute on host2 has no knowledge of
>>>>>     domain-c12(Cluster1)
>>>>>     >>
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2810, in do_confirm_resize
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp     migration=migration)
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2836, in _confirm_resize
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp     network_info)
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 420, in confirm_migration
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp     _vmops = self._get_vmops_for_compute_node(instance['node'])
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 523, in _get_vmops_for_compute_node
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp     resource = self._get_resource_for_node(nodename)
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 515, in _get_resource_for_node
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp     raise exception.NotFound(msg)
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp NotFound: NV-3AB798A The resource domain-c12(Cluster1) does not exist
>>>>>     >> 2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
>>>>>     >>
>>>>>     >>
>>>>>     >> *Scenario 2:*
>>>>>     >>
>>>>>     >> 1) Start two nova computes managing the two clusters, but
>>>>>     give both
>>>>>     >> computes the same cluster configuration:
>>>>>     >> nova-compute1.conf
>>>>>     >> cluster_name=Cluster1
>>>>>     >> cluster_name=Cluster2
>>>>>     >>
>>>>>     >> nova-compute2.conf
>>>>>     >> cluster_name=Cluster1
>>>>>     >> cluster_name=Cluster2
>>>>>     >>
>>>>>     >> 2) Then create and resize/cold migrate a VM; it always
>>>>>     succeeds.
>>>>>     >>
>>>>>     >>
>>>>>     >> *Questions:*
>>>>>     >> For multi-cluster management, does the VMware driver
>>>>>     require all nova
>>>>>     >> computes to have the same cluster configuration so that
>>>>>     resize/cold migration can succeed?
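The two scenarios above can be summarized with a small sketch. This is an illustrative model only, not the actual Nova VMwareVCDriver code: it assumes each compute builds a nodename-to-vmops map from the `cluster_name` entries in its own nova.conf, so confirming a migration for a node it was never configured for fails, which matches the NotFound traceback in Scenario 1. The class and names here are hypothetical.

```python
# Illustrative sketch (NOT the real VMwareVCDriver): each compute only
# knows the clusters listed in its own nova.conf, so confirm_migration
# fails for a node managed by a differently-configured compute.

class VCDriverSketch:
    def __init__(self, configured_clusters):
        # nodename -> per-cluster ops handle, built solely from this
        # compute's cluster_name configuration entries
        self._vmops_by_node = {name: f"vmops:{name}"
                               for name in configured_clusters}

    def confirm_migration(self, node):
        # mirrors _get_resource_for_node raising NotFound for an
        # unknown nodename
        if node not in self._vmops_by_node:
            raise LookupError(f"The resource {node} does not exist")
        return self._vmops_by_node[node]

# Scenario 1: host2 only manages Cluster2, so confirming a resize that
# targeted Cluster1 raises, as in the traceback above.
host2 = VCDriverSketch(["domain-c16(Cluster2)"])
try:
    host2.confirm_migration("domain-c12(Cluster1)")
except LookupError as exc:
    print(exc)

# Scenario 2: every compute lists both clusters, so the lookup succeeds.
host2_both = VCDriverSketch(["domain-c12(Cluster1)", "domain-c16(Cluster2)"])
print(host2_both.confirm_migration("domain-c12(Cluster1)"))
```

Under this model, Scenario 2 works only because identical cluster lists make every compute able to resolve every node name, which is exactly the question posed above.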
>>>>>     >>
>>>>>     >> --
>>>>>     >> Thanks,
>>>>>     >>
>>>>>     >> Jay
>>>>>     >>
>>>>>     >> _______________________________________________
>>>>>     >> OpenStack-dev mailing list
>>>>>     >> OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     >>
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>     >>
>>>>>     >>
>>>>>     >
>>>>>     >
>>>>>     > --
>>>>>     > Thanks,
>>>>>     >
>>>>>     > Jay
>>>>>     >
>>>>>
>>>>>
>>>>>
>>>>>     --
>>>>>     Thanks,
>>>>>
>>>>>     Jay
>>>>>     -------------- next part --------------
>>>>>     An HTML attachment was scrubbed...
>>>>>     URL:
>>>>>     <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/e7da9e73/attachment-0001.html>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 6
>>>>>     Date: Sun, 16 Feb 2014 08:01:14 -0800
>>>>>     From: Harshad Nakil <hnakil at contrailsystems.com
>>>>>     <mailto:hnakil at contrailsystems.com>>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] [keystone] role of Domain in VPC
>>>>>             definition
>>>>>     Message-ID: <-4426752061342328447 at unknownmsgid>
>>>>>     Content-Type: text/plain; charset="iso-8859-1"
>>>>>
>>>>>     Yes, [1] can be done without [2] and [3].
>>>>>     As you are well aware [2] is now merged with group policy
>>>>>     discussions.
>>>>>     IMHO an all-or-nothing approach will not get us anywhere.
>>>>>     By the time we line up all our ducks in a row, new
>>>>>     features/ideas/blueprints
>>>>>     will keep emerging.
>>>>>
>>>>>     Regards
>>>>>     -Harshad
>>>>>
>>>>>
>>>>>     On Feb 16, 2014, at 2:30 AM, Salvatore Orlando
>>>>>     <sorlando at nicira.com <mailto:sorlando at nicira.com>> wrote:
>>>>>
>>>>>     It seems this work item is made of several blueprints, some of
>>>>>     which are
>>>>>     not yet approved. This is true at least for the Neutron
>>>>>     blueprint regarding
>>>>>     policy extensions.
>>>>>
>>>>>     Since I first looked at this spec I've been wondering why nova
>>>>>     has been
>>>>>     selected as an endpoint for network operations rather than
>>>>>     Neutron, but
>>>>>     this is probably a design/implementation detail, whereas JC here
>>>>>     is looking at
>>>>>     the general approach.
>>>>>
>>>>>     Nevertheless, my only point here is that it seems that
>>>>>     features like this
>>>>>     need an "all-or-none" approval.
>>>>>     For instance, could the VPC feature be considered functional
>>>>>     if blueprint
>>>>>     [1] is implemented, but not [2] and [3]?
>>>>>
>>>>>     Salvatore
>>>>>
>>>>>     [1] https://blueprints.launchpad.net/nova/+spec/aws-vpc-support
>>>>>     [2]
>>>>>     https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
>>>>>     [3]
>>>>>     https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy
>>>>>
>>>>>
>>>>>     On 11 February 2014 21:45, Martin, JC <jch.martin at gmail.com
>>>>>     <mailto:jch.martin at gmail.com>> wrote:
>>>>>
>>>>>     > Ravi,
>>>>>     >
>>>>>     > It seems that the following Blueprint
>>>>>     > https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support
>>>>>     >
>>>>>     > has been approved.
>>>>>     >
>>>>>     > However, I cannot find a discussion with regard to the merit
>>>>>     of using
>>>>>     > project vs. domain, or other mechanism for the implementation.
>>>>>     >
>>>>>     > I have an issue with this approach as it prevents tenants
>>>>>     within the same
>>>>>     > domain sharing the same VPC from having projects.
>>>>>     >
>>>>>     > As an example, if you are a large organization on AWS, it is
>>>>>     likely that
>>>>>     > you have a large VPC that will be shared by multiple
>>>>>     projects. With this
>>>>>     > proposal, we lose that capability, unless I missed something.
>>>>>     >
>>>>>     > JC
>>>>>     >
>>>>>     > On Dec 19, 2013, at 6:10 PM, Ravi Chunduru
>>>>>     <ravivsn at gmail.com <mailto:ravivsn at gmail.com>> wrote:
>>>>>     >
>>>>>     > > Hi,
>>>>>     > >   We had some internal discussions on role of Domain and
>>>>>     VPCs. I would
>>>>>     > like to expand and understand community thinking of Keystone
>>>>>     domain and
>>>>>     > VPCs.
>>>>>     > >
>>>>>     > > Is VPC equivalent to Keystone Domain?
>>>>>     > >
>>>>>     > > If so, as a public cloud provider - I create a Keystone
>>>>>     domain and give
>>>>>     > it to an organization which wants a virtual private cloud.
>>>>>     > >
>>>>>     > > Now the question is: if that organization wants
>>>>>     department-wise
>>>>>     > allocation of resources, it becomes difficult to
>>>>>     visualize with existing
>>>>>     > v3 keystone constructs.
>>>>>     > >
>>>>>     > > Currently, it looks like each department of an
>>>>>     organization cannot have
>>>>>     > their own resource management within the organization VPC (
>>>>>     LDAP based
>>>>>     > user management, network management or dedicating computes
>>>>>     etc.,) For us,
>>>>>     > Openstack Project does not match the requirements of a
>>>>>     department of an
>>>>>     > organization.
>>>>>     > >
>>>>>     > > I hope you guessed what we wanted - Domain must have VPCs
>>>>>     and VPC to
>>>>>     > have projects.
>>>>>     > >
>>>>>     > > I would like to know how the community sees the VPC model in
>>>>>     OpenStack.
>>>>>     > >
>>>>>     > > Thanks,
>>>>>     > > -Ravi.
>>>>>     > >
>>>>>     > >
>>>>>     > > _______________________________________________
>>>>>     > > OpenStack-dev mailing list
>>>>>     > > OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     > >
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>     >
>>>>>     >
>>>>>     > _______________________________________________
>>>>>     > OpenStack-dev mailing list
>>>>>     > OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     >
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>     >
>>>>>
>>>>>     _______________________________________________
>>>>>     OpenStack-dev mailing list
>>>>>     OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>     -------------- next part --------------
>>>>>     An HTML attachment was scrubbed...
>>>>>     URL:
>>>>>     <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/9258cf27/attachment-0001.html>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 7
>>>>>     Date: Sun, 16 Feb 2014 08:47:19 -0800
>>>>>     From: Harshad Nakil <hnakil at contrailsystems.com
>>>>>     <mailto:hnakil at contrailsystems.com>>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] VPC Proposal
>>>>>     Message-ID:
>>>>>            
>>>>>     <CAL7PBMchfaSkX8amUAEe8X_fs9OM6ZLGJx_fNB2SUCJWPaGNFA at mail.gmail.com
>>>>>     <mailto:CAL7PBMchfaSkX8amUAEe8X_fs9OM6ZLGJx_fNB2SUCJWPaGNFA at mail.gmail.com>>
>>>>>     Content-Type: text/plain; charset="iso-8859-1"
>>>>>
>>>>>     Comments Inline
>>>>>
>>>>>     Regards
>>>>>     -Harshad
>>>>>
>>>>>
>>>>>     On Sat, Feb 15, 2014 at 11:39 PM, Allamaraju, Subbu
>>>>>     <subbu at subbu.org <mailto:subbu at subbu.org>> wrote:
>>>>>
>>>>>     > Harshad,
>>>>>     >
>>>>>     > Curious to know if there is a broad interest in an AWS
>>>>>     compatible API in
>>>>>     > the community?
>>>>>
>>>>>
>>>>>     We started looking at this as some of our customers/partners were
>>>>>     interested
>>>>>     in getting AWS API compatibility. We have had this blueprint and code
>>>>>     review
>>>>>     pending for a long time now. We will know based on this thread
>>>>>     whether the
>>>>>     community is interested. But I assumed that the community was
>>>>>     interested, as the
>>>>>     blueprint was approved and the code review has had no -1(s) for a long
>>>>>     time now.
>>>>>
>>>>>
>>>>>     > To clarify: the incremental path from an AWS-compatible
>>>>>     API to an
>>>>>     > OpenStack model is not clear.
>>>>>     >
>>>>>
>>>>>     In my mind an AWS-compatible API does not need a new OpenStack
>>>>>     model. As more
>>>>>     discussion happens on JC's proposal and the implementation becomes
>>>>>     clearer, we will
>>>>>     know how incremental the path is. But at a high level there are two
>>>>>     major
>>>>>     differences:
>>>>>     1. A new first-class object will be introduced which affects all
>>>>>     components.
>>>>>     2. More than one project can be supported within a VPC.
>>>>>     But it does not change the AWS API(s). So even in JC's model, if
>>>>>     you want the AWS
>>>>>     API then we will have to keep the VPC-to-project mapping 1:1,
>>>>>     since the API
>>>>>     will not take both a VPC ID and a project ID.
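The 1:1 constraint described above can be made concrete with a small sketch. This is a hypothetical illustration, not code from any blueprint: it assumes an AWS-style request that carries only a `VpcId` parameter, so a compatibility layer must resolve the backing OpenStack project from the VPC alone, which is why a VPC cannot span multiple projects without changing the API. All names and IDs are made up.

```python
# Hypothetical sketch of the constraint: the AWS-style request shape
# has no project id field, so each VPC can be backed by at most one
# OpenStack project. IDs below are illustrative, not real.

vpc_to_project = {
    "vpc-0a1b2c": "de2a7135b01344cd82a02117c005ce47",
}

def describe_vpc(vpc_id):
    # Only vpc_id arrives on the wire; the project must be derived
    # from it, forcing the 1:1 VPC-to-project mapping.
    project_id = vpc_to_project[vpc_id]
    return {"VpcId": vpc_id, "backing_project": project_id}

print(describe_vpc("vpc-0a1b2c")["backing_project"])
```

Supporting multiple projects per VPC would require either extending this request shape with a project identifier or inventing a routing rule, either of which goes beyond plain AWS API compatibility.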
>>>>>
>>>>>     Users who want to migrate from AWS, and IaaS providers who
>>>>>     want to compete
>>>>>     with AWS, should be interested in this compatibility.
>>>>>
>>>>>     There also seems to be a terminology issue here: what is the
>>>>>     definition of "VPC"?
>>>>>     If we assume what AWS implements is "VPC",
>>>>>     then what JC is proposing is a "VOS" or "VDC" (virtual OpenStack or
>>>>>     virtual DC),
>>>>>     as all or most of the current OpenStack features are available to the
>>>>>     user in this
>>>>>     new abstraction. I actually like this new abstraction.
>>>>>
>>>>>
>>>>>     > Subbu
>>>>>     >
>>>>>     > On Feb 15, 2014, at 10:04 PM, Harshad Nakil
>>>>>     <hnakil at contrailsystems.com <mailto:hnakil at contrailsystems.com>>
>>>>>     > wrote:
>>>>>     >
>>>>>     > >
>>>>>     > > I agree with problem as defined by you and will require
>>>>>     more fundamental
>>>>>     > changes.
>>>>>     > > Meanwhile many users will benefit from AWS VPC api
>>>>>     compatibility.
>>>>>     >
>>>>>     >
>>>>>     > _______________________________________________
>>>>>     > OpenStack-dev mailing list
>>>>>     > OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     >
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>     >
>>>>>     -------------- next part --------------
>>>>>     An HTML attachment was scrubbed...
>>>>>     URL:
>>>>>     <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/5f655f01/attachment-0001.html>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 8
>>>>>     Date: Sun, 16 Feb 2014 09:04:36 -0800
>>>>>     From: "Allamaraju, Subbu" <subbu at subbu.org
>>>>>     <mailto:subbu at subbu.org>>
>>>>>     To: Harshad Nakil <hnakil at contrailsystems.com
>>>>>     <mailto:hnakil at contrailsystems.com>>
>>>>>     Cc: "OpenStack Development Mailing List \(not for usage
>>>>>     questions\)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] VPC Proposal
>>>>>     Message-ID: <641D4BA6-DFB2-4D3E-8D67-48F711ADC1B5 at subbu.org
>>>>>     <mailto:641D4BA6-DFB2-4D3E-8D67-48F711ADC1B5 at subbu.org>>
>>>>>     Content-Type: text/plain; charset=iso-8859-1
>>>>>
>>>>>     Harshad,
>>>>>
>>>>>     Thanks for clarifying.
>>>>>
>>>>>     > We started looking at this as some of our customers/partners
>>>>>     were interested in getting AWS API compatibility. We have had this
>>>>>     blueprint and code review pending for a long time now. We will
>>>>>     know based on this thread whether the community is interested.
>>>>>     But I assumed that the community was interested as the blueprint
>>>>>     was approved and the code review has had no -1(s) for a long time now.
>>>>>
>>>>>     Makes sense. I would leave it to others on this list to chime
>>>>>     in if there is sufficient interest or not.
>>>>>
>>>>>     > To clarify: the incremental path from an AWS-compatible
>>>>>     API to an OpenStack model is not clear.
>>>>>     >
>>>>>     > In my mind an AWS-compatible API does not need a new OpenStack
>>>>>     model. As more discussion happens on JC's proposal and the
>>>>>     implementation becomes clearer, we will know how incremental
>>>>>     the path is. But at a high level there are two major differences:
>>>>>     > 1. A new first-class object will be introduced which affects
>>>>>     all components.
>>>>>     > 2. More than one project can be supported within a VPC.
>>>>>     > But it does not change the AWS API(s). So even in JC's model, if
>>>>>     you want the AWS API then we will have to keep the VPC-to-project
>>>>>     mapping 1:1, since the API will not take both a VPC ID and a
>>>>>     project ID.
>>>>>     >
>>>>>     > Users who want to migrate from AWS, and IaaS providers who
>>>>>     want to compete with AWS, should be interested in this compatibility.
>>>>>
>>>>>     IMHO that's a tough sell. Though an AWS compatible API does
>>>>>     not need an OpenStack abstraction, we would end up with two
>>>>>     independent ways of doing similar things. That would be OpenStack
>>>>>     repeating itself!
>>>>>
>>>>>     Subbu
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 9
>>>>>     Date: Sun, 16 Feb 2014 09:12:54 -0800
>>>>>     From: Harshad Nakil <hnakil at contrailsystems.com
>>>>>     <mailto:hnakil at contrailsystems.com>>
>>>>>     To: "Allamaraju, Subbu" <subbu at subbu.org <mailto:subbu at subbu.org>>
>>>>>     Cc: "OpenStack Development Mailing List \(not for usage
>>>>>     questions\)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] VPC Proposal
>>>>>     Message-ID: <516707826958554641 at unknownmsgid>
>>>>>     Content-Type: text/plain; charset=ISO-8859-1
>>>>>
>>>>>     IMHO I don't see two implementations, since right now we have only
>>>>>     one. As a community, if we decide to add new abstractions then
>>>>>     we will
>>>>>     have to change software in every component where the new
>>>>>     abstraction
>>>>>     makes a difference. That's the normal software development process.
>>>>>     Regards
>>>>>     -Harshad
>>>>>
>>>>>
>>>>>     > On Feb 16, 2014, at 9:03 AM, "Allamaraju, Subbu"
>>>>>     <subbu at subbu.org <mailto:subbu at subbu.org>> wrote:
>>>>>     >
>>>>>     > Harshad,
>>>>>     >
>>>>>     > Thanks for clarifying.
>>>>>     >
>>>>>     >> We started looking at this as some of our customers/partners
>>>>>     were interested in getting AWS API compatibility. We have had this
>>>>>     blueprint and code review pending for a long time now. We will
>>>>>     know based on this thread whether the community is interested.
>>>>>     But I assumed that the community was interested as the blueprint
>>>>>     was approved and the code review has had no -1(s) for a long time now.
>>>>>     >
>>>>>     > Makes sense. I would leave it to others on this list to
>>>>>     chime in if there is sufficient interest or not.
>>>>>     >
>>>>>     >> To clarify: the incremental path from an AWS-compatible
>>>>>     API to an OpenStack model is not clear.
>>>>>     >>
>>>>>     >> In my mind an AWS-compatible API does not need a new OpenStack
>>>>>     model. As more discussion happens on JC's proposal and the
>>>>>     implementation becomes clearer, we will know how incremental
>>>>>     the path is. But at a high level there are two major differences:
>>>>>     >> 1. A new first-class object will be introduced which affects
>>>>>     all components.
>>>>>     >> 2. More than one project can be supported within a VPC.
>>>>>     >> But it does not change the AWS API(s). So even in JC's model,
>>>>>     if you want the AWS API then we will have to keep the VPC-to-project
>>>>>     mapping 1:1, since the API will not take both a VPC ID and a
>>>>>     project ID.
>>>>>     >>
>>>>>     >> Users who want to migrate from AWS, and IaaS providers
>>>>>     who want to compete with AWS, should be interested in this
>>>>>     compatibility.
>>>>>     >
>>>>>     > IMHO that's a tough sell. Though an AWS compatible API does
>>>>>     not need an OpenStack abstraction, we would end up with two
>>>>>     independent ways of doing similar things. That would be OpenStack
>>>>>     repeating itself!
>>>>>     >
>>>>>     > Subbu
>>>>>     >
>>>>>     >
>>>>>
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 10
>>>>>     Date: Sun, 16 Feb 2014 09:25:02 -0800
>>>>>     From: "Martin, JC" <jch.martin at gmail.com
>>>>>     <mailto:jch.martin at gmail.com>>
>>>>>     To: "OpenStack Development Mailing List \(not for usage
>>>>>     questions\)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] VPC Proposal
>>>>>     Message-ID: <B1A58385-DC10-48EF-AA8E-90176F576A40 at gmail.com
>>>>>     <mailto:B1A58385-DC10-48EF-AA8E-90176F576A40 at gmail.com>>
>>>>>     Content-Type: text/plain; charset=us-ascii
>>>>>
>>>>>     Harshad,
>>>>>
>>>>>     I tried to find some discussion around this blueprint.
>>>>>     Could you provide us with some notes or threads?
>>>>>
>>>>>     Also, about the code reviews you mention: which one are you
>>>>>     talking about?
>>>>>     https://review.openstack.org/#/c/40071/
>>>>>     https://review.openstack.org/#/c/49470/
>>>>>     https://review.openstack.org/#/c/53171
>>>>>
>>>>>     because they are all abandoned.
>>>>>
>>>>>     Could you point me to the code, and update the BP, because it
>>>>>     seems that the links are not correct.
>>>>>
>>>>>     Thanks,
>>>>>
>>>>>     JC
>>>>>     On Feb 16, 2014, at 9:04 AM, "Allamaraju, Subbu"
>>>>>     <subbu at subbu.org <mailto:subbu at subbu.org>> wrote:
>>>>>
>>>>>     > Harshad,
>>>>>     >
>>>>>     > Thanks for clarifying.
>>>>>     >
>>>>>     >> We started looking at this as some of our customers/partners
>>>>>     were interested in getting AWS API compatibility. We have had this
>>>>>     blueprint and code review pending for a long time now. We will
>>>>>     know based on this thread whether the community is interested.
>>>>>     But I assumed that the community was interested as the blueprint
>>>>>     was approved and the code review has had no -1(s) for a long time now.
>>>>>     >
>>>>>     > Makes sense. I would leave it to others on this list to
>>>>>     chime in if there is sufficient interest or not.
>>>>>     >
>>>>>     >> To clarify: the incremental path from an AWS-compatible
>>>>>     API to an OpenStack model is not clear.
>>>>>     >>
>>>>>     >> In my mind an AWS-compatible API does not need a new OpenStack
>>>>>     model. As more discussion happens on JC's proposal and the
>>>>>     implementation becomes clearer, we will know how incremental
>>>>>     the path is. But at a high level there are two major differences:
>>>>>     >> 1. A new first-class object will be introduced which affects
>>>>>     all components.
>>>>>     >> 2. More than one project can be supported within a VPC.
>>>>>     >> But it does not change the AWS API(s). So even in JC's model,
>>>>>     if you want the AWS API then we will have to keep the VPC-to-project
>>>>>     mapping 1:1, since the API will not take both a VPC ID and a
>>>>>     project ID.
>>>>>     >>
>>>>>     >> Users who want to migrate from AWS, and IaaS providers
>>>>>     who want to compete with AWS, should be interested in this
>>>>>     compatibility.
>>>>>     >
>>>>>     > IMHO that's a tough sell. Though an AWS compatible API does
>>>>>     not need an OpenStack abstraction, we would end up with two
>>>>>     independent ways of doing similar things. That would be OpenStack
>>>>>     repeating itself!
>>>>>     >
>>>>>     > Subbu
>>>>>     >
>>>>>     >
>>>>>     >
>>>>>     > _______________________________________________
>>>>>     > OpenStack-dev mailing list
>>>>>     > OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     >
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 11
>>>>>     Date: Sun, 16 Feb 2014 09:49:17 -0800
>>>>>     From: "Allamaraju, Subbu" <subbu at subbu.org
>>>>>     <mailto:subbu at subbu.org>>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] [keystone] role of Domain in VPC
>>>>>             definition
>>>>>     Message-ID: <1756EFC4-ABAF-4377-B44A-219F34C3ABFA at subbu.org
>>>>>     <mailto:1756EFC4-ABAF-4377-B44A-219F34C3ABFA at subbu.org>>
>>>>>     Content-Type: text/plain; charset=iso-8859-1
>>>>>
>>>>>     Harshad,
>>>>>
>>>>>     But the key question that Ravi brought up remains though. A
>>>>>     project is a very small administrative container to manage
>>>>>     policies and resources for VPCs. We've been experimenting with
>>>>>     VPCs on OpenStack (with some mods) at work for nearly a year,
>>>>>     and came across cases where hundreds/thousands of apps in an
>>>>>     equal number of projects needed to share resources and
>>>>>     policies, and the project-to-VPC mapping did not cut it.
>>>>>
>>>>>     I was wondering if there was prior discussion around the
>>>>>     mapping of AWS VPC model to OpenStack concepts like projects
>>>>>     and domains. Thanks for any pointers.
>>>>>
>>>>>     Subbu
>>>>>
>>>>>     On Feb 16, 2014, at 8:01 AM, Harshad Nakil
>>>>>     <hnakil at contrailsystems.com
>>>>>     <mailto:hnakil at contrailsystems.com>> wrote:
>>>>>
>>>>>     > Yes, [1] can be done without [2] and [3].
>>>>>     > As you are well aware [2] is now merged with group policy
>>>>>     discussions.
>>>>>     > IMHO an all-or-nothing approach will not get us anywhere.
>>>>>     > By the time we line up all our ducks in a row, new
>>>>>     features/ideas/blueprints will keep emerging.
>>>>>     >
>>>>>     > Regards
>>>>>     > -Harshad
>>>>>     >
>>>>>     >
>>>>>     > On Feb 16, 2014, at 2:30 AM, Salvatore Orlando
>>>>>     <sorlando at nicira.com <mailto:sorlando at nicira.com>> wrote:
>>>>>     >
>>>>>     >> It seems this work item is made of several blueprints, some
>>>>>     of which are not yet approved. This is true at least for the
>>>>>     Neutron blueprint regarding policy extensions.
>>>>>     >>
>>>>>     >> Since I first looked at this spec I've been wondering why
>>>>>     nova has been selected as an endpoint for network operations
>>>>>     rather than Neutron, but this is probably a design/implementation
>>>>>     detail, whereas JC here is looking at the general approach.
>>>>>     >>
>>>>>     >> Nevertheless, my only point here is that it seems that
>>>>>     features like this need an "all-or-none" approval.
>>>>>     >> For instance, could the VPC feature be considered
>>>>>     functional if blueprint [1] is implemented, but not [2] and [3]?
>>>>>     >>
>>>>>     >> Salvatore
>>>>>     >>
>>>>>     >> [1] https://blueprints.launchpad.net/nova/+spec/aws-vpc-support
>>>>>     >> [2]
>>>>>     https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
>>>>>     >> [3]
>>>>>     https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy
>>>>>     >>
>>>>>     >>
>>>>>     >> On 11 February 2014 21:45, Martin, JC <jch.martin at gmail.com
>>>>>     <mailto:jch.martin at gmail.com>> wrote:
>>>>>     >> Ravi,
>>>>>     >>
>>>>>     >> It seems that the following Blueprint
>>>>>     >> https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support
>>>>>     >>
>>>>>     >> has been approved.
>>>>>     >>
>>>>>     >> However, I cannot find a discussion with regard to the
>>>>>     merit of using project vs. domain, or other mechanism for the
>>>>>     implementation.
>>>>>     >>
>>>>>     >> I have an issue with this approach as it prevents tenants
>>>>>     within the same domain sharing the same VPC from having projects.
>>>>>     >>
>>>>>     >> As an example, if you are a large organization on AWS, it
>>>>>     is likely that you have a large VPC that will be shared by
>>>>>     multiple projects. With this proposal, we lose that
>>>>>     capability, unless I missed something.
>>>>>     >>
>>>>>     >> JC
>>>>>     >>
>>>>>     >> On Dec 19, 2013, at 6:10 PM, Ravi Chunduru
>>>>>     <ravivsn at gmail.com <mailto:ravivsn at gmail.com>> wrote:
>>>>>     >>
>>>>>     >> > Hi,
>>>>>     >> >   We had some internal discussions on role of Domain and
>>>>>     VPCs. I would like to expand and understand community thinking
>>>>>     of Keystone domain and VPCs.
>>>>>     >> >
>>>>>     >> > Is VPC equivalent to Keystone Domain?
>>>>>     >> >
>>>>>     >> > If so, as a public cloud provider - I create a Keystone
>>>>>     domain and give it to an organization which wants a virtual
>>>>>     private cloud.
>>>>>     >> >
>>>>>     >> > Now the question is: if that organization wants
>>>>>      department-wise allocation of resources, it becomes
>>>>>     difficult to visualize with existing v3 keystone constructs.
>>>>>     >> >
>>>>>     >> > Currently, it looks like each department of an
>>>>>     organization cannot have their own resource management within
>>>>>     the organization VPC ( LDAP based user management, network
>>>>>     management or dedicating computes etc.,) For us, Openstack
>>>>>     Project does not match the requirements of a department of an
>>>>>     organization.
>>>>>     >> >
>>>>>     >> > I hope you guessed what we wanted - Domain must have VPCs
>>>>>     and VPC to have projects.
>>>>>     >> >
>>>>>     >> > I would like to know how the community sees the VPC model in
>>>>>     OpenStack.
>>>>>     >> >
>>>>>     >> > Thanks,
>>>>>     >> > -Ravi.
>>>>>     >> >
>>>>>     >> >
>>>>>     >> > _______________________________________________
>>>>>     >> > OpenStack-dev mailing list
>>>>>     >> > OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     >> >
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>     >>
>>>>>     >>
>>>>>     >> _______________________________________________
>>>>>     >> OpenStack-dev mailing list
>>>>>     >> OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     >>
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>     >>
>>>>>     >> _______________________________________________
>>>>>     >> OpenStack-dev mailing list
>>>>>     >> OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     >>
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>     > _______________________________________________
>>>>>     > OpenStack-dev mailing list
>>>>>     > OpenStack-dev at lists.openstack.org
>>>>>     <mailto:OpenStack-dev at lists.openstack.org>
>>>>>     >
>>>>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 12
>>>>>     Date: Sun, 16 Feb 2014 10:15:11 -0800
>>>>>     From: Harshad Nakil <hnakil at contrailsystems.com
>>>>>     <mailto:hnakil at contrailsystems.com>>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] [keystone] role of Domain in VPC
>>>>>             definition
>>>>>     Message-ID: <4920517322402852354 at unknownmsgid>
>>>>>     Content-Type: text/plain; charset=ISO-8859-1
>>>>>
>>>>>     As I said, I am not disagreeing with you or Ravi or JC. I also
>>>>>     agree that the OpenStack VPC implementation will benefit from
>>>>>     these proposals. What I am saying is that AWS VPC API
>>>>>     compatibility is not required at this point, which is what our
>>>>>     blueprint is all about. We are not defining THE "VPC".
>>>>>     Let me ask you: what changes in the AWS API when you go to the
>>>>>     other model? The argument is that you want multiple projects in
>>>>>     a VPC. That's great, but I don't understand how I would specify
>>>>>     that if my code was written to use the AWS API. As for the
>>>>>     argument that you want multiple external networks per VPC, I
>>>>>     don't know how to specify that using the AWS API either. And so
>>>>>     the list goes on.
>>>>>
>>>>>     Maybe I am missing something. If you don't want AWS
>>>>>     compatibility, then that's a different issue altogether and
>>>>>     should be discussed as such.
>>>>>
>>>>>     Regards
>>>>>     -Harshad
>>>>>
>>>>>
>>>>>     > On Feb 16, 2014, at 9:51 AM, "Allamaraju, Subbu"
>>>>>     <subbu at subbu.org <mailto:subbu at subbu.org>> wrote:
>>>>>     >
>>>>>     > Harshad,
>>>>>     >
>>>>>     > But the key question that Ravi brought up remains though. A
>>>>>     project is a very small administrative container for managing
>>>>>     policies and resources for VPCs. We've been experimenting with
>>>>>     VPCs on OpenStack (with some mods) at work for nearly a year,
>>>>>     and came across cases where hundreds or thousands of apps in an
>>>>>     equal number of projects needed to share resources and
>>>>>     policies, and the project-to-VPC mapping did not cut it.
>>>>>     >
>>>>>     > I was wondering if there was prior discussion around the
>>>>>     mapping of AWS VPC model to OpenStack concepts like projects
>>>>>     and domains. Thanks for any pointers.
>>>>>     >
>>>>>     > Subbu
>>>>>     >
>>>>>     >> On Feb 16, 2014, at 8:01 AM, Harshad Nakil
>>>>>     <hnakil at contrailsystems.com
>>>>>     <mailto:hnakil at contrailsystems.com>> wrote:
>>>>>     >>
>>>>>     >> Yes, [1] can be done without [2] and [3].
>>>>>     >> As you are well aware [2] is now merged with group policy
>>>>>     discussions.
>>>>>     >> IMHO an all-or-nothing approach will not get us anywhere.
>>>>>     >> By the time we line up all our ducks in a row, new
>>>>>     features/ideas/blueprints will keep emerging.
>>>>>     >>
>>>>>     >> Regards
>>>>>     >> -Harshad
>>>>>     >>
>>>>>     >>
>>>>>     >>> On Feb 16, 2014, at 2:30 AM, Salvatore Orlando
>>>>>     <sorlando at nicira.com <mailto:sorlando at nicira.com>> wrote:
>>>>>     >>>
>>>>>     >>> It seems this work item is made of several blueprints,
>>>>>     some of which are not yet approved. This is true at least for
>>>>>     the Neutron blueprint regarding policy extensions.
>>>>>     >>>
>>>>>     >>> Since I first looked at this spec I've been wondering why
>>>>>     nova has been selected as an endpoint for network operations
>>>>>     rather than Neutron, but this is probably a design/implementation
>>>>>     detail, whereas JC here is looking at the general approach.
>>>>>     >>>
>>>>>     >>> Nevertheless, my only point here is that it seems that
>>>>>     features like this need an "all-or-none" approval.
>>>>>     >>> For instance, could the VPC feature be considered
>>>>>     functional if blueprint [1] is implemented, but not [2] and [3]?
>>>>>     >>>
>>>>>     >>> Salvatore
>>>>>     >>>
>>>>>     >>> [1]
>>>>>     https://blueprints.launchpad.net/nova/+spec/aws-vpc-support
>>>>>     >>> [2]
>>>>>     https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
>>>>>     >>> [3]
>>>>>     https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy
>>>>>     >>>
>>>>>     >>>
>>>>>     >>> On 11 February 2014 21:45, Martin, JC
>>>>>     <jch.martin at gmail.com <mailto:jch.martin at gmail.com>> wrote:
>>>>>     >>> Ravi,
>>>>>     >>>
>>>>>     >>> It seems that the following Blueprint
>>>>>     >>> https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support
>>>>>     >>>
>>>>>     >>> has been approved.
>>>>>     >>>
>>>>>     >>> However, I cannot find a discussion with regard to the
>>>>>     merit of using project vs. domain, or other mechanism for the
>>>>>     implementation.
>>>>>     >>>
>>>>>     >>> I have an issue with this approach, as it prevents tenants
>>>>>     within the same domain who share the same VPC from having
>>>>>     projects.
>>>>>     >>>
>>>>>     >>> As an example, if you are a large organization on AWS, it
>>>>>     is likely that you have a large VPC that will be shared by
>>>>>     multiple projects. With this proposal, we lose that capability,
>>>>>     unless I missed something.
>>>>>     >>>
>>>>>     >>> JC
>>>>>     >>>
>>>>>     >>>> On Dec 19, 2013, at 6:10 PM, Ravi Chunduru
>>>>>     <ravivsn at gmail.com <mailto:ravivsn at gmail.com>> wrote:
>>>>>     >>>>
>>>>>     >>>> Hi,
>>>>>     >>>>  We had some internal discussions on the role of Domains
>>>>>     and VPCs. I would like to expand on that and understand the
>>>>>     community's thinking on Keystone domains and VPCs.
>>>>>     >>>>
>>>>>     >>>> Is VPC equivalent to Keystone Domain?
>>>>>     >>>>
>>>>>     >>>> If so, as a public cloud provider - I create a Keystone
>>>>>     domain and give it to an organization which wants a virtual
>>>>>     private cloud.
>>>>>     >>>>
>>>>>     >>>> Now the question is: if that organization wants
>>>>>     department-wise allocation of resources, it is becoming
>>>>>     difficult to visualize with existing v3 Keystone constructs.
>>>>>     >>>>
>>>>>     >>>> Currently, it looks like each department of an
>>>>>     organization cannot have its own resource management within
>>>>>     the organization's VPC (LDAP-based user management, network
>>>>>     management, dedicating computes, etc.). For us, an OpenStack
>>>>>     Project does not match the requirements of a department of an
>>>>>     organization.
>>>>>     >>>>
>>>>>     >>>> I hope you guessed what we wanted: a Domain must have
>>>>>     VPCs, and a VPC must have projects.
>>>>>     >>>>
>>>>>     >>>> I would like to know how the community sees the VPC model
>>>>>     in OpenStack.
>>>>>     >>>>
>>>>>     >>>> Thanks,
>>>>>     >>>> -Ravi.
>>>>>     >>>>
>>>>>     >>>>
>>>>>
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 13
>>>>>     Date: Sun, 16 Feb 2014 10:31:42 -0800
>>>>>     From: "Allamaraju, Subbu" <subbu at subbu.org
>>>>>     <mailto:subbu at subbu.org>>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] [keystone] role of Domain in VPC
>>>>>             definition
>>>>>     Message-ID: <7CD9E46E-FC0A-431B-836F-9BD02B0E417A at subbu.org
>>>>>     <mailto:7CD9E46E-FC0A-431B-836F-9BD02B0E417A at subbu.org>>
>>>>>     Content-Type: text/plain; charset=us-ascii
>>>>>
>>>>>     Harshad,
>>>>>
>>>>>     This is great. At least there is consensus on what it is and
>>>>>     what it is not. I would leave it to others to discuss the
>>>>>     merits of an AWS-compatible VPC API for Icehouse.
>>>>>
>>>>>     Perhaps this is a good topic to discuss at the Juno design summit.
>>>>>
>>>>>     Subbu
>>>>>
>>>>>     On Feb 16, 2014, at 10:15 AM, Harshad Nakil
>>>>>     <hnakil at contrailsystems.com
>>>>>     <mailto:hnakil at contrailsystems.com>> wrote:
>>>>>
>>>>>     > As I said, I am not disagreeing with you or Ravi or JC. I
>>>>>     also agree that
>>>>>     > the OpenStack VPC implementation will benefit from these
>>>>>     proposals.
>>>>>     > What I am saying is that AWS VPC API compatibility is not
>>>>>     required at
>>>>>     > this point, which is what our blueprint is all about. We
>>>>>     are not
>>>>>     > defining THE "VPC".
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 14
>>>>>     Date: Mon, 17 Feb 2014 08:20:09 +1300
>>>>>     From: Robert Collins <robertc at robertcollins.net
>>>>>     <mailto:robertc at robertcollins.net>>
>>>>>     To: Sean Dague <sean at dague.net <mailto:sean at dague.net>>
>>>>>     Cc: "OpenStack Development Mailing List \(not for usage
>>>>>     questions\)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>,
>>>>>             "<openstack-infra at lists.openstack.org
>>>>>     <mailto:openstack-infra at lists.openstack.org>>"
>>>>>             <openstack-infra at lists.openstack.org
>>>>>     <mailto:openstack-infra at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] [OpenStack-Infra] [TripleO] promoting
>>>>>             devtest_seed and devtest_undercloud to voting, +
>>>>>     experimental queue
>>>>>             for nova/neutron etc.
>>>>>     Message-ID:
>>>>>     <CAJ3HoZ1LC1WqayW3o3RaPxfLC0G-Lb9zxHKftPDW=t8wnubCtQ at mail.gmail.com
>>>>>     <mailto:t8wnubCtQ at mail.gmail.com>>
>>>>>     Content-Type: text/plain; charset=ISO-8859-1
>>>>>
>>>>>     On 15 February 2014 09:58, Sean Dague <sean at dague.net
>>>>>     <mailto:sean at dague.net>> wrote:
>>>>>
>>>>>     >> Lastly, I'm going to propose a merge to infra/config to put our
>>>>>     >> undercloud story (which exercises the seed's ability to
>>>>>     deploy via
>>>>>     >> heat with bare metal) as a check experimental job on our
>>>>>     dependencies
>>>>>     >> (keystone, glance, nova, neutron) - if that's ok with those
>>>>>     projects?
>>>>>     >>
>>>>>     >> -Rob
>>>>>     >>
>>>>>     >
>>>>>     > My biggest concern with adding this to check experimental,
>>>>>     is the
>>>>>     > experimental results aren't published back until all the
>>>>>     experimental
>>>>>     > jobs are done.
>>>>>
>>>>>     If we add a new pipeline -
>>>>>     https://review.openstack.org/#/c/73863/ -
>>>>>     then we can avoid that.
>>>>>
>>>>>     > We've seen really substantial delays, plus a 5 day complete
>>>>>     outage a
>>>>>     > week ago, on the tripleo cloud. I'd like to see that much
>>>>>     more proven
>>>>>     > before it starts to impact core projects, even in experimental.
>>>>>
>>>>>     I believe that with a new pipeline it won't impact core
>>>>>     projects at all.
>>>>>
>>>>>     The outage, FWIW, was because I deleted the entire cloud, at
>>>>>     the same
>>>>>     time that we had a firedrill with some other upstream-of-us
>>>>>     issue (I
>>>>>     forget the exact one). The multi-region setup we're aiming for
>>>>>     should
>>>>>     mitigate that substantially :)
>>>>>
>>>>>
>>>>>     -Rob
>>>>>
>>>>>
>>>>>     --
>>>>>     Robert Collins <rbtcollins at hp.com <mailto:rbtcollins at hp.com>>
>>>>>     Distinguished Technologist
>>>>>     HP Converged Cloud
>>>>>
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 15
>>>>>     Date: Mon, 17 Feb 2014 08:25:04 +1300
>>>>>     From: Robert Collins <robertc at robertcollins.net
>>>>>     <mailto:robertc at robertcollins.net>>
>>>>>     To: "James E. Blair" <jeblair at openstack.org
>>>>>     <mailto:jeblair at openstack.org>>
>>>>>     Cc: "OpenStack Development Mailing List \(not for usage
>>>>>     questions\)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>,
>>>>>             "<openstack-infra at lists.openstack.org
>>>>>     <mailto:openstack-infra at lists.openstack.org>>"
>>>>>             <openstack-infra at lists.openstack.org
>>>>>     <mailto:openstack-infra at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] [OpenStack-Infra] [TripleO] promoting
>>>>>             devtest_seed and devtest_undercloud to voting, +
>>>>>     experimental queue
>>>>>             for nova/neutron etc.
>>>>>     Message-ID:
>>>>>            
>>>>>     <CAJ3HoZ0me0xfeGArVSqLkC0SPpJwaTeK+hNYoePDdh_2FR_K9w at mail.gmail.com
>>>>>     <mailto:CAJ3HoZ0me0xfeGArVSqLkC0SPpJwaTeK%2BhNYoePDdh_2FR_K9w at mail.gmail.com>>
>>>>>     Content-Type: text/plain; charset=ISO-8859-1
>>>>>
>>>>>     On 15 February 2014 12:21, James E. Blair
>>>>>     <jeblair at openstack.org <mailto:jeblair at openstack.org>> wrote:
>>>>>
>>>>>     > You won't end up with -1's everywhere, you'll end up with
>>>>>     jobs stuck in
>>>>>     > the queue indefinitely, as we saw when the tripleo cloud failed
>>>>>     > recently.  What's worse is that now that positive check
>>>>>     results are
>>>>>     > required for enqueuing into the gate, you will also not be
>>>>>     able to merge
>>>>>     > anything.
>>>>>
>>>>>     Ok. So the cost of voting [just in tripleo] would be that a)
>>>>>     [tripleo] infrastructure failures and b) breakage from other
>>>>>     projects, both things that can cause checks to fail, would
>>>>>     stall all tripleo landings until rectified, or until voting is
>>>>>     turned off via a change to config, which makes this infra's
>>>>>     problem.
>>>>>
>>>>>     Hmm, so from a tripleo perspective, I think we're ok with this:
>>>>>     having a clear indication that 'this is ok' is probably more
>>>>>     important to us right now than the more opaque thing we have
>>>>>     now, where we have to expand every jenkins comment to be sure.
>>>>>
>>>>>     But will infra be ok if we end up having a firedrill 'please
>>>>>     make this nonvoting' change to propose?
>>>>>
>>>>>     -Rob
>>>>>
>>>>>     --
>>>>>     Robert Collins <rbtcollins at hp.com <mailto:rbtcollins at hp.com>>
>>>>>     Distinguished Technologist
>>>>>     HP Converged Cloud
>>>>>
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 16
>>>>>     Date: Sun, 16 Feb 2014 11:38:57 -0800
>>>>>     From: Ravi Chunduru <ravivsn at gmail.com <mailto:ravivsn at gmail.com>>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] [keystone] role of Domain in VPC
>>>>>             definition
>>>>>     Message-ID:
>>>>>     <CAEgw6yuopjDfeF2vmAXtjiiA+Fz14=tbZcKV+m3eviLb=Xf5tQ at mail.gmail.com
>>>>>     <mailto:Xf5tQ at mail.gmail.com>>
>>>>>     Content-Type: text/plain; charset="utf-8"
>>>>>
>>>>>     I agree with JC that we need to pause and discuss the VPC
>>>>>     model within OpenStack before considering AWS compatibility.
>>>>>     As Subbu said, we need this discussion at the Juno summit to
>>>>>     get consensus.
>>>>>
>>>>>     Thanks,
>>>>>     -Ravi.
>>>>>
>>>>>
>>>>>     On Sun, Feb 16, 2014 at 10:31 AM, Allamaraju, Subbu
>>>>>     <subbu at subbu.org <mailto:subbu at subbu.org>> wrote:
>>>>>
>>>>>     > Harshad,
>>>>>     >
>>>>>     > This is great. At least there is consensus on what it is and
>>>>>     what it is
>>>>>     > not. I would leave it to others to discuss the merits of an
>>>>>     AWS-compatible VPC
>>>>>     > API for Icehouse.
>>>>>     >
>>>>>     > Perhaps this is a good topic to discuss at the Juno design
>>>>>     summit.
>>>>>     >
>>>>>     > Subbu
>>>>>     >
>>>>>     > On Feb 16, 2014, at 10:15 AM, Harshad Nakil
>>>>>     <hnakil at contrailsystems.com <mailto:hnakil at contrailsystems.com>>
>>>>>     > wrote:
>>>>>     >
>>>>>     > > As I said, I am not disagreeing with you or Ravi or JC. I
>>>>>     also agree that
>>>>>     > > the OpenStack VPC implementation will benefit from these
>>>>>     proposals.
>>>>>     > > What I am saying is that AWS VPC API compatibility is not
>>>>>     required at
>>>>>     > > this point, which is what our blueprint is all about. We
>>>>>     are not
>>>>>     > > defining THE "VPC".
>>>>>     >
>>>>>     >
>>>>>     >
>>>>>
>>>>>
>>>>>
>>>>>     --
>>>>>     Ravi
>>>>>     -------------- next part --------------
>>>>>     An HTML attachment was scrubbed...
>>>>>     URL:
>>>>>     <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2ef6cc51/attachment-0001.html>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 17
>>>>>     Date: Sun, 16 Feb 2014 11:54:54 -0800
>>>>>     From: Ravi Chunduru <ravivsn at gmail.com <mailto:ravivsn at gmail.com>>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] VPC Proposal
>>>>>     Message-ID:
>>>>>            
>>>>>     <CAEgw6ysbaY6-8w_VOme5mU1k29v0dy42mvuRkTsTR7XXKw6CMg at mail.gmail.com
>>>>>     <mailto:CAEgw6ysbaY6-8w_VOme5mU1k29v0dy42mvuRkTsTR7XXKw6CMg at mail.gmail.com>>
>>>>>     Content-Type: text/plain; charset="utf-8"
>>>>>
>>>>>     IMO, a VPC means having a managed set of resources, not just
>>>>>     limited to networks but also projects.
>>>>>     I feel it's not about incrementally starting with AWS
>>>>>     compatibility, but about doing it right with AWS compatibility
>>>>>     taken into consideration.
>>>>>
>>>>>     Thanks,
>>>>>     -Ravi.
>>>>>
>>>>>
>>>>>     On Sun, Feb 16, 2014 at 8:47 AM, Harshad Nakil
>>>>>     <hnakil at contrailsystems.com
>>>>>     <mailto:hnakil at contrailsystems.com>>wrote:
>>>>>
>>>>>     > Comments Inline
>>>>>     >
>>>>>     > Regards
>>>>>     > -Harshad
>>>>>     >
>>>>>     >
>>>>>     > On Sat, Feb 15, 2014 at 11:39 PM, Allamaraju, Subbu
>>>>>     <subbu at subbu.org <mailto:subbu at subbu.org>>wrote:
>>>>>     >
>>>>>     >> Harshad,
>>>>>     >>
>>>>>     >> Curious to know if there is a broad interest in an AWS
>>>>>     compatible API in
>>>>>     >> the community?
>>>>>     >
>>>>>     >
>>>>>     > We started looking at this as some of our customers/partners
>>>>>     were interested
>>>>>     > in getting AWS API compatibility. We have had this blueprint
>>>>>     and code review
>>>>>     > pending for a long time now. We will know based on this
>>>>>     thread whether the
>>>>>     > community is interested. But I assumed that the community was
>>>>>     interested, as the
>>>>>     > blueprint was approved and the code review has had no -1(s)
>>>>>     for a long time now.
>>>>>     >
>>>>>     >
>>>>>     >> To clarify, an incremental path from an AWS-compatible
>>>>>     API to an
>>>>>     >> OpenStack model is not clear.
>>>>>     >>
>>>>>     >
>>>>>     > In my mind, an AWS-compatible API does not need a new
>>>>>     OpenStack model. As more
>>>>>     > discussion happens on JC's proposal and the implementation
>>>>>     becomes clear, we will
>>>>>     > know how incremental the path is. But at a high level there
>>>>>     are two major
>>>>>     > differences:
>>>>>     > 1. A new first-class object will be introduced, which
>>>>>     affects all components.
>>>>>     > 2. More than one project can be supported within a VPC.
>>>>>     > But it does not change the AWS API(s). So even in JC's model,
>>>>>     if you want the AWS
>>>>>     > API then we will have to keep the VPC-to-project mapping 1:1,
>>>>>     since the API
>>>>>     > will not take both a VPC ID and a project ID.
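Harshad's 1:1 constraint can be illustrated with a small sketch. The VPC IDs, project names, and lookup below are hypothetical, for illustration only, not OpenStack or AWS code:

```python
# Hypothetical illustration of the point above: AWS-style calls carry only a
# VPC ID, so resolving which OpenStack project a call targets is only
# unambiguous if each VPC maps to exactly one project.
VPC_TO_PROJECTS = {
    "vpc-12345678": ["projA"],            # 1:1 mapping, resolvable
    "vpc-87654321": ["projB", "projC"],   # 1:N mapping, ambiguous
}

def project_for_vpc(vpc_id):
    projects = VPC_TO_PROJECTS[vpc_id]
    if len(projects) != 1:
        # The AWS API has no project-ID parameter to disambiguate with.
        raise ValueError("ambiguous VPC-to-project mapping: %s" % vpc_id)
    return projects[0]

print(project_for_vpc("vpc-12345678"))  # projA
```

Under a multi-project VPC model, the same lookup would need a second identifier that the AWS API does not carry, which is exactly the incompatibility being described.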
>>>>>     >
>>>>>     >
>>>>>
>>>>>
>>>>>
>>>>>     > Users who want to migrate from AWS, and IaaS providers who
>>>>>     want to compete
>>>>>     > with AWS, should be interested in this compatibility.
>>>>>     >
>>>>>     > There also seems to be a terminology issue here. What is the
>>>>>     definition of "VPC"?
>>>>>     > If we assume that what AWS implements is "VPC",
>>>>>     > then what JC is proposing is "VOS" or "VDC" (virtual
>>>>>     OpenStack or virtual DC),
>>>>>     > as all or most of the current OpenStack features are available
>>>>>     to the user in this
>>>>>     > new abstraction. I actually like this new abstraction.
>>>>>     >
>>>>>     >
>>>>>     >> Subbu
>>>>>     >>
>>>>>     >> On Feb 15, 2014, at 10:04 PM, Harshad Nakil
>>>>>     <hnakil at contrailsystems.com <mailto:hnakil at contrailsystems.com>>
>>>>>     >> wrote:
>>>>>     >>
>>>>>     >> >
>>>>>     >> > I agree with the problem as defined by you; it will
>>>>>     require more
>>>>>     >> fundamental changes.
>>>>>     >> > Meanwhile, many users will benefit from AWS VPC API
>>>>>     compatibility.
>>>>>     >>
>>>>>     >>
>>>>>     >
>>>>>     >
>>>>>
>>>>>
>>>>>     --
>>>>>     Ravi
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 18
>>>>>     Date: Sun, 16 Feb 2014 12:08:15 -0800
>>>>>     From: Vishvananda Ishaya <vishvananda at gmail.com
>>>>>     <mailto:vishvananda at gmail.com>>
>>>>>     To: "OpenStack Development Mailing List (not for usage questions)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] OpenStack-dev Digest, Vol 22,
>>>>>     Issue 39
>>>>>     Message-ID: <91C14EC4-02F8-4DFC-9145-08BE2DA249AD at gmail.com
>>>>>     <mailto:91C14EC4-02F8-4DFC-9145-08BE2DA249AD at gmail.com>>
>>>>>     Content-Type: text/plain; charset="windows-1252"
>>>>>
>>>>>
>>>>>     On Feb 15, 2014, at 4:36 AM, Vinod Kumar Boppanna
>>>>>     <vinod.kumar.boppanna at cern.ch
>>>>>     <mailto:vinod.kumar.boppanna at cern.ch>> wrote:
>>>>>
>>>>>     >
>>>>>     > Dear Vish,
>>>>>     >
>>>>>     > I completely agree with you. It's a trade-off between
>>>>>     getting re-authenticated (when, in a hierarchy, the user has
>>>>>     different roles at different levels) and parsing the entire
>>>>>     hierarchy down to the leaf and including all the roles the
>>>>>     user has at each level in the scope.
>>>>>     >
>>>>>     > I am ok with either one (both have some advantages and
>>>>>     disadvantages).
>>>>>     >
>>>>>     > But one point I didn't understand: why should we parse the
>>>>>     tree above the level where the user gets authenticated (as you
>>>>>     specified in the reply)? If a user is authenticated at
>>>>>     level 3, do we mean that the roles at level 2 and level 1
>>>>>     should also be passed?
>>>>>     > Why is this needed? I only see us either passing only the
>>>>>     roles at the level where the user is authenticated, or passing
>>>>>     the roles from that level down to the leaf.
>>>>>
>>>>>
>>>>>     This is needed because in my proposed model roles are
>>>>>     inherited down the hierarchy. That means if you authenticate
>>>>>     against ProjA.ProjA2 and you have a role like "netadmin" in
>>>>>     ProjA, you will also have it in ProjA2. So it is necessary to
>>>>>     walk up the tree to find the full list of roles.
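The walk up the tree that Vish describes can be sketched in a few lines of Python. The dotted project paths and role sets below are illustrative data, not Keystone's actual storage or API:

```python
# Roles granted on an ancestor project apply to every descendant, so the
# effective roles for a token scope are gathered by walking up the dotted
# hierarchy and taking the union of the role sets along the way.
def effective_roles(scope, assignments):
    roles = set()
    parts = scope.split(".")
    for depth in range(1, len(parts) + 1):
        roles |= assignments.get(".".join(parts[:depth]), set())
    return roles

assignments = {
    "ProjA": {"netadmin"},           # granted at the parent...
    "ProjA.ProjA2": {"member"},
}
# ...and inherited when authenticating against the child:
print(sorted(effective_roles("ProjA.ProjA2", assignments)))
# ['member', 'netadmin']
```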
>>>>>
>>>>>     Vish
>>>>>
>>>>>     >
>>>>>     > Regards,
>>>>>     > Vinod Kumar Boppanna
>>>>>     > ________________________________________
>>>>>     > Message: 21
>>>>>     > Date: Fri, 14 Feb 2014 10:13:59 -0800
>>>>>     > From: Vishvananda Ishaya <vishvananda at gmail.com
>>>>>     <mailto:vishvananda at gmail.com>>
>>>>>     > To: "OpenStack Development Mailing List (not for usage
>>>>>     questions)"
>>>>>     >        <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     > Subject: Re: [openstack-dev] Hierarchicical Multitenancy
>>>>>     Discussion
>>>>>     > Message-ID: <4508B18F-458B-4A3E-BA66-22F9FA47EAC0 at gmail.com
>>>>>     <mailto:4508B18F-458B-4A3E-BA66-22F9FA47EAC0 at gmail.com>>
>>>>>     > Content-Type: text/plain; charset="windows-1252"
>>>>>     >
>>>>>     > Hi Vinod!
>>>>>     >
>>>>>     > I think you can simplify the roles in the hierarchical model
>>>>>     by only passing the roles for the authenticated project and
>>>>>     above. All roles are then inherited down. This means it isn't
>>>>>     necessary to pass a scope along with each role. The scope is
>>>>>     just passed once with the token, and the project-admin check
>>>>>     (for example) would verify that the user has the
>>>>>     project-admin role and that the project_id prefix matches.
>>>>>     >
>>>>>     > There is only one case that this doesn't handle, and that is
>>>>>     when the user has one role (say member) in ProjA and
>>>>>     project-admin in ProjA2. If the user is authenticated to
>>>>>     ProjA, he can't do project-adminy stuff for ProjA2 without
>>>>>     reauthenticating. I think this is a reasonable sacrifice
>>>>>     considering how much easier it would be to just pass the
>>>>>     parent roles instead of going through all of the children.
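The prefix check described here might look like the following. The function name and token fields are assumptions for illustration, not a real Keystone policy API:

```python
# A policy check in the spirit of the paragraph above: the token carries one
# scope (the authenticated project) plus the roles inherited from above, and
# a service authorizes an action on a target project by checking the role
# and whether the token scope is a dotted-path prefix of the target.
def project_admin_allowed(token_roles, token_scope, target_project):
    if "project-admin" not in token_roles:
        return False
    return (target_project == token_scope
            or target_project.startswith(token_scope + "."))

# Authenticated to ProjA with project-admin: may act on ProjA and children.
print(project_admin_allowed({"project-admin"}, "ProjA", "ProjA.ProjA2"))  # True
# The sacrifice mentioned above: a member-only token for ProjA cannot use
# a project-admin role held in ProjA2 without re-authenticating there.
print(project_admin_allowed({"member"}, "ProjA", "ProjA.ProjA2"))         # False
```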
>>>>>     >
>>>>>     > Vish
>>>>>     >
>>>>>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>     Message: 19
>>>>>     Date: Sun, 16 Feb 2014 16:20:52 -0500
>>>>>     From: Mike Spreitzer <mspreitz at us.ibm.com
>>>>>     <mailto:mspreitz at us.ibm.com>>
>>>>>     To: "OpenStack Development Mailing List \(not for usage
>>>>>     questions\)"
>>>>>             <openstack-dev at lists.openstack.org
>>>>>     <mailto:openstack-dev at lists.openstack.org>>
>>>>>     Subject: Re: [openstack-dev] heat run_tests.sh fails with one
>>>>>             huge line of output
>>>>>     Message-ID:
>>>>>            
>>>>>     <OF81356D12.13A4D038-ON85257C81.0073FA5A-85257C81.00754456 at us.ibm.com
>>>>>     <mailto:OF81356D12.13A4D038-ON85257C81.0073FA5A-85257C81.00754456 at us.ibm.com>>
>>>>>     Content-Type: text/plain; charset="us-ascii"
>>>>>
>>>>>     Kevin, I changed no code; it was a fresh DevStack install.
>>>>>
>>>>>     Robert Collins <robertc at robertcollins.net
>>>>>     <mailto:robertc at robertcollins.net>> wrote on 02/16/2014 05:33:59
>>>>>     AM:
>>>>>     > A) [fixed in testrepository trunk] the output from subunit.run
>>>>>     > discover .... --list is being shown verbatim when an error
>>>>>     happens,
>>>>>     > rather than being machine processed and the test listings
>>>>>     elided.
>>>>>     >
>>>>>     > To use trunk - in your venv:
>>>>>     > bzr branch lp:testrepository
>>>>>     > pip install testrepository
>>>>>     >
>>>>>     > B) If you look at the end of that wall of text you'll see
>>>>>     'Failed
>>>>>     > imports' in there, and the names after that are modules that
>>>>>     failed
>>>>>     > to import - for each of those if you try to import it in python,
>>>>>     > you'll find the cause, and there's likely just one cause.
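Robert's step (B) can be scripted. A minimal sketch, assuming only the standard library; the module names you probe would come from the "Failed imports" list in your own run:

```python
# Try importing each module that testr reported under "Failed imports" and
# surface the real traceback, which the one-huge-line output hides.
import importlib
import traceback

def probe_imports(module_names):
    failures = {}
    for name in module_names:
        try:
            importlib.import_module(name)
        except Exception:
            failures[name] = traceback.format_exc()
    return failures

# Substitute the module names from your own failure list here.
for name, tb in probe_imports(["heat.tests.no_such_module"]).items():
    print(name)
    print(tb.splitlines()[-1])  # the root-cause exception line
```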
>>>>>
>>>>>     Thanks Robert, I tried following your leads but got nowhere,
>>>>>     perhaps I
>>>>>     need a few more clues.
>>>>>
>>>>>     I am not familiar with bzr (nor baz), and it wasn't obvious to
>>>>>     me how to
>>>>>     fit that into my workflow --- which was:
>>>>>     (1) install DevStack
>>>>>     (2) install libmysqlclient-dev
>>>>>     (3) install flake8
>>>>>     (4) cd /opt/stack/heat
>>>>>     (5) ./run_tests.sh
>>>>>
>>>>>     I guessed that your (A) would apply if I use a venv and go
>>>>>     between (1) the
>>>>>     `python tools/install_venv.py` inside run_tests.sh and (2) the
>>>>>     invocation
>>>>>     inside run_tests.sh of its run_tests function.  So I manually
>>>>>     invoked
>>>>>     `python tools/install_venv.py`, then entered that venv, then
>>>>>     issued your
>>>>>     commands of (A) (discovered I needed to install bzr and did
>>>>>     so), then
>>>>>     exited that venv, then invoked heat's `run_tests -V -u` to use
>>>>>     the venv
>>>>>     thus constructed.  It still produced one huge line of output.
>>>>>      Here I
>>>>>     attach a typescript of that:
>>>>>
>>>>>
>>>>>
>>>>>     You will see that the huge line still ends with something
>>>>>     about an import error, and now lists one additional package,
>>>>>     heat.tests.test_neutron_firewalld. I then tried your (B),
>>>>>     testing manual imports. All worked except for the last, which
>>>>>     failed because there is indeed no such thing (why is there a
>>>>>     spurious 'd' at the end of the package name?). Here is a
>>>>>     typescript of that:
>>>>>
>>>>>
>>>>>
>>>>>     Thanks,
>>>>>     Mike
>>>>>     -------------- next part --------------
>>>>>     An HTML attachment was scrubbed...
>>>>>     URL:
>>>>>     <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2f7188ae/attachment.html>
>>>>>     -------------- next part --------------
>>>>>     An embedded and charset-unspecified text was scrubbed...
>>>>>     Name: testlog.txt
>>>>>     URL:
>>>>>     <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2f7188ae/attachment.txt>
>>>>>     -------------- next part --------------
>>>>>     An embedded and charset-unspecified text was scrubbed...
>>>>>     Name: testlog2.txt
>>>>>     URL:
>>>>>     <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140216/2f7188ae/attachment-0001.txt>
>>>>>
>>>>>     ------------------------------
>>>>>
>>>>>
>>>>>
>>>>>     End of OpenStack-dev Digest, Vol 22, Issue 45
>>>>>     *********************************************
>>>>>
>>>>>
>>>>>
>>>>> -- 
>>>>> ------------------------------------------
>>>>> Telles Mota Vidal Nobrega
>>>>> Bsc in Computer Science at UFCG
>>>>> Software Engineer at PulsarOpenStack Project - HP/LSD-UFCG
>>>>>
>>>>>
>>>>
>>>
>>
>
