[openstack-dev] Hierarchical Multitenancy Discussion

Vinod Kumar Boppanna vinod.kumar.boppanna at cern.ch
Wed Feb 5 15:38:40 UTC 2014


Hi,

Florent: 

When you say to centralize the RBAC rules based on action and target: the actions differ from service to service.
For example, for Keystone, "identity:get_endpoint" is used to get the list of endpoints,

while for Nova, "compute:create" is used to create VMs.

So every service has its own set of operations. I fully support centralizing the quotas.
But for the policy RBAC rules, centralization may in fact slow down the response. For example, if I contact Nova to get the list of VMs/volumes created by a user,
it does not need any quota information to answer, but to check the RBAC rules it would have to go to a centralized service, which increases the latency.

With a local policy file instead of a centralized service, the check is fast and the service can respond immediately.
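A minimal sketch of the point about local policy checks: once the rules are loaded from a local file, each check is an in-memory lookup rather than a round trip to a central service. The rule syntax and all names below are simplified illustrations, not actual Nova or Keystone policy code.

```python
# Illustrative local RBAC check: rules are loaded once from a local policy
# file, so each API call costs a dictionary lookup, not a network call.
import json

# Simplified stand-in for a local policy.json file.
LOCAL_POLICY = json.loads('''{
    "identity:get_endpoint": "role:admin",
    "compute:create": "role:member or role:admin",
    "compute:get_all": "role:member or role:admin"
}''')

def is_allowed(action, roles):
    """Check an action against locally loaded rules; no remote call needed."""
    rule = LOCAL_POLICY.get(action, "")
    allowed_roles = [clause.strip() for clause in rule.split(" or ") if clause]
    return any("role:%s" % role in allowed_roles for role in roles)
```

A centralized policy service would replace the dictionary lookup with an HTTP call on every request, which is the latency cost described above.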

By the way, I have started implementing the quotas for domains using the Domain Quota Driver from Tiago (of course this is not centralized; every service keeps its own quota information).

The blueprint is available at 
https://blueprints.launchpad.net/nova/+spec/domain-quota-driver-api

Cheers,
Vinod Kumar Boppanna
________________________________________
From: openstack-dev-request at lists.openstack.org [openstack-dev-request at lists.openstack.org]
Sent: 05 February 2014 16:20
To: openstack-dev at lists.openstack.org
Subject: OpenStack-dev Digest, Vol 22, Issue 13

Send OpenStack-dev mailing list submissions to
        openstack-dev at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
or, via email, send a message with subject or body 'help' to
        openstack-dev-request at lists.openstack.org

You can reach the person managing the list at
        openstack-dev-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-dev digest..."


Today's Topics:

   1. Re: [rally] Proposing changes in Rally core team (Pierre Padrixe)
   2. Re: [neutron][ml2] Port binding information, transactions,
      and concurrency (Henry Gessau)
   3. Re: [savanna] Specific job type for streaming mapreduce? (and
      someday pipes) (Trevor McKay)
   4. Re: [Openstack-docs] Conventions on naming (Andreas Jaeger)
   5. Re: savann-ci, Re: [savanna] Alembic migrations and absence
      of DROP column in sqlite (Sergey Lukjanov)
   6. [TripleO][Tuskar] Icehouse Requirements (Tzu-Mainn Chen)
   7. Re: [savanna] Choosing provisioning engine during cluster
      launch (Sergey Lukjanov)
   8. Re: Hierarchical Multitenancy Discussion (Florent Flament)
   9. Re: [nova][ceilometer] ceilometer unit tests broke because of
      a nova patch (Dan Smith)
  10. Re: [keystone][nova] Re: Hierarchical Multitenancy
      Discussion (Andrew Laski)
  11. Re: The simplified blueprint for PCI extra attributes and
      SR-IOV NIC blueprint (Robert Li (baoli))
  12. Re: olso.config error on running Devstack (Doug Hellmann)
  13. Re: about the bp cpu-entitlement (Oshrit Feder)
  14. update an instance IP address in openstack (Abdul Hannan Kanji)
  15. Re: pep8 gating fails due to      tools/config/check_uptodate.sh
      (Doug Hellmann)


----------------------------------------------------------------------

Message: 1
Date: Wed, 5 Feb 2014 14:16:49 +0100
From: Pierre Padrixe <pierre.padrixe at gmail.com>
To: hugh at wherenow.org,  "OpenStack Development Mailing List (not for
        usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [rally] Proposing changes in Rally core
        team
Message-ID:
        <CAAzS+a5c2tP-J+U1T-f8oKQU6CK+3R=i__3MsRGzZ_Yy1y89Hg at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Thank you hugh and congratulations for your new assignment as core
reviewer, you're doing a great job!

Regards,
Pierre.

2014-02-05 Hugh Saunders <hugh at wherenow.org>:

> Thanks Boris, Sergey, Oleg & Ilya,
> Rally can be hard to keep up with (rebase, rebase, rebase, merge) but that
> development pace also makes it exciting, each time you run rally, something
> will have improved! This morning I was awed by Pierre's atomic actions
> patches - great!
>
> Thanks for appointing me as a core team member, I will keep an eye on
> reviews and trello, see you all in IRC.
>
> --
> Hugh Saunders
>
>
> On 5 February 2014 12:35, Boris Pavlovic <boris at pavlovic.me> wrote:
>
>> Hugh,
>>
>> welcome to Rally core team!
>>
>>
>> Best regards,
>> Boris Pavlovic
>>
>>
>>
>> On Wed, Feb 5, 2014 at 3:17 PM, Ilya Kharin <ikharin at mirantis.com> wrote:
>>
>>> +1 for Hugh
>>>
>>>
>>> On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick <
>>> sskripnick at mirantis.com> wrote:
>>>
>>>>
>>>> +1 for Hugh, but IMO no need to rush with Alexei's removal
>>>>
>>>> Hi stackers,
>>>>
>>>> I would like to:
>>>>
>>>> 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good
>>>> reviews (and always testing patches=) ):
>>>> http://stackalytics.com/report/reviews/rally/30
>>>>
>>>> 2) Remove Alexei from core team, because unfortunately he is not able
>>>> to work on Rally at this moment. Thank you Alexei for all work that you
>>>> have done.
>>>>
>>>>
>>>> Thoughts?
>>>>
>>>>
>>>> Best regards,
>>>> Boris Pavlovic
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Sergey Skripnick
>>>>
>>>> _______________________________________________
>>>> OpenStack-dev mailing list
>>>> OpenStack-dev at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/8a49a259/attachment-0001.html>

------------------------------

Message: 2
Date: Wed, 05 Feb 2014 09:10:16 -0500
From: Henry Gessau <gessau at cisco.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][ml2] Port binding information,
        transactions, and concurrency
Message-ID: <52F24648.3000005 at cisco.com>
Content-Type: text/plain; charset=ISO-8859-1

Bob, this is fantastic, I really appreciate all the detail. A couple of
questions ...

On Wed, Feb 05, at 2:16 am, Robert Kukura <rkukura at redhat.com> wrote:

> A couple of interrelated issues with the ML2 plugin's port binding have
> been discussed over the past several months in the weekly ML2 meetings.
> These affect drivers being implemented for icehouse, and therefore need
> to be addressed in icehouse:
>
> * MechanismDrivers need detailed information about all binding changes,
> including unbinding on port deletion
> (https://bugs.launchpad.net/neutron/+bug/1276395)
> * MechanismDrivers' bind_port() methods are currently called inside
> transactions, but in some cases need to make remote calls to controllers
> or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
> * Semantics of concurrent port binding need to be defined if binding is
> moved outside the triggering transaction.
>
> I've taken the action of writing up a unified proposal for resolving
> these issues, which follows...
>
> 1) An original_bound_segment property will be added to PortContext. When
> the MechanismDriver update_port_precommit() and update_port_postcommit()
> methods are called and a binding previously existed (whether it's being
> torn down or not), this property will provide access to the network
> segment used by the old binding. In these same cases, the portbinding
> extension attributes (such as binding:vif_type) for the old binding will
> be available via the PortContext.original property. It may be helpful to
> also add bound_driver and original_bound_driver properties to
> PortContext that behave similarly to bound_segment and
> original_bound_segment.
>
> 2) The MechanismDriver.bind_port() method will no longer be called from
> within a transaction. This will allow drivers to make remote calls on
> controllers or devices from within this method without holding a DB
> transaction open during those calls. Drivers can manage their own
> transactions within bind_port() if needed, but need to be aware that
> these are independent from the transaction that triggered binding, and
> concurrent changes to the port could be occurring.
>
> 3) Binding will only occur after the transaction that triggers it has
> been completely processed and committed. That initial transaction will
> unbind the port if necessary. Four cases for the initial transaction are
> possible:
>
> 3a) In a port create operation, whether the binding:host_id is supplied
> or not, all drivers' port_create_precommit() methods will be called, the
> initial transaction will be committed, and all drivers'
> port_create_postcommit() methods will be called. The drivers will see
> this as creation of a new unbound port, with PortContext properties as
> shown. If a value for binding:host_id was supplied, binding will occur
> afterwards as described in 4 below.
>
> PortContext.original: None
> PortContext.original_bound_segment: None
> PortContext.original_bound_driver: None
> PortContext.current['binding:host_id']: supplied value or None
> PortContext.current['binding:vif_type']: 'unbound'
> PortContext.bound_segment: None
> PortContext.bound_driver: None
>
> 3b) Similarly, in a port update operation on a previously unbound port,
> all drivers' port_update_precommit() and port_update_postcommit()
> methods will be called, with PortContext properies as shown. If a value
> for binding:host_id was supplied, binding will occur afterwards as
> described in 4 below.
>
> PortContext.original['binding:host_id']: previous value or None
> PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
> PortContext.original_bound_segment: None
> PortContext.original_bound_driver: None
> PortContext.current['binding:host_id']: current value or None
> PortContext.current['binding:vif_type']: 'unbound'
> PortContext.bound_segment: None
> PortContext.bound_driver: None
>
> 3c) In a port update operation on a previously bound port that does not
> trigger unbinding or rebinding, all drivers' update_port_precommit() and
> update_port_postcommit() methods will be called with PortContext
> properties reflecting unchanged binding states as shown.
>
> PortContext.original['binding:host_id']: previous value
> PortContext.original['binding:vif_type']: previous value
> PortContext.original_bound_segment: previous value
> PortContext.original_bound_driver: previous value
> PortContext.current['binding:host_id']: previous value
> PortContext.current['binding:vif_type']: previous value
> PortContext.bound_segment: previous value
> PortContext.bound_driver: previous value
>
> 3d) In a port update operation on a previously bound port that does
> trigger unbinding or rebinding, all drivers' update_port_precommit() and
> update_port_postcommit() methods will be called with PortContext
> properties reflecting the previously bound and currently unbound binding
> states as shown. If a value for binding:host_id was supplied, binding
> will occur afterwards as described in 4 below.
>
> PortContext.original['binding:host_id']: previous value
> PortContext.original['binding:vif_type']: previous value
> PortContext.original_bound_segment: previous value
> PortContext.original_bound_driver: previous value
> PortContext.current['binding:host_id']: new or current value
> PortContext.current['binding:vif_type']: 'unbound'
> PortContext.bound_segment: None
> PortContext.bound_driver: None
>
> 4) If a port create or update operation triggers binding or rebinding,
> it is attempted after the initial transaction is processed and committed
> as described in 3 above. The binding process itself is just as before,
> except it happens after and outside the transaction. Since binding now
> occurs outside the transaction, it's possible that multiple threads or
> processes could concurrently attempt to bind the same port, although
> this should be a rare occurrence. Rather than trying to prevent this
> with some sort of distributed lock or complicated state machine,
> concurrent attempts to bind are allowed to proceed in parallel. When a
> thread completes its attempt to bind (either successfully or
> unsuccessfully) it then performs a second transaction to update the DB
> with the result of its binding attempt. When doing so, it checks to see
> if some other thread has already committed relevant changes to the port
> between the two transactions. There are three possible cases:
>
> 4a) If the thread's binding attempt succeeded, and no other thread has
> committed either a new binding or changes that invalidate this thread's
> new binding between the two transactions, the thread commits its own
> binding results, calling all drivers' update_port_precommit() and
> update_port_postcommit() methods with PortContext properties reflecting
> the new binding as shown. It then returns the updated port dictionary to
> the caller.
>
> PortContext.original['binding:host_id']: previous value
> PortContext.original['binding:vif_type']: 'unbound'
> PortContext.original_bound_segment: None
> PortContext.original_bound_driver: None
> PortContext.current['binding:host_id']: previous value

Are you not expecting/allowing the host_id to change in this scenario? Why?

> PortContext.current['binding:vif_type']: new value
> PortContext.bound_segment: new value
> PortContext.bound_driver: new value
>
> 4b) If the thread's binding attempt either succeeded or failed, but some
> other thread has committed a new successful binding between the two
> transactions, the thread returns a port dictionary with attributes based
> on the DB state from the new transaction, including the other thread's
> binding and any other port state changes. No further calls to mechanism
> drivers are needed here since they are the responsibility of the other
> thread that bound the port.
>
> 4c) If some other thread committed changes to the port's
> binding-relevant state but has not committed a successful binding, then
> this thread attempts to bind again using that updated state, repeating 4.
>
> 5) Port deletion no longer does anything special to unbind the port. All
> drivers' delete_port_precommit() and delete_port_postcommit() methods
> are called with PortContext properties reflecting the binding state
> before deletion as shown.
>
> PortContext.original: None
> PortContext.original_bound_segment: None
> PortContext.original_bound_driver: None
> PortContext.current['binding:host_id']: previous value or None
> PortContext.current['binding:vif_type']: previous value
> PortContext.bound_segment: previous value
> PortContext.bound_driver: previous value

Could this part of the port deletion also be done by port update?

>
> 6) In order to ensure successful bindings are created and returned
> whenever possible, the get port and get ports operations also attempt to
> bind the port as in 4 above when binding:host_id is available but there
> is no existing successful binding in the DB.
>
> 7) We can either eliminate MechanismDriver.unbind_port(), or call it on
> the previously bound driver within the transaction in 3d and 5 above. If
> we do keep it, the old binding state must be consistently reflected in
> the PortContext as either current or original state, TBD. Since all
> drivers see unbinding as a port update where current_bound_segment is
> None and original_bound_segment is not None, calling unbind_port() seems
> redundant.
>
> 8) If bindings shouldn't spontaneously become invalid, maybe we can
> eliminate MechanismDriver.validate_bound_port().
>
>
> I've provided a lot of details, and the above may seem complicated. But
> I think it's actually much more consistent and predictable than the
> current port binding code, and implementation should be straightforward.
>
> -Bob
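A toy model of the optimistic scheme in 4a-4c (attempt the binding with no lock held, then commit in a second transaction only if nothing relevant changed) might look like the following. The dict-backed store and every name here are illustrative, not actual ML2 code.

```python
# Toy model of steps 4a-4c: bind outside any transaction, then re-check
# shared state before committing; retry on concurrent relevant changes.

class PortStore:
    """Stand-in for the DB rows holding a port's binding state."""

    def __init__(self, host_id):
        self.state = {"host_id": host_id, "binding": None, "version": 0}

    def snapshot(self):
        # State as committed by the initial (triggering) transaction.
        return dict(self.state)

    def commit_if_unchanged(self, seen_version, binding):
        # Second "transaction": check what other threads did meanwhile.
        if self.state["binding"] is not None:
            return "other_thread_bound"          # case 4b: another thread won
        if self.state["version"] != seen_version:
            return "changed_retry"               # case 4c: state moved on
        self.state["binding"] = binding          # case 4a: commit our result
        self.state["version"] += 1
        return "committed"

def bind_port(store, attempt_binding):
    while True:
        snap = store.snapshot()
        binding = attempt_binding(snap)          # remote calls happen here,
                                                 # with no DB lock held
        outcome = store.commit_if_unchanged(snap["version"], binding)
        if outcome != "changed_retry":           # 4c loops with updated state
            return outcome
```

The key property, matching the proposal, is that parallel binders are allowed to race; the second transaction's compare-and-commit decides which result wins.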



------------------------------

Message: 3
Date: Wed, 05 Feb 2014 09:11:14 -0500
From: Trevor McKay <tmckay at redhat.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [savanna] Specific job type for streaming
        mapreduce? (and someday pipes)
Message-ID: <1391609474.5141.8.camel at tmckaylt.rdu.redhat.com>
Content-Type: text/plain; charset="UTF-8"

Okay,

  Thanks. I'll make a draft CR that sets up Savanna for dotted names,
and one that uses dotted names with streaming.

Best,

Trevor
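The wrapper function for separating dotted types when comparing them could be as small as the sketch below; the helper name is illustrative, not actual Savanna code.

```python
# Hypothetical helper for dotted job types such as "MapReduce.streaming":
# split into (type, subtype) so existing APIs keep comparing the base type
# while the UI and validation can branch on the subtype.
def split_job_type(job_type):
    base, _, subtype = job_type.partition(".")
    return base, subtype or None
```

Plain types like "Pig" come back with a subtype of None, so code that ignores subtypes keeps working unchanged.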

On Wed, 2014-02-05 at 15:58 +0400, Sergey Lukjanov wrote:
> I like the dot-separated name. There are several reasons for it:
>
>
> * it'll not require changes in all Savanna subprojects;
> * eventually we'd like to use not only Oozie for EDP (for example, if
> we'll support Twitter Storm) and these new tools could require
> additional 'subtypes'.
>
>
> Thanks for catching this.
>
>
> On Tue, Feb 4, 2014 at 10:47 PM, Trevor McKay <tmckay at redhat.com>
> wrote:
>         Thanks Andrew.
>
>         My other thought, which is in between, is to allow dotted
>         types.
>         "MapReduce.streaming" for example.
>
>         This gives you the subtype flavor but keeps all the APIs the
>         same.
>         We just need a wrapper function to separate them when we
>         compare types.
>
>         Best,
>
>         Trevor
>
>         On Mon, 2014-02-03 at 14:57 -0800, Andrew Lazarev wrote:
>         > I see two points:
>         > * having Savanna types mapped to Oozie action types is
>         intuitive for
>         > hadoop users and this is something we would like to keep
>         > * it is hard to distinguish different kinds of one job type
>         >
>         >
>         > Adding 'subtype' field will solve both problems. Having it
>         optional
>         > will not break backward compatibility. Adding database
>         migration
>         > script is also pretty straightforward.
>         >
>         >
>         > Summarizing, my vote is on "subtype" field.
>         >
>         >
>         > Thanks,
>         > Andrew.
>         >
>         >
>         > On Mon, Feb 3, 2014 at 2:10 PM, Trevor McKay
>         <tmckay at redhat.com>
>         > wrote:
>         >
>         >         I was trying my best to avoid adding extra job types
>         to
>         >         support
>         >         mapreduce variants like streaming or mapreduce with
>         pipes, but
>         >         it seems
>         >         that adding the types is the simplest solution.
>         >
>         >         On the API side, Savanna can live without a specific
>         job type
>         >         by
>         >         examining the data in the job record.
>          Presence/absence of
>         >         certain
>         >         things, or null values, etc, can provide adequate
>         indicators
>         >         to what
>         >         kind of mapreduce it is.  Maybe a little bit subtle.
>         >
>         >         But for the UI, it seems that explicit knowledge of
>         what the
>         >         job is
>         >         makes things easier and better for the user.  When a
>         user
>         >         creates a
>         >         streaming mapreduce job and the UI is aware of the
>         type later
>         >         on at job
>         >         launch, the user can be prompted to provide the
>         right configs
>         >         (i.e., the
>         >         streaming mapper and reducer values).
>         >
>         >         The explicit job type also supports validation
>         without having
>         >         to add
>         >         extra flags (which impacts the savanna client, and
>         the JSON,
>         >         etc). For
>         >         example, a streaming mapreduce job does not require
>         any
>         >         specified
>         >         libraries so the fact that it is meant to be a
>         streaming job
>         >         needs to be
>         >         known at job creation time.
>         >
>         >         So, to that end, I propose that we add a
>         MapReduceStreaming
>         >         job type,
>         >         and probably at some point we will have
>         MapReducePiped too.
>         >         It's
>         >         possible that we might have other job types in the
>         future too
>         >         as the
>         >         feature set grows.
>         >
>         >         There was an effort to make Savanna job types
>         parallel Oozie
>         >         action
>         >         types, but in this case that's just not possible
>         without
>         >         introducing a
>         >         "subtype" field in the job record, which leads to a
>         database
>         >         migration
>         >         script and savanna client changes.
>         >
>         >         What do you think?
>         >
>         >         Best,
>         >
>         >         Trevor
>         >
>         >
>         >
>         >         _______________________________________________
>         >         OpenStack-dev mailing list
>         >         OpenStack-dev at lists.openstack.org
>         >
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>         >
>         >
>         > _______________________________________________
>         > OpenStack-dev mailing list
>         > OpenStack-dev at lists.openstack.org
>         >
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>         _______________________________________________
>         OpenStack-dev mailing list
>         OpenStack-dev at lists.openstack.org
>         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





------------------------------

Message: 4
Date: Wed, 05 Feb 2014 15:17:39 +0100
From: Andreas Jaeger <aj at suse.com>
To: Mark McLoughlin <markmc at redhat.com>,  "OpenStack Development
        Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Cc: Jonathan Bryce <jonathan at openstack.org>
Subject: Re: [openstack-dev] [Openstack-docs] Conventions on naming
Message-ID: <52F24803.5070403 at suse.com>
Content-Type: text/plain; charset=ISO-8859-1

On 02/05/2014 01:09 PM, Mark McLoughlin wrote:
> On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
>> Steve Gordon wrote:
>>>> From: "Anne Gentle" <anne.gentle at rackspace.com>
>>>> Based on today's Technical Committee meeting and conversations with the
>>>> OpenStack board members, I need to change our Conventions for service names
>>>> at
>>>> https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
>>>> .
>>>>
>>>> Previously we have indicated that Ceilometer could be named OpenStack
>>>> Telemetry and Heat could be named OpenStack Orchestration. That's not the
>>>> case, and we need to change those names.
>>>>
>>>> To quote the TC meeting, ceilometer and heat are "other modules" (second
>>>> sentence from 4.1 in
>>>> http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
>>>> distributed with the Core OpenStack Project.
>>>>
>>>> Here's what I intend to change the wiki page to:
>>>>  Here's the list of project and module names and their official names and
>>>> capitalization:
>>>>
>>>> Ceilometer module
>>>> Cinder: OpenStack Block Storage
>>>> Glance: OpenStack Image Service
>>>> Heat module
>>>> Horizon: OpenStack dashboard
>>>> Keystone: OpenStack Identity Service
>>>> Neutron: OpenStack Networking
>>>> Nova: OpenStack Compute
>>>> Swift: OpenStack Object Storage
>>
>> Small correction. The TC had not indicated that Ceilometer could be
>> named "OpenStack Telemetry" and Heat could be named "OpenStack
>> Orchestration". We formally asked[1] the board to allow (or disallow)
>> that naming (or more precisely, that use of the trademark).
>>
>> [1]
>> https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names
>>
>> We haven't got a formal and clear answer from the board on that request
>> yet. I suspect they are waiting for progress on DefCore before deciding.
>>
>> If you need an answer *now* (and I suspect you do), it might make sense
>> to ask foundation staff/lawyers about using those OpenStack names with
>> the current state of the bylaws and trademark usage rules, rather than
>> the hypothetical future state under discussion.
>
> Basically, yes - I think having the Foundation confirm that it's
> appropriate to use "OpenStack Telemetry" in the docs is the right thing.
>
> There's an awful lot of confusion about the subject and, ultimately,
> it's the Foundation staff who are responsible for enforcing (and giving
> advice to people on) the trademark usage rules. I've cc-ed Jonathan so
> he knows about this issue.
>
> But FWIW, the TC's request is asking for Ceilometer and Heat to be
> allowed to use their "Telemetry" and "Orchestration" names in *all* of the
> circumstances where e.g. Nova is allowed to use its "Compute" name.
>
> Reading again this clause in the bylaws:
>
>   "The other modules which are part of the OpenStack Project, but
>    not the Core OpenStack Project may not be identified using the
>    OpenStack trademark except when distributed with the Core OpenStack
>    Project."
>
> it could well be said that this case of naming conventions in the docs
> for the entire OpenStack Project falls under the "distributed with" case
> and it is perfectly fine to refer to "OpenStack Telemetry" in the docs.
> I'd really like to see the Foundation staff give their opinion on this,
> though.

What Steve is asking, IMO, is whether we have to change "OpenStack
Telemetry" to "Ceilometer module" or whether we can just say "Telemetry"
without the OpenStack in front of it.

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
    GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



------------------------------

Message: 5
Date: Wed, 5 Feb 2014 18:22:06 +0400
From: Sergey Lukjanov <slukjanov at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] savann-ci, Re: [savanna] Alembic
        migrations and absence of DROP column in sqlite
Message-ID:
        <CA+GZd7_30r_d0x=Vv3vfCtSzGoooZdtikP3xVioAFydQGBPCsg at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

It's about integration tests that aren't db-specific, so, just
DATABASE/connection should be fixed ;)


On Wed, Feb 5, 2014 at 4:33 PM, Alexei Kornienko <alexei.kornienko at gmail.com
> wrote:

>  Hi
>
>
> I'm currently working on moving on the MySQL for savanna-ci
>
> We are working on same task in ceilometer so maybe you could use some of
> our patches as reference:
>
> https://review.openstack.org/#/c/59489/
> https://review.openstack.org/#/c/63049/
>
> Regards,
> Alexei
>
>
> On 02/05/2014 02:06 PM, Sergey Kolekonov wrote:
>
> I'm currently working on moving on the MySQL for savanna-ci
>
>
> On Wed, Feb 5, 2014 at 3:53 PM, Sergey Lukjanov <slukjanov at mirantis.com>wrote:
>
>> Agreed, let's move on to the MySQL for savanna-ci to run integration
>> tests against production-like DB.
>>
>>
>> On Wed, Feb 5, 2014 at 1:54 AM, Andrew Lazarev <alazarev at mirantis.com>wrote:
>>
>>> Since sqlite is not in the list of "databases that would be used in
>>> production", CI should use other DB for testing.
>>>
>>>  Andrew.
>>>
>>>
>>> On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov <aignatov at mirantis.com
>>> > wrote:
>>>
>>>> Indeed. We should create a bug around that and move our savanna-ci to
>>>> mysql.
>>>>
>>>> Regards,
>>>> Alexander Ignatov
>>>>
>>>>
>>>>
>>>> On 05 Feb 2014, at 01:01, Trevor McKay <tmckay at redhat.com> wrote:
>>>>
>>>> > This brings up an interesting problem:
>>>> >
>>>> > In https://review.openstack.org/#/c/70420/ I've added a migration
>>>> that
>>>> > uses a drop column for an upgrade.
>>>> >
>>>> > But savann-ci is apparently using a sqlite database to run.  So it
>>>> can't
>>>> > possibly pass.
>>>> >
>>>> > What do we do here?  Shift savanna-ci tests to non sqlite?
>>>> >
>>>> > Trevor
>>>> >
>>>> > On Sat, 2014-02-01 at 18:17 +0200, Roman Podoliaka wrote:
>>>> >> Hi all,
>>>> >>
>>>> >> My two cents.
>>>> >>
>>>> >>> 2) Extend alembic so that op.drop_column() does the right thing
>>>> >> We could, but should we?
>>>> >>
>>>> >> The only reason alembic doesn't support these operations for SQLite
>>>> >> yet is that SQLite lacks proper support of ALTER statement. For
>>>> >> sqlalchemy-migrate we've been providing a work-around in the form of
>>>> >> recreating of a table and copying of all existing rows (which is a
>>>> >> hack, really).
>>>> >>
>>>> >> But to be able to recreate a table, we first must have its
>>>> definition.
>>>> >> And we've been relying on SQLAlchemy schema reflection facilities for
>>>> >> that. Unfortunately, this approach has a few drawbacks:
>>>> >>
>>>> >> 1) SQLAlchemy versions prior to 0.8.4 don't support reflection of
>>>> >> unique constraints, which means the recreated table won't have them;
>>>> >>
>>>> >> 2) special care must be taken in 'edge' cases (e.g. when you want to
>>>> >> drop a BOOLEAN column, you must also drop the corresponding CHECK
>>>> (col
>>>> >> in (0, 1)) constraint manually, or SQLite will raise an error when
>>>> the
>>>> >> table is recreated without the column being dropped)
>>>> >>
>>>> >> 3) special care must be taken for 'custom' type columns (it's got
>>>> >> better with SQLAlchemy 0.8.x, but e.g. in 0.7.x we had to override
>>>> >> definitions of reflected BIGINT columns manually for each
>>>> >> column.drop() call)
>>>> >>
>>>> >> 4) schema reflection can't be performed when alembic migrations are
>>>> >> run in 'offline' mode (without connecting to a DB)
>>>> >> ...
>>>> >> (probably something else I've forgotten)
>>>> >>
>>>> >> So it's totally doable, but, IMO, there is no real benefit in
>>>> >> supporting running of schema migrations for SQLite.
>>>> >>
>>>> >>> ...attempts to drop schema generation based on models in favor of
>>>> migrations
>>>> >>
>>>> >> As long as we have a test that checks that the DB schema obtained by
>>>> >> running of migration scripts is equal to the one obtained by calling
>>>> >> metadata.create_all(), it's perfectly OK to use model definitions to
>>>> >> generate the initial DB schema for running of unit-tests as well as
>>>> >> for new installations of OpenStack (and this is actually faster than
>>>> >> running of migration scripts). ... and if we have strong objections
>>>> >> against doing metadata.create_all(), we can always use migration
>>>> >> scripts for both new installations and upgrades for all DB backends,
>>>> >> except SQLite.
>>>> >>
>>>> >> Thanks,
>>>> >> Roman
>>>> >>
>>>> >> On Sat, Feb 1, 2014 at 12:09 PM, Eugene Nikanorov
>>>> >> <enikanorov at mirantis.com> wrote:
>>>> >>> Boris,
>>>> >>>
>>>> >>> Sorry for the offtopic.
>>>> >>> Is switching to model-based schema generation is something decided?
>>>> I see
>>>> >>> the opposite: attempts to drop schema generation based on models in
>>>> favor of
>>>> >>> migrations.
>>>> >>> Can you point to some discussion threads?
>>>> >>>
>>>> >>> Thanks,
>>>> >>> Eugene.
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>> On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic <
>>>> bpavlovic at mirantis.com>
>>>> >>> wrote:
>>>> >>>>
>>>> >>>> Jay,
>>>> >>>>
>>>> >>>> Yep we shouldn't use migrations for sqlite at all.
>>>> >>>>
>>>> >>>> The major issue that we have now is that we are not able to ensure
>>>> that DB
>>>> >>>> schema created by migration & models are same (actually they are
>>>> not same).
>>>> >>>>
>>>> >>>> So before dropping support of migrations for sqlite & switching to
>>>> model
>>>> >>>> based created schema we should add tests that will check that
>>>> model &
>>>> >>>> migrations are synced.
>>>> >>>> (we are working on this)
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Best regards,
>>>> >>>> Boris Pavlovic
>>>> >>>>
>>>> >>>>
>>>> >>>> On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev <
>>>> alazarev at mirantis.com>
>>>> >>>> wrote:
>>>> >>>>>
>>>> >>>>> Trevor,
>>>> >>>>>
>>>> >>>>> Such check could be useful on alembic side too. Good opportunity
>>>> for
>>>> >>>>> contribution.
>>>> >>>>>
>>>> >>>>> Andrew.
>>>> >>>>>
>>>> >>>>>
>>>> >>>>> On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay <tmckay at redhat.com>
>>>> wrote:
>>>> >>>>>>
>>>> >>>>>> Okay,  I can accept that migrations shouldn't be supported on
>>>> sqlite.
>>>> >>>>>>
>>>> >>>>>> However, if that's the case then we need to fix up
>>>> savanna-db-manage so
>>>> >>>>>> that it checks the db connection info and throws a polite error
>>>> to the
>>>> >>>>>> user for attempted migrations on unsupported platforms. For
>>>> example:
>>>> >>>>>>
>>>> >>>>>> "Database migrations are not supported for sqlite"
>>>> >>>>>>
>>>> >>>>>> Because, as a developer, when I see a sql error trace as the
>>>> result of
>>>> >>>>>> an operation I assume it's broken :)
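
The polite check Trevor asks for could be as small as inspecting the SQLAlchemy connection URL before invoking Alembic; a sketch (the function name and its wiring into savanna-db-manage are hypothetical):

```python
def check_migrations_supported(connection_url):
    """Refuse to run migrations against backends that don't support them."""
    # 'mysql+pymysql://...' -> dialect 'mysql'; 'sqlite:///...' -> 'sqlite'
    dialect = connection_url.split(':', 1)[0].split('+', 1)[0]
    if dialect == 'sqlite':
        raise SystemExit("Database migrations are not supported for sqlite")
    return dialect

print(check_migrations_supported("mysql+pymysql://savanna@db/savanna"))
# mysql
try:
    check_migrations_supported("sqlite:////tmp/savanna.db")
except SystemExit as exc:
    print(exc)
# Database migrations are not supported for sqlite
```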
>>>> >>>>>>
>>>> >>>>>> Best,
>>>> >>>>>>
>>>> >>>>>> Trevor
>>>> >>>>>>
>>>> >>>>>> On Thu, 2014-01-30 at 15:04 -0500, Jay Pipes wrote:
>>>> >>>>>>> On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:
>>>> >>>>>>>> I was playing with alembic migration and discovered that
>>>> >>>>>>>> op.drop_column() doesn't work with sqlite.  This is because
>>>> sqlite
>>>> >>>>>>>> doesn't support dropping a column (broken imho, but that's
>>>> another
>>>> >>>>>>>> discussion).  Sqlite throws a syntax error.
>>>> >>>>>>>>
>>>> >>>>>>>> To make this work with sqlite, you have to copy the table to a
>>>> >>>>>>>> temporary
>>>> >>>>>>>> excluding the column(s) you don't want and delete the old one,
>>>> >>>>>>>> followed
>>>> >>>>>>>> by a rename of the new table.
>>>> >>>>>>>>
>>>> >>>>>>>> The existing 002 migration uses op.drop_column(), so I'm
>>>> assuming
>>>> >>>>>>>> it's
>>>> >>>>>>>> broken, too (I need to check what the migration test is
>>>> doing).  I
>>>> >>>>>>>> was
>>>> >>>>>>>> working on an 003.
>>>> >>>>>>>>
>>>> >>>>>>>> How do we want to handle this?  Three good options I can think
>>>> of:
>>>> >>>>>>>>
>>>> >>>>>>>> 1) don't support migrations for sqlite (I think "no", but
>>>> maybe)
>>>> >>>>>>>>
>>>> >>>>>>>> 2) Extend alembic so that op.drop_column() does the right thing
>>>> >>>>>>>> (more
>>>> >>>>>>>> open-source contributions for us, yay :) )
>>>> >>>>>>>>
>>>> >>>>>>>> 3) Add our own wrapper in savanna so that we have a
>>>> drop_column()
>>>> >>>>>>>> method
>>>> >>>>>>>> that wraps copy/rename.
>>>> >>>>>>>>
>>>> >>>>>>>> Ideas, comments?
>>>> >>>>>>>
>>>> >>>>>>> Migrations should really not be run against SQLite at all --
>>>> only on
>>>> >>>>>>> the
>>>> >>>>>>> databases that would be used in production. I believe the
>>>> general
>>>> >>>>>>> direction of the contributor community is to be consistent
>>>> around
>>>> >>>>>>> testing of migrations and to not run migrations at all in unit
>>>> tests
>>>> >>>>>>> (which use SQLite).
>>>> >>>>>>>
>>>> >>>>>>> Boris (cc'd) may have some more to say on this topic.
>>>> >>>>>>>
>>>> >>>>>>> Best,
>>>> >>>>>>> -jay
>>>> >>>>>>>
>>>> >>>>>>>
>>>> >>>>>>> _______________________________________________
>>>> >>>>>>> OpenStack-dev mailing list
>>>> >>>>>>> OpenStack-dev at lists.openstack.org
>>>> >>>>>>>
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>> >>>>>>
>>>> >>>>>>
>>>> >>>>>>
>>>> >>>>>
>>>> >>>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>
>>>> >
>>>> >
>>>> >
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>   --
>>  Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>>
>>
>>
>
>


--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/76e3d2c2/attachment-0001.html>

------------------------------

Message: 6
Date: Wed, 5 Feb 2014 09:27:04 -0500 (EST)
From: Tzu-Mainn Chen <tzumainn at redhat.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements
Message-ID:
        <384958145.13887219.1391610424244.JavaMail.root at redhat.com>
Content-Type: text/plain; charset=utf-8

Hi,

In parallel to Jarda's updated wireframes, and based on various discussions over the past
weeks, here are the updated Tuskar requirements for Icehouse:

https://wiki.openstack.org/wiki/TripleO/TuskarIcehouseRequirements

Any feedback is appreciated.  Thanks!

Tzu-Mainn Chen



------------------------------

Message: 7
Date: Wed, 5 Feb 2014 18:49:53 +0400
From: Sergey Lukjanov <slukjanov at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [savanna] Choosing provisioning engine
        during cluster launch
Message-ID:
        <CA+GZd7-RGtTNZM9S-0n3faUUMdBNr329-BdjvQqfCbW-+Jkh=A at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

It sounds somewhat useful for dev/testing. I don't really think it's
needed, but I'm not -1 on such an addition to the REST API.


On Thu, Jan 30, 2014 at 7:52 PM, Trevor McKay <tmckay at redhat.com> wrote:

> My mistake, it's already there.  I missed the distinction between set on
> startup and set per cluster.
>
> Trev
>
> On Thu, 2014-01-30 at 10:50 -0500, Trevor McKay wrote:
> > +1
> >
> > How about an undocumented config?
> >
> > Trev
> >
> > On Thu, 2014-01-30 at 09:24 -0500, Matthew Farrellee wrote:
> > > i imagine this is something that can be useful in a development and
> > > testing environment, especially during the transition period from
> direct
> > > to heat. so having the ability is not unreasonable, but i wouldn't
> > > expose it to users via the dashboard (maybe not even directly in the
> cli)
> > >
> > > generally i want to reduce the number of parameters / questions the
> user
> > > is asked
> > >
> > > best,
> > >
> > >
> > > matt
> > >
> > > On 01/30/2014 04:42 AM, Dmitry Mescheryakov wrote:
> > > > I agree with Andrew. I see no value in letting users select how their
> > > > cluster is provisioned, it will only make interface a little bit more
> > > > complex.
> > > >
> > > > Dmitry
> > > >
> > > >
> > > > 2014/1/30 Andrew Lazarev <alazarev at mirantis.com
> > > > <mailto:alazarev at mirantis.com>>
> > > >
> > > >     Alexander,
> > > >
> > > >     What is the purpose of exposing this to user side? Both engines
> must
> > > >     do exactly the same thing and they exist in the same time only
> for
> > > >     transition period until heat engine is stabilized. I don't see
> any
> > > >     value in proposed option.
> > > >
> > > >     Andrew.
> > > >
> > > >
> > > >     On Wed, Jan 29, 2014 at 8:44 PM, Alexander Ignatov
> > > >     <aignatov at mirantis.com <mailto:aignatov at mirantis.com>> wrote:
> > > >
> > > >         Today Savanna has two provisioning engines, heat and old one
> > > >         known as 'direct'.
> > > >         Users can choose which engine will be used by setting special
> > > >         parameter in 'savanna.conf'.
> > > >
> > > >         I have an idea to give an ability for users to define
> > > >         provisioning engine
> > > >         not only when savanna is started but when new cluster is
> > > >         launched. The idea is simple.
> > > >         We will just add new field 'provisioning_engine' to 'cluster'
> > > >         and 'cluster_template'
> > > >         objects. And profit is obvious, users can easily switch from
> one
> > > >         engine to another without
> > > >         restarting savanna service. Of course, this parameter can be
> > > >         omitted and the default value
> > > >         from the 'savanna.conf' will be applied.
> > > >
> > > >         Is this viable? What do you think?
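
The fallback chain Alexander describes (cluster field, then cluster template, then the savanna.conf default) might look like this; the names follow the proposal but the code is purely illustrative:

```python
DEFAULT_ENGINE = 'direct'            # would come from savanna.conf
VALID_ENGINES = ('direct', 'heat')

def resolve_engine(cluster, cluster_template=None):
    """Pick the provisioning engine: cluster > template > config default."""
    for obj in (cluster, cluster_template):
        engine = (obj or {}).get('provisioning_engine')
        if engine is not None:
            if engine not in VALID_ENGINES:
                raise ValueError("unknown provisioning engine: %s" % engine)
            return engine
    return DEFAULT_ENGINE

print(resolve_engine({'provisioning_engine': 'heat'}))        # heat
print(resolve_engine({}, {'provisioning_engine': 'direct'}))  # direct
print(resolve_engine({}))                                     # direct
```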
> > > >
> > > >         Regards,
> > > >         Alexander Ignatov
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> >
>
>
>
>



--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/f4ec6b41/attachment-0001.html>

------------------------------

Message: 8
Date: Wed, 5 Feb 2014 14:54:56 +0000 (UTC)
From: Florent Flament <florent.flament-ext at cloudwatt.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] Hierarchicical Multitenancy Discussion
Message-ID:
        <403718698.12627364.1391612096083.JavaMail.root at cloudwatt.com>
Content-Type: text/plain; charset=utf-8

Vish:

I agree that having roles associated with projects may complicate
policy rules (although we may find ways to simplify the syntax?). It
may be a sound choice to stick to a single scope for a given token.

+1 for your quotas tree proposal. Maybe ensuring that the sum of
subproject quotas is lower than (or equal to) the parent quota will be
enough for most use cases.
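
That invariant (children of a project may not be promised more than the parent holds) is cheap to check on the quota tree; a minimal sketch with invented project names:

```python
def check_quota_tree(quotas, children):
    """Verify each parent's quota covers the sum of its children's quotas."""
    for parent, kids in children.items():
        allocated = sum(quotas[k] for k in kids)
        if allocated > quotas[parent]:
            raise ValueError("overcommit under %s: %d > %d"
                             % (parent, allocated, quotas[parent]))

quotas = {'orga': 100, 'orga.dev': 60, 'orga.prod': 40}
children = {'orga': ['orga.dev', 'orga.prod']}
check_quota_tree(quotas, children)   # 60 + 40 <= 100, so this passes
print("quota tree consistent")
```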

So far, I don't see any issue with your hierarchical projects
proposal. IMHO, domains would not be of much use anymore.


Vinod:

You raised the same issue that I did; I just needed some
clarification.

Regarding the names (or IDs) that Nova uses, they would have to be "full
project names" to avoid conflicts.


Tiago, Vinod, Vish:

I agree with Tiago that having policy files spread across every node
doesn't look easy to maintain. I don't think that the service
centralizing RBAC would have to know about the services' "sets of
operations". It could work by checking a tuple "(action, context)"
against a set of rules, and answering whether the action is authorized
or not.
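
In other words, the centralized engine only needs a generic match of (action, context) against rules, without understanding each service's operations; a deliberately naive sketch (the rule representation is invented for illustration):

```python
RULES = {
    'compute:create': lambda ctx: 'member' in ctx.get('roles', ()),
    'identity:get_endpoint': lambda ctx: 'admin' in ctx.get('roles', ()),
}

def is_authorized(action, context):
    """Return True if a rule exists for the action and accepts the context."""
    rule = RULES.get(action)
    return bool(rule and rule(context))

print(is_authorized('compute:create', {'roles': ['member']}))         # True
print(is_authorized('identity:get_endpoint', {'roles': ['member']}))  # False
```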

Moreover, if the same service were to centralize both RBAC and quotas,
then both could be checked in a row for the provided tuple. The thing
about quotas is that they require the service to track resource
usage, which can be done by the service providing RBAC, since each
action would have to be authorized (and possibly tracked) by the RBAC
engine.

This is why I would argue in favor of a unique service providing RBAC
and Quotas enforcement together.

I don't know much about Gantt, so I guess that potential candidates for
such a service would be Keystone, Gantt, Ceilometer (which already
aggregates information about resource usage), or a new service?

I have seen that some work was started to centralize quotas, but later
abandoned:
* https://review.openstack.org/#/c/44878/
* https://review.openstack.org/#/c/40568/

There's also the Identity API v3 providing (centralized?) policy
management:
* http://api.openstack.org/api-ref-identity.html#identity-v3

I think it would be worth trying to clarify/simplify/rationalize the
way RBAC/quotas work. Or am I missing something?

Although, I think this might be out of scope of the initial
"Hierarchical Multitenancy Discussion". Should it be moved to a new
thread?

Florent Flament


----- Original Message -----
From: "Vinod Kumar Boppanna" <vinod.kumar.boppanna at cern.ch>
To: openstack-dev at lists.openstack.org
Cc: "project-cloudman (Cloudman-high level cloud management tool project)" <project-cloudman at cern.ch>
Sent: Wednesday, February 5, 2014 1:59:48 PM
Subject: Re: [openstack-dev] Hierarchicical Multitenancy Discussion

Hi,

I am doing some development on quotas and have made a blueprint for it. I am not sure where to post the link; pardon me if this is the wrong place.

https://blueprints.launchpad.net/nova/+spec/domain-quota-driver-api

I welcome any comments!!!

Thanks,
Vinod Kumar Boppanna

________________________________________
From: openstack-dev-request at lists.openstack.org [openstack-dev-request at lists.openstack.org]
Sent: 05 February 2014 13:00
To: openstack-dev at lists.openstack.org
Subject: OpenStack-dev Digest, Vol 22, Issue 11



Today's Topics:

   1. Re: [QA][Neutron][3rd Party Testing] Methodology for 3rd
      party (Miguel Angel)
   2. Re: [QA][Neutron][3rd Party Testing] Methodology for 3rd
      party (trinath.somanchi at freescale.com)
   3. Re: [nova][neutron] PCI pass-through SRIOV binding of ports
      (Irena Berezovsky)
   4. [neutron][ml2] Port binding information,  transactions, and
      concurrency (Robert Kukura)
   5. Re: [nova][ceilometer] ceilometer unit tests broke because of
      a nova patch (Mehdi Abaakouk)
   6. Re: [keystone][nova] Re: Hierarchicical   Multitenancy
      Discussion (Chris Behrens)
   7. Re: [Ironic] January review redux (Lucas Alvares Gomes)
   8. Re: [nova][ceilometer] ceilometer unit tests broke        because of
      a nova patch (Julien Danjou)
   9. Re: [TripleO] [Ironic] mid-cycle meetup? (Robert Collins)
  10. [rally] Proposing changes in Rally core team (Boris Pavlovic)
  11. Re: [Ironic] January review redux (Yuriy Zveryanskyy)
  12. Re: [TripleO] [Tuskar] [UX] Infrastructure Management UI -
      Icehouse scoped wireframes (Tomas Sedovic)
  13. Re: [rally] Proposing changes in Rally core team
      (Sergey Skripnick)
  14. Re: [Neutron] backporting database migrations to
      stable/havana (Thierry Carrez)
  15. Re: [keystone][nova] Re: Hierarchicical Multitenancy
      Discussion (Florent Flament)
  16. Re: [Nova] os-migrateLive not working with neutron in Havana
      (or apparently Grizzly) (John Garbutt)
  17. Re: [Ironic] January review redux (Haomeng, Wang)
  18. Re: [Openstack-docs] Conventions on naming (Thierry Carrez)
  19. Re: [rally] Proposing changes in Rally core team (Oleg Gelbukh)
  20. Re: [TripleO] [Tuskar] [UX] Infrastructure Management UI -
      Icehouse scoped wireframes (Jaromir Coufal)
  21. Re: Asynchrounous programming: replace    eventlet        with asyncio
      (victor stinner)
  22. Re: [Neutron] backporting database migrations to
      stable/havana (Ralf Haferkamp)
  23. Agenda for todays ML2 Weekly meeting
      (trinath.somanchi at freescale.com)
  24. Re: [keystone][nova] Re: Hierarchicical Multitenancy
      Discussion (Martins, Tiago)
  25. Re: [TripleO] [Tuskar] [UX] Infrastructure Management UI -
      Icehouse scoped wireframes (Tomas Sedovic)
  26. Re: [rally] Proposing changes in Rally core team (Ilya Kharin)
  27. Re: [keystone][nova] Re: Hierarchicical   Multitenancy
      Discussion (Vishvananda Ishaya)
  28. [Climate] 0.1.0 release (Dina Belova)
  29. Re: [keystone][nova] Re: Hierarchicical   Multitenancy
      Discussion (Vishvananda Ishaya)
  30. Re: [Climate] 0.1.0 release (Sergey Lukjanov)
  31. Re: [Neutron] backporting database migrations     to
      stable/havana (Ralf Haferkamp)
  32. Re: Asynchrounous programming: replace eventlet with asyncio
      (Thierry Carrez)
  33. Re: [Climate] 0.1.0 release (Oleg Gelbukh)
  34. Re: [OpenStack-Infra] [cinder][neutron][nova][3rd party
      testing] Gerrit Jenkins plugin will not fulfill requirements of
      3rd party testing (Sergey Lukjanov)
  35. Re: savann-ci, Re: [savanna] Alembic migrations and absence
      of DROP column in sqlite (Sergey Lukjanov)
  36. Re: [savanna] Specific job type for streaming mapreduce? (and
      someday pipes) (Sergey Lukjanov)


----------------------------------------------------------------------

Message: 1
Date: Wed, 5 Feb 2014 07:28:35 +0100
From: Miguel Angel <miguelangel at ajo.es>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [QA][Neutron][3rd Party Testing]
        Methodology for 3rd party
Message-ID:
        <CADSDy2j2DX2LDivWDV+pW=jMJ9anLpe5Wy6LUkRhOrogsZESHQ at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Interesting points here. I agree with Akihiro: some components
leave services running, and leave settings behind on the system even when
shut down (I know of neutron net namespaces, etc.).

+1 to Akihiro's proposal for a fresh VM.


---
irc: ajo / mangelajo
Miguel Angel Ajo Pelayo
+34 636 52 25 69
skype: ajoajoajo


2014-02-05 Akihiro Motoki <motoki at da.jp.nec.com>:

> Hi,
>
> I think it is better to use a fresh VM to run tests.
> When running tempest scenario tests, there are cases
> where some resources cannot be cleaned up properly.
> It happens when some test fails, of course.
>
> I think 10 minutes is not too long.
> It requires more than 30 minutes until gate jobs
> on openstack-ci report test results.
> 10 minutes is fast enough compared to this time.
>
> Other ways to speed up the testing are:
> - to install dependency packages in advance
> - to create a PyPI mirror
> - to clone required git repos in advance and just sync when testing
>  From my experience the first one will contribute most to save time.
>
> Thanks,
> Akihiro
>
> (2014/02/05 10:24), Franck Yelles wrote:
> > Hello,
> >
> > I was wondering how everyone was doing 3rd party testing at the moment
> > when it comes to the process.
> > It takes me around 10 minutes for me to do a +1 or -1.
> >
> > my flow is the following:
> > (I only use Jenkins for listening to the "feed")
> > 1) a job is triggered from Jenkins.
> > 2) a VM is booted
> > 3) the devstack repo is cloned
> > 4) the patch is applied
> > 5) stack.sh is run (longest time is here)
> > 6) the tests are run
> > 7) the result is posted
> > 8) the VM is destroyed
> >
> > I am looking for ways to speed up the process.
> > I was thinking of keeping the stack.sh up;  and follow this
> >
> > 1) Shutdown the affected component  (neutron, etc..)
> > 2) apply the patch
> > 3) restart the component
> > 4) run the test
> > 5) post the result
> > 6) shutdown the affected component
> > 7) remove the patch
> > 8) restart the component
> >
> > What are your thoughts?
> > Ideally I would like to achieve a sub 3 minutes.
> >
> > Thanks,
> > Franck
> >
> >
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/106fc1f1/attachment-0001.html>

------------------------------

Message: 2
Date: Wed, 5 Feb 2014 06:36:29 +0000
From: "trinath.somanchi at freescale.com"
        <trinath.somanchi at freescale.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [QA][Neutron][3rd Party Testing]
        Methodology for 3rd party
Message-ID:
        <96ea29a1b49746aa97fee2f06babdc56 at BN1PR03MB153.namprd03.prod.outlook.com>

Content-Type: text/plain; charset="us-ascii"

Hi -

I'm a newbie here.

Can anyone guide me through setting up a new 3rd party testing account?

How is it useful?
What are the machine requirements?
How is testing automated?
How do I post a +/-1 back to Jenkins?
What packages need to be installed?

Kindly help me understand these points.

Thanks in advance
--
Trinath Somanchi - B39208
trinath.somanchi at freescale.com | extn: 4048


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/5ce24bc8/attachment-0001.html>

------------------------------

Message: 3
Date: Wed, 5 Feb 2014 06:58:29 +0000
From: Irena Berezovsky <irenab at mellanox.com>
To: Robert Kukura <rkukura at redhat.com>, "Sandhya Dasu (sadasu)"
        <sadasu at cisco.com>, "OpenStack Development Mailing List (not for usage
        questions)" <openstack-dev at lists.openstack.org>, "Robert Li (baoli)"
        <baoli at cisco.com>, "Brian Bowen (brbowen)" <brbowen at cisco.com>
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
        binding of ports
Message-ID:
        <9D25E123B44F4A4291F4B5C13DA94E7795EA6CB8 at MTLDAG01.mtl.com>
Content-Type: text/plain; charset="us-ascii"

Please see my understanding inline below.

-----Original Message-----
From: Robert Kukura [mailto:rkukura at redhat.com]
Sent: Tuesday, February 04, 2014 11:57 PM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage questions); Irena Berezovsky; Robert Li (baoli); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

On 02/04/2014 04:35 PM, Sandhya Dasu (sadasu) wrote:
> Hi,
>      I have a couple of questions for ML2 experts regarding support of
> SR-IOV ports.

I'll try, but I think these questions might be more about how the various SR-IOV implementations will work than about ML2 itself...

> 1. The SR-IOV ports would not be managed by ova or linuxbridge L2
> agents. So, how does a MD for SR-IOV ports bind/unbind its ports to
> the host? Will it just be a db update?

I think whether or not to use an L2 agent depends on the specific SR-IOV implementation. Some (Mellanox?) might use an L2 agent, while others
(Cisco?) might put information in binding:vif_details that lets the nova VIF driver take care of setting up the port without an L2 agent.
[IrenaB] Based on the VIF_Type that the MD defines, and going forward with other binding:vif_details attributes, the VIFDriver should do the VIF plugging part. As for the required networking configuration, it is usually done either by an L2 agent or an external controller, depending on the MD.

>
> 2. Also, how do we handle the functionality in mech_agent.py, within
> the SR-IOV context?

My guess is that those SR-IOV MechanismDrivers that use an L2 agent would inherit from the AgentMechanismDriverBase class if it provides useful functionality, but any MechanismDriver implementation is free to skip this base class if it's not applicable. I'm not sure whether an SriovMechanismDriverBase (or SriovMechanismDriverMixin) class is being planned, or how it would relate to AgentMechanismDriverBase.

[IrenaB] Agree with Bob, and as I stated before I think there is a need for SriovMechanismDriverBase/Mixin that provides all the generic functionality and helper methods that are common to SRIOV ports.
-Bob
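
A SriovMechanismDriverBase/Mixin of the sort Irena suggests would mostly bundle the vnic_type filtering and vif_details assembly shared by SR-IOV drivers. Everything below (class name, attributes, method, dictionary keys) is invented for illustration, not an actual Neutron API:

```python
class SriovMechanismDriverMixin(object):
    """Hypothetical shared helpers for SR-IOV-capable mechanism drivers."""

    SUPPORTED_VNIC_TYPES = ('direct', 'macvtap')

    def try_bind_sriov_port(self, vnic_type, segment_id, vif_type):
        # Decline ports whose vnic_type this driver cannot handle.
        if vnic_type not in self.SUPPORTED_VNIC_TYPES:
            return None
        # Hand the nova VIF driver what it needs via binding:vif_details.
        return {'vif_type': vif_type,
                'vif_details': {'sriov': True, 'segment_id': segment_id}}

driver = SriovMechanismDriverMixin()
print(driver.try_bind_sriov_port('direct', 101, 'hw_veb'))
print(driver.try_bind_sriov_port('normal', 101, 'hw_veb'))  # None
```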

>
> Thanks,
> Sandhya
>
> From: Sandhya Dasu <sadasu at cisco.com <mailto:sadasu at cisco.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org
> <mailto:openstack-dev at lists.openstack.org>>
> Date: Monday, February 3, 2014 3:14 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org
> <mailto:openstack-dev at lists.openstack.org>>, Irena Berezovsky
> <irenab at mellanox.com <mailto:irenab at mellanox.com>>, "Robert Li (baoli)"
> <baoli at cisco.com <mailto:baoli at cisco.com>>, Robert Kukura
> <rkukura at redhat.com <mailto:rkukura at redhat.com>>, "Brian Bowen
> (brbowen)" <brbowen at cisco.com <mailto:brbowen at cisco.com>>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
> extra hr of discussion today
>
> Hi,
>     Since, openstack-meeting-alt seems to be in use, baoli and myself
> are moving to openstack-meeting. Hopefully, Bob Kukura & Irena can
> join soon.
>
> Thanks,
> Sandhya
>
> From: Sandhya Dasu <sadasu at cisco.com <mailto:sadasu at cisco.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org
> <mailto:openstack-dev at lists.openstack.org>>
> Date: Monday, February 3, 2014 1:26 PM
> To: Irena Berezovsky <irenab at mellanox.com
> <mailto:irenab at mellanox.com>>, "Robert Li (baoli)" <baoli at cisco.com
> <mailto:baoli at cisco.com>>, Robert Kukura <rkukura at redhat.com
> <mailto:rkukura at redhat.com>>, "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org
> <mailto:openstack-dev at lists.openstack.org>>, "Brian Bowen (brbowen)"
> <brbowen at cisco.com <mailto:brbowen at cisco.com>>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
> extra hr of discussion today
>
> Hi all,
>     Both openstack-meeting and openstack-meeting-alt are available
> today. Lets meet at UTC 2000 @ openstack-meeting-alt.
>
> Thanks,
> Sandhya
>
> From: Irena Berezovsky <irenab at mellanox.com
> <mailto:irenab at mellanox.com>>
> Date: Monday, February 3, 2014 12:52 AM
> To: Sandhya Dasu <sadasu at cisco.com <mailto:sadasu at cisco.com>>, "Robert
> Li (baoli)" <baoli at cisco.com <mailto:baoli at cisco.com>>, Robert Kukura
> <rkukura at redhat.com <mailto:rkukura at redhat.com>>, "OpenStack
> Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org
> <mailto:openstack-dev at lists.openstack.org>>, "Brian Bowen (brbowen)"
> <brbowen at cisco.com <mailto:brbowen at cisco.com>>
> Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
> Jan. 30th
>
> Hi Sandhya,
>
> Can you please elaborate on how you suggest extending the below bp for
> SRIOV ports managed by different Mechanism Drivers?
>
> I am not biased toward any specific direction here; I just think we need
> a common layer for managing SRIOV ports in Neutron, since there is a
> common path between Nova and Neutron.
>
>
>
> BR,
>
> Irena
>
>
>
>
>
> *From:*Sandhya Dasu (sadasu) [mailto:sadasu at cisco.com]
> *Sent:* Friday, January 31, 2014 6:46 PM
> *To:* Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack
> Development Mailing List (not for usage questions); Brian Bowen
> (brbowen)
> *Subject:* Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
> on Jan. 30th
>
>
>
> Hi Irena,
>
>       I was initially looking
> at
> https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info to take care of the extra information required to set up the SR-IOV port.
> When the scope of the BP was being decided, we had very little info
> about our own design so I didn't give any feedback about SR-IOV ports.
> But, I feel that this is the direction we should be going. Maybe we
> should target this in Juno.
>
>
>
> Introducing */SRIOVPortProfileMixin/* would create yet another
> way to take care of extra port config. Let me know what you think.
>
>
>
> Thanks,
>
> Sandhya
>
>
>
> *From: *Irena Berezovsky <irenab at mellanox.com
> <mailto:irenab at mellanox.com>>
> *Date: *Thursday, January 30, 2014 4:13 PM
> *To: *"Robert Li (baoli)" <baoli at cisco.com <mailto:baoli at cisco.com>>,
> Robert Kukura <rkukura at redhat.com <mailto:rkukura at redhat.com>>,
> Sandhya Dasu <sadasu at cisco.com <mailto:sadasu at cisco.com>>, "OpenStack
> Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org
> <mailto:openstack-dev at lists.openstack.org>>, "Brian Bowen (brbowen)"
> <brbowen at cisco.com <mailto:brbowen at cisco.com>>
> *Subject: *RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
> on Jan. 30th
>
>
>
> Robert,
>
> Thank you very much for the summary.
>
> Please, see inline
>
>
>
> *From:*Robert Li (baoli) [mailto:baoli at cisco.com]
> *Sent:* Thursday, January 30, 2014 10:45 PM
> *To:* Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky;
> OpenStack Development Mailing List (not for usage questions); Brian
> Bowen (brbowen)
> *Subject:* [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
> Jan. 30th
>
>
>
> Hi,
>
>
>
> We made a lot of progress today. We agreed that:
>
> -- vnic_type will be a top level attribute as binding:vnic_type
>
> -- BPs:
>
>      * Irena's
> https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
> for binding:vnic_type
>
>      * Bob to submit a BP for binding:profile in ML2. SRIOV input info
> will be encapsulated in binding:profile
>
>      * Bob to submit a BP for binding:vif_details in ML2. SRIOV output
> info will be encapsulated in binding:vif_details, which may include
> other information like security parameters. For SRIOV, vlan_id and
> profileid are candidates.
>
> -- new arguments for port-create will be implicit arguments. A future
> release may make them explicit. New argument: --binding:vnic_type
> {virtio, direct, macvtap}.
>
> I think that currently we can make do without the profileid as an
> input parameter from the user. The mechanism driver will return a
> profileid in the vif output.
>
>
>
> Please correct any misstatements in the above.
>
>
>
> Issues:
>
>   -- do we need a common utils/driver for SRIOV generic parts to be
> used by individual Mechanism drivers that support SRIOV? What would be
> included in this SRIOV utils/driver? I'm thinking
> that a candidate would be the helper functions to interpret the
> pci_slot, which is proposed as a string. Anything else in your mind?
>
> */[IrenaB] I thought on some SRIOVPortProfileMixin to handle and
> persist SRIOV port related attributes/*
>
>
>
>   -- what should mechanism drivers put in binding:vif_details, and how
> would nova use this information? As far as I see it from the code, a
> VIF object is created and populated based on information provided by
> neutron (from get network and get port)
>
>
>
> Questions:
>
>   -- nova needs to work with both ML2 and non-ML2 plugins. For regular
> plugins, binding:vnic_type will not be set, I guess. Then would it be
> treated as a virtio type? And if a non-ML2 plugin wants to support
> SRIOV, would it need to implement vnic-type, binding:profile,
> binding:vif-details for SRIOV itself?
>
> */[IrenaB] vnic_type will be added as an additional attribute to
> binding extension. For persistency it should be added in
> PortBindingMixin for non ML2. I didn't think to cover it as part of
> ML2 vnic_type bp./*
>
> */For the rest attributes, need to see what Bob plans./*
>
>
>
>  -- is a neutron agent making decision based on the binding:vif_type?
>  In that case, it makes sense for binding:vnic_type not to be exposed
> to agents.
>
> */[IrenaB] vnic_type is input parameter that will eventually cause
> certain vif_type to be sent to GenericVIFDriver and create network
> interface. Neutron agents periodically scan for attached interfaces.
> For example, OVS agent will look only for OVS interfaces, so if SRIOV
> interface is created, it won't be discovered by OVS agent./*
>
>
>
> Thanks,
>
> Robert
>




------------------------------

Message: 4
Date: Wed, 05 Feb 2014 02:16:14 -0500
From: Robert Kukura <rkukura at redhat.com>
To: OpenStack Development Mailing List
        <openstack-dev at lists.openstack.org>, Kyle Mestery
        <mestery at mestery.com>, "Rich Curran (rcurran)" <rcurran at cisco.com>,
        Sukhdev Kapur <sukhdev at aristanetworks.com>, Rohon Mathieu
        <mathieu.rohon at gmail.com>
Subject: [openstack-dev] [neutron][ml2] Port binding information,
        transactions, and concurrency
Message-ID: <52F1E53E.5040800 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1

A couple of interrelated issues with the ML2 plugin's port binding have
been discussed over the past several months in the weekly ML2 meetings.
These affect drivers being implemented for icehouse, and therefore need
to be addressed in icehouse:

* MechanismDrivers need detailed information about all binding changes,
including unbinding on port deletion
(https://bugs.launchpad.net/neutron/+bug/1276395)
* MechanismDrivers' bind_port() methods are currently called inside
transactions, but in some cases need to make remote calls to controllers
or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
* Semantics of concurrent port binding need to be defined if binding is
moved outside the triggering transaction.

I've taken the action of writing up a unified proposal for resolving
these issues, which follows...

1) An original_bound_segment property will be added to PortContext. When
the MechanismDriver update_port_precommit() and update_port_postcommit()
methods are called and a binding previously existed (whether it's being
torn down or not), this property will provide access to the network
segment used by the old binding. In these same cases, the portbinding
extension attributes (such as binding:vif_type) for the old binding will
be available via the PortContext.original property. It may be helpful to
also add bound_driver and original_bound_driver properties to
PortContext that behave similarly to bound_segment and
original_bound_segment.

2) The MechanismDriver.bind_port() method will no longer be called from
within a transaction. This will allow drivers to make remote calls on
controllers or devices from within this method without holding a DB
transaction open during those calls. Drivers can manage their own
transactions within bind_port() if needed, but need to be aware that
these are independent from the transaction that triggered binding, and
concurrent changes to the port could be occurring.

3) Binding will only occur after the transaction that triggers it has
been completely processed and committed. That initial transaction will
unbind the port if necessary. Four cases for the initial transaction are
possible:

3a) In a port create operation, whether the binding:host_id is supplied
or not, all drivers' port_create_precommit() methods will be called, the
initial transaction will be committed, and all drivers'
port_create_postcommit() methods will be called. The drivers will see
this as creation of a new unbound port, with PortContext properties as
shown. If a value for binding:host_id was supplied, binding will occur
afterwards as described in 4 below.

PortContext.original: None
PortContext.original_bound_segment: None
PortContext.original_bound_driver: None
PortContext.current['binding:host_id']: supplied value or None
PortContext.current['binding:vif_type']: 'unbound'
PortContext.bound_segment: None
PortContext.bound_driver: None

3b) Similarly, in a port update operation on a previously unbound port,
all drivers' port_update_precommit() and port_update_postcommit()
methods will be called, with PortContext properties as shown. If a value
for binding:host_id was supplied, binding will occur afterwards as
described in 4 below.

PortContext.original['binding:host_id']: previous value or None
PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
PortContext.original_bound_segment: None
PortContext.original_bound_driver: None
PortContext.current['binding:host_id']: current value or None
PortContext.current['binding:vif_type']: 'unbound'
PortContext.bound_segment: None
PortContext.bound_driver: None

3c) In a port update operation on a previously bound port that does not
trigger unbinding or rebinding, all drivers' update_port_precommit() and
update_port_postcommit() methods will be called with PortContext
properties reflecting unchanged binding states as shown.

PortContext.original['binding:host_id']: previous value
PortContext.original['binding:vif_type']: previous value
PortContext.original_bound_segment: previous value
PortContext.original_bound_driver: previous value
PortContext.current['binding:host_id']: previous value
PortContext.current['binding:vif_type']: previous value
PortContext.bound_segment: previous value
PortContext.bound_driver: previous value

3d) In a port update operation on a previously bound port that does
trigger unbinding or rebinding, all drivers' update_port_precommit() and
update_port_postcommit() methods will be called with PortContext
properties reflecting the previously bound and currently unbound binding
states as shown. If a value for binding:host_id was supplied, binding
will occur afterwards as described in 4 below.

PortContext.original['binding:host_id']: previous value
PortContext.original['binding:vif_type']: previous value
PortContext.original_bound_segment: previous value
PortContext.original_bound_driver: previous value
PortContext.current['binding:host_id']: new or current value
PortContext.current['binding:vif_type']: 'unbound'
PortContext.bound_segment: None
PortContext.bound_driver: None
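
A mechanism driver can distinguish these update cases by comparing the
original and current bound segments in its postcommit hook. A minimal
illustrative sketch (not actual ML2 code; `ctx` stands for the proposed
PortContext, and the helper name is hypothetical):

```python
# Illustrative sketch only: classify which of the update cases above
# applies, based on the proposed PortContext segment properties.

def classify_port_update(ctx):
    """Map the original/current bound segments to cases 3b-3d."""
    old = ctx.original_bound_segment
    new = ctx.bound_segment
    if old is None and new is None:
        return "3b: update of an unbound port"
    if old is not None and new is not None:
        return "3c: bound port, binding unchanged"
    if old is not None and new is None:
        return "3d: binding torn down (rebind may follow)"
    return "4a: new binding committed"
```

With this, a driver that only cares about teardown can simply check for
a non-None original_bound_segment together with a None bound_segment.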

4) If a port create or update operation triggers binding or rebinding,
it is attempted after the initial transaction is processed and committed
as described in 3 above. The binding process itself is just as before,
except it happens after and outside the transaction. Since binding now
occurs outside the transaction, it's possible that multiple threads or
processes could concurrently attempt to bind the same port, although
this should be a rare occurrence. Rather than trying to prevent this
with some sort of distributed lock or complicated state machine,
concurrent attempts to bind are allowed to proceed in parallel. When a
thread completes its attempt to bind (either successfully or
unsuccessfully) it then performs a second transaction to update the DB
with the result of its binding attempt. When doing so, it checks to see
if some other thread has already committed relevant changes to the port
between the two transactions. There are three possible cases:

4a) If the thread's binding attempt succeeded, and no other thread has
committed either a new binding or changes that invalidate this thread's
new binding between the two transactions, the thread commits its own
binding results, calling all drivers' update_port_precommit() and
update_port_postcommit() methods with PortContext properties reflecting
the new binding as shown. It then returns the updated port dictionary to
the caller.

PortContext.original['binding:host_id']: previous value
PortContext.original['binding:vif_type']: 'unbound'
PortContext.original_bound_segment: None
PortContext.original_bound_driver: None
PortContext.current['binding:host_id']: previous value
PortContext.current['binding:vif_type']: new value
PortContext.bound_segment: new value
PortContext.bound_driver: new value

4b) If the thread's binding attempt either succeeded or failed, but some
other thread has committed a new successful binding between the two
transactions, the thread returns a port dictionary with attributes based
on the DB state from the new transaction, including the other thread's
binding and any other port state changes. No further calls to mechanism
drivers are needed here since they are the responsibility of the other
thread that bound the port.

4c) If some other thread committed changes to the port's
binding-relevant state but has not committed a successful binding, then
this thread attempts to bind again using that updated state, repeating 4.
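
The logic in 4a-4c amounts to an optimistic-concurrency loop: bind
outside any transaction, then commit only if no competing thread changed
the relevant state in the meantime. A hedged sketch, with all names
(`db`, `driver`, and their methods) as hypothetical stand-ins rather
than actual Neutron code:

```python
# Sketch of optimistic, lock-free port binding as described above.
# `db` and `driver` are hypothetical stand-ins for the ML2 DB layer
# and a mechanism driver; this is not actual Neutron code.

def bind_port_with_retry(port_id, db, driver, max_attempts=10):
    """Bind outside any transaction, then commit the result only if no
    concurrent thread changed the port's binding-relevant state."""
    for _ in range(max_attempts):
        snapshot = db.read_binding_state(port_id)  # state the bind is based on
        result = driver.bind(snapshot)             # remote calls are safe here
        with db.transaction():
            current = db.read_binding_state(port_id)
            if current.has_successful_binding():
                # Case 4b: another thread already bound the port; just
                # return its result, no further driver calls needed.
                return db.get_port(port_id)
            if current == snapshot:
                # Case 4a: nothing relevant changed; commit our binding.
                db.commit_binding(port_id, result)
                return db.get_port(port_id)
        # Case 4c: relevant state changed without a successful binding;
        # retry the bind against the updated state.
    raise RuntimeError("giving up after repeated concurrent bind attempts")
```

The design choice here is to let concurrent binders race and detect the
conflict at commit time, which avoids any distributed lock.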

5) Port deletion no longer does anything special to unbind the port. All
drivers' delete_port_precommit() and delete_port_postcommit() methods
are called with PortContext properties reflecting the binding state
before deletion as shown.

PortContext.original: None
PortContext.original_bound_segment: None
PortContext.original_bound_driver: None
PortContext.current['binding:host_id']: previous value or None
PortContext.current['binding:vif_type']: previous value
PortContext.bound_segment: previous value
PortContext.bound_driver: previous value

6) In order to ensure successful bindings are created and returned
whenever possible, the get port and get ports operations also attempt to
bind the port as in 4 above when binding:host_id is available but there
is no existing successful binding in the DB.

7) We can either eliminate MechanismDriver.unbind_port(), or call it on
the previously bound driver within the transaction in 3d and 5 above. If
we do keep it, the old binding state must be consistently reflected in
the PortContext as either current or original state, TBD. Since all
drivers see unbinding as a port update where current_bound_segment is
None and original_bound_segment is not None, calling unbind_port() seems
redundant.

8) If bindings shouldn't spontaneously become invalid, maybe we can
eliminate MechanismDriver.validate_bound_port().


I've provided a lot of details, and the above may seem complicated. But
I think it's actually much more consistent and predictable than the
current port binding code, and implementation should be straightforward.

-Bob



------------------------------

Message: 5
Date: Wed, 5 Feb 2014 08:50:57 +0100
From: Mehdi Abaakouk <sileht at sileht.net>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests
        broke because of a nova patch
Message-ID: <20140205075055.GA31902 at sileht.net>
Content-Type: text/plain; charset="us-ascii"

On Tue, Feb 04, 2014 at 01:11:10PM -0800, Dan Smith wrote:
> >> Whats the underlying problem here? nova notifications aren't
> >> versioned?  Nova should try to support ceilometer's use case so
> >> it sounds like there is may be a nova issue in here as well.
> >
> > Oh you're far from it.
> >
> > Long story short, the problem is that when an instance is destroyed,
> > we need to poll one last time for its CPU, IO, etc statistics to
> > send them to Ceilometer. The only way we found to do that in Nova
> > is to plug a special notification driver that intercepts the
> > deletion notification in Nova, run the pollsters, and then returns
> > to Nova execution.
>
> Doesn't this just mean that Nova needs to do an extra poll and send an
> extra notification? Using a special notification driver, catching the
> delete notification, and polling one last time seems extremely fragile
> to me. It makes assumptions about the order things happen internally
> to nova, right?
>
> What can be done to make Ceilometer less of a bolt-on? That seems like
> the thing worth spending time discussing...

We don't have to add a new notification, but we do have to add some new
data to the nova notifications, at least to the delete instance
notification, so that the ceilometer nova notifier can be removed.

A while ago, I registered a blueprint that explains which data are
missing from the current nova notifications:

https://blueprints.launchpad.net/nova/+spec/usage-data-in-notification
https://wiki.openstack.org/wiki/Ceilometer/blueprints/remove-ceilometer-nova-notifier

Regards,
--
Mehdi Abaakouk
mail: sileht at sileht.net
irc: sileht
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: Digital signature
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/7c8b86a1/attachment-0001.pgp>

------------------------------

Message: 6
Date: Wed, 5 Feb 2014 00:27:13 -0800
From: Chris Behrens <cbehrens at codestud.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical
        Multitenancy Discussion
Message-ID: <53D9EA30-F921-4407-B951-8111FF6C814B at codestud.com>
Content-Type: text/plain; charset=windows-1252


Hi Vish,

I'm jumping in slightly late on this, but I also have an interest in this. I'm going to preface this by saying that I have not read this whole thread yet, so I apologize if I repeat things, say anything that is addressed by previous posts, or doesn't jibe with what you're looking for. :) But what you describe below sounds like exactly a use case I'd come up with.

Essentially I want another level above project_id. Depending on the exact use case, you could name it "wholesale_id" or "reseller_id"... and yeah, "org_id" fits in with your example. :) I think that I had decided I'd call it "domain" to be more generic, especially after seeing keystone had a domain concept.

Your idea below (prefixing the project_id) is exactly one way I thought of doing this to be least intrusive. I, however, thought that this would not be efficient. So, I was thinking about proposing that we add "domain" to all of our models. But that limits your hierarchy and I don't necessarily like that. :) So I think that if the queries are truly indexed as you say below, you have a pretty good approach. The one issue that comes to mind is whether there is any chance of collision. For example, if project ids (or orgs) could contain a ".", then "." as a delimiter won't work.

My requirements could be summed up pretty well by thinking of this as "virtual clouds within a cloud": deploy a single cloud infrastructure that could look like many multiple clouds. "domain" would be the key into each different virtual cloud. Accessing one virtual cloud doesn't reveal any details about another virtual cloud.

What this means is:

1) domain "a" cannot see instances (or resources in general) in domain "b". It doesn't matter if domain "a" and domain "b" share the same tenant ID. If you act with the API on behalf of domain "a", you cannot see your instances in domain "b".
2) Flavors per domain. domain "a" can have different flavors than domain "b".
3) Images per domain. domain "a" could see different images than domain "b".
4) Quotas and quota limits per domain. Your instances in domain "a" don't count against quotas in domain "b".
5) Go as far as using different config values depending on what domain you're using. This one is fun. :)

etc.

I'm not sure if you were looking to go that far or not. :) But I think that our ideas are close enough, if not exact, that we can achieve both of our goals with the same implementation.

I'd love to be involved with this. I am not sure that I currently have the time to help with implementation, however.

- Chris



On Feb 3, 2014, at 1:58 PM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:

> Hello Again!
>
> At the meeting last week we discussed some options around getting true multitenancy in nova. The use case that we are trying to support can be described as follows:
>
> "Martha, the owner of ProductionIT provides it services to multiple Enterprise clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has multiple QA and Development teams with many users. Joe needs the ability create users, projects, and quotas, as well as the ability to list and delete resources across WidgetMaster. Martha needs to be able to set the quotas for both WidgetMaster and SuperDevShop; manage users, projects, and objects across the entire system; and set quotas for the client companies as a whole. She also needs to ensure that Joe can't see or mess with anything owned by Sam."
>
> As per the plan I outlined in the meeting I have implemented a Proof-of-Concept that would allow me to see what changes were required in nova to get scoped tenancy working. I used a simple approach of faking out hierarchy by prepending the id of the larger scope to the id of the smaller scope. Keystone uses uuids internally, but for ease of explanation I will pretend like it is using the name. I think we can all agree that "orga.projecta" is more readable than "b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8".
>
> The code basically creates the following five projects:
>
> orga
> orga.projecta
> orga.projectb
> orgb
> orgb.projecta
>
> I then modified nova so that everywhere it searches or limits policy by project_id, it does a prefix match instead. This means that someone using project "orga" should be able to list/delete instances in orga, orga.projecta, and orga.projectb.
>
> You can find the code here:
>
>  https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
>  https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f
>
> Keep in mind that this is a prototype, but I'm hoping to come to some kind of consensus as to whether this is a reasonable approach. I've compiled a list of pros and cons.
>
> Pros:
>
>  * Very easy to understand
>  * Minimal changes to nova
>  * Good performance in db (prefix matching uses indexes)
>  * Could be extended to cover more complex scenarios like multiple owners or multiple scopes
>
> Cons:
>
>  * Nova has no map of the hierarchy
>  * Moving projects would require updates to ownership inside of nova
>  * Complex scenarios involving delegation of roles may be a bad fit
>  * Database upgrade to hierarchy could be tricky
>
> If this seems like a reasonable set of tradeoffs, there are a few things that need to be done inside of nova to bring this to a complete solution:
>
>  * Prefix matching needs to go into oslo.policy
>  * Should the tenant_id returned by the api reflect the full "orga.projecta", or just the child "projecta", or match the scope: i.e. the first if you are authenticated to orga and the second if you are authenticated to the project?
>  * Possible migrations for existing project_id fields
>  * Use a different field for passing ownership scope instead of overloading project_id
>  * Figure out how nested quotas should work
>  * Look for other bugs relating to scoping
>
> Also, we need to decide how keystone should construct and pass this information to the services. The obvious case that could be supported today would be to allow a single level of hierarchy using domains. For example, if domains are active, keystone could pass domain.project_id for ownership_scope. This could be controversial because potentially domains are just for grouping users and shouldn't be applied to projects.
>
> I think the real value of this approach would be to allow nested projects with role inheritance. When keystone is creating the token, it could walk the tree of parent projects, construct the set of roles, and construct the ownership_scope as it walks to the root of the tree.
>
> Finally, similar fixes will need to be made in the other projects to bring this to a complete solution.
>
> Please feel free to respond with any input, and we will be having another Hierarchical Multitenancy Meeting on Friday at 1600 UTC to discuss.
>
> Vish
>
> On Jan 28, 2014, at 10:35 AM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
>
>> Hi Everyone,
>>
>> I apologize for the obtuse title, but there isn't a better succinct term to describe what is needed. OpenStack has no support for multiple owners of objects. This means that a variety of private cloud use cases are simply not supported. Specifically, objects in the system can only be managed on the tenant level or globally.
>>
>> The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a "domain" which supports part of this functionality, but without support from all of the projects, this concept is pretty useless.
>>
>> In IRC today I had a brief discussion about how we could address this. I have put some details and a straw man up here:
>>
>> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>>
>> I would like to discuss this strawman and organize a group of people to get actual work done by having an irc meeting this Friday at 1600UTC. I know this time is probably a bit tough for Europe, so if we decide we need a regular meeting to discuss progress then we can vote on a better time for this meeting.
>>
>> https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
>>
>> Please note that this is going to be an active team that produces code. We will *NOT* spend a lot of time debating approaches, and instead focus on making something that works and learning as we go. The output of this team will be a MultiTenant devstack install that actually works, so that we can ensure the features we are adding to each project work together.
>>
>> Vish
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




------------------------------

Message: 7
Date: Wed, 5 Feb 2014 09:32:03 +0000
From: Lucas Alvares Gomes <lucasagomes at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Ironic] January review redux
Message-ID:
        <CAB1EZBq557d8tR6i8hPb7VXpf0+SBQtOapqx1tQTbSZ4gmdP2g at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

>
> So, I'd like to nominate the following two additions to the ironic-core
> team:
>
> Max Lobur
>
> https://review.openstack.org/#/q/reviewer:mlobur%2540mirantis.com+project:openstack/ironic,n,z
>
> Roman Prykhodchenko
>
> https://review.openstack.org/#/q/reviewer:rprikhodchenko%2540mirantis.com+project:openstack/ironic,n,z
>

Awesome people! +1 for both :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/4457bcb4/attachment-0001.html>

------------------------------

Message: 8
Date: Wed, 05 Feb 2014 10:35:38 +0100
From: Julien Danjou <julien at danjou.info>
To: Joe Gordon <joe.gordon0 at gmail.com>
Cc: OpenStack Development Mailing List
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests
        broke   because of a nova patch
Message-ID: <m2zjm5c36d.fsf at danjou.info>
Content-Type: text/plain; charset="utf-8"

On Tue, Feb 04 2014, Joe Gordon wrote:

> Ceilometer running a plugin in nova is bad (for all the reasons
> previously discussed),

Well, I partially disagree. Are you saying that nobody is allowed to run
a plugin in Nova? So what are these plugins in the first place?
Or if you're saying that Ceilometer cannot have plugins in Nova, I would
like to know why.

What is wrong, I agree, is that we have to use and mock nova internals
to test our plugins. OTOH, anyone writing a plugin for Nova will have the
same issue. To what extent this is a problem with the plugin system,
I'll let everybody think about it. :)

> So what can nova do to help this?  It sounds like you have a valid use
> case that nova should support without requiring a plugin.

We just need the possibility to run some code before an instance is
deleted, in a synchronous manner, i.e. our code needs to be fully
executed before Nova can actually destroy the VM.

--
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 832 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/90b0fdf0/attachment-0001.pgp>

------------------------------

Message: 9
Date: Wed, 5 Feb 2014 22:39:05 +1300
From: Robert Collins <robertc at robertcollins.net>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [TripleO] [Ironic] mid-cycle meetup?
Message-ID:
        <CAJ3HoZ0TiLMa98XtHoOF4vvX9AffZjTODBmWDmAmmpn72GoDug at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On 5 February 2014 11:02, Mark Washenberger
<mark.washenberger at markwash.net> wrote:
> I'd like to attend as well, since it is close for me and some upcoming
> Glance efforts might be relevant. But I'm definitely more of a "chicken"
> than a "pig" for this gathering so let me know if that kind of participation
> is not really desired.
>
> [1] http://en.wikipedia.org/wiki/The_Chicken_and_the_Pig

The more the merrier. As for the TripleO mid-cycle thing, Cody and I are
coordinating, so please let him and me know if you are coming.

Be warned, we *will* be drawing on whiteboards things about glance and
thousands-of-deploys-at-once...

-Rob

--
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud



------------------------------

Message: 10
Date: Wed, 5 Feb 2014 14:06:07 +0400
From: Boris Pavlovic <bpavlovic at mirantis.com>
To: OpenStack Development Mailing List
        <openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [rally] Proposing changes in Rally core team
Message-ID:
        <CAD85om2JDnKom7gdY9d7m5dHncGQ3AD1p2j54VU8k=6CdvyS7A at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Hi stackers,

I would like to:

1) Nominate Hugh Saunders to Rally core, he is doing a lot of good reviews
(and always testing patches=) ):
http://stackalytics.com/report/reviews/rally/30

2) Remove Alexei from the core team, because unfortunately he is not able
to work on Rally at this moment. Thank you, Alexei, for all the work you
have done.


Thoughts?


Best regards,
Boris Pavlovic
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/2657d477/attachment-0001.html>

------------------------------

Message: 11
Date: Wed, 05 Feb 2014 12:08:32 +0200
From: Yuriy Zveryanskyy <yzveryanskyy at mirantis.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] January review redux
Message-ID: <52F20DA0.3050809 at mirantis.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 02/04/2014 09:42 PM, Devananda van der Veen wrote:

So, I'd like to nominate the following two additions to the ironic-core
team:

Max Lobur
https://review.openstack.org/#/q/reviewer:mlobur%2540mirantis.com+project:openstack/ironic,n,z

Roman Prykhodchenko
https://review.openstack.org/#/q/reviewer:rprikhodchenko%2540mirantis.com+project:openstack/ironic,n,z


+1 for both




------------------------------

Message: 12
Date: Wed, 05 Feb 2014 11:19:17 +0100
From: Tomas Sedovic <tsedovic at redhat.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure
        Management UI - Icehouse scoped wireframes
Message-ID: <52F21025.10500 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 05/02/14 03:58, Jaromir Coufal wrote:
> Hi to everybody,
>
> based on the feedback from last week [0] I incorporated changes in the
> wireframes so that we keep them up to date with latest decisions:
>
> http://people.redhat.com/~jcoufal/openstack/tripleo/2014-02-05_tripleo-ui-icehouse.pdf
>
>
> Changes:
> * Smaller layout change in Nodes Registration (no rush for update)
> * Unifying views for 'deploying' and 'deployed' states of the page for
> deployment detail
> * Improved workflow for associating node profiles with roles
>     - showing final state of MVP
>     - first iteration contains only last row (no node definition link)

Hey Jarda,

Looking good. I've got two questions:

1. Are we doing node tags (page 4) for the first iteration? Where are
they going to live?

2. There are multiple node profiles per role on pages 11, 12, 17. Is
that just an oversight or do you intend to keep those in? I thought
the consensus was to do 1 node profile per deployment role.

Thanks,
Tomas


>
> -- Jarda
>
> [0] https://www.youtube.com/watch?v=y2fv6vebFhM
>
>
> On 2014/16/01 01:50, Jaromir Coufal wrote:
>> Hi folks,
>>
>> thanks everybody for feedback. Based on that I updated wireframes and
>> tried to provide a minimum scope for Icehouse timeframe.
>>
>> http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-16_tripleo-ui-icehouse.pdf
>>
>>
>>
>> Hopefully we are able to deliver described set of features. But if you
>> find something what is missing which is critical for the first release
>> (or that we are implementing a feature which should not have such high
>> priority), please speak up now.
>>
>> The wireframes are very close to implementation. In time, there will
>> appear more views and we will see if we can get them in as well.
>>
>> Thanks all for participation
>> -- Jarda
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>




------------------------------

Message: 13
Date: Wed, 05 Feb 2014 12:22:56 +0200
From: "Sergey Skripnick" <sskripnick at mirantis.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [rally] Proposing changes in Rally core
        team
Message-ID: <op.xas0gidu2lol4w at ep>
Content-Type: text/plain; charset="utf-8"; Format="flowed";
        DelSp="yes"


+1 for Hugh, but IMO no need to rush with Alexei's removal

> Hi stackers,
> I would like to:
>
> 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good
> reviews (and always testing patches=) ):
> http://stackalytics.com/report/reviews/rally/30
>
> 2) Remove Alexei from core team, because unfortunately he is not able to
> work on Rally at this moment. Thank you Alexei for all work that you
> have done.
>
> Thoughts?
>
> Best regards,
> Boris Pavlovic

--
Regards,
Sergey Skripnick
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/46d1a9bd/attachment-0001.html>

------------------------------

Message: 14
Date: Wed, 05 Feb 2014 11:31:55 +0100
From: Thierry Carrez <thierry at openstack.org>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] backporting database migrations
        to stable/havana
Message-ID: <52F2131B.6080901 at openstack.org>
Content-Type: text/plain; charset=ISO-8859-1

Ralf Haferkamp wrote:
> I am currently trying to backport the fix for
> https://launchpad.net/bugs/1254246 to stable/havana. The current state of that
> is here: https://review.openstack.org/#/c/68929/
>
> However, the fix requires a database migration to be applied (to add a unique
> constraint to the agents table). And the current fix linked above will AFAIK
> break havana->icehouse migrations. So I wonder what would be the correct way to
> do backport database migrations in neutron using alembic? Is there even a
> correct way, or are backports of database migrations a no go?

FWIW our StableBranch policy[1] generally forbids DB schema changes in
stable branches.

[1] https://wiki.openstack.org/wiki/StableBranch

--
Thierry Carrez (ttx)



------------------------------

Message: 15
Date: Wed, 5 Feb 2014 10:38:05 +0000 (UTC)
From: Florent Flament <florent.flament-ext at cloudwatt.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical
        Multitenancy    Discussion
Message-ID:
        <502188337.12431829.1391596685051.JavaMail.root at cloudwatt.com>
Content-Type: text/plain; charset=utf-8

Hi Vish,

Your approach looks very interesting. I especially like the idea of 'walking the tree of parent projects, to construct the set of roles'.

Here are some issues that came to my mind:


Regarding policy rules enforcement:

Considering the following projects:
* orga
* orga.projecta
* orga.projectb

Let's assume that Joe has the following roles:
* `Member` of `orga`
* `admin` of `orga.projectb`

Now Joe wishes to launch a VM on `orga.projecta` and grant a role to some user on `orga.projectb` (for which he has the rights). He would like to be able to do all of this with the same token (scoped on project `orga`?).

For this scenario to be working, we would need to be able to store multiple roles (a tree of roles?) in the token, so that services would know which role is granted to the user on which project.

As a first step, I guess we could stay with roles scoped to a single project. Joe would be able to do what he wants by getting a first token on `orga` or `orga.projecta` with a `Member` role, then a second token on `orga.projectb` with the `admin` role.


Considering quotas enforcement:

Let's say we want to set the following limits:

* `orga` : max 10 VMs
* `orga.projecta` : max 8 VMs
* `orga.projectb` : max 8 VMs

The idea would be that the `admin` of `orga` wishes to allow 8 VMs to projects `orga.projecta` and `orga.projectb`, but doesn't care how these VMs are spread, though he wishes to keep 2 VMs in `orga` for himself.

Then to be able to enforce these quotas, Nova (and all other services) would have to keep track of the tree of quotas, and update the appropriate nodes.
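To make that tree bookkeeping concrete, here is a minimal sketch of the check a service would have to run before booting a VM. All names and data structures here are illustrative assumptions, not Nova's actual quota-driver API:

```python
# Hypothetical sketch of hierarchical quota enforcement: before creating
# a VM in a project, walk up the project tree and verify that every
# ancestor still has headroom. Illustrative only, not Nova code.

# A limit of None would mean "unlimited at this level".
LIMITS = {"orga": 10, "orga.projecta": 8, "orga.projectb": 8}
USAGE = {"orga": 2, "orga.projecta": 5, "orga.projectb": 1}

def ancestors(project_id):
    """Yield the project and each of its parents, e.g.
    'orga.projecta' -> 'orga.projecta', 'orga'."""
    parts = project_id.split(".")
    for i in range(len(parts), 0, -1):
        yield ".".join(parts[:i])

def usage_subtree(project_id):
    """Total VMs used in a project and all of its descendants."""
    prefix = project_id + "."
    return sum(u for p, u in USAGE.items()
               if p == project_id or p.startswith(prefix))

def can_create_vm(project_id):
    """True only if every node from the project up to the root
    has headroom for one more VM."""
    for node in ancestors(project_id):
        limit = LIMITS.get(node)
        if limit is not None and usage_subtree(node) + 1 > limit:
            return False
    return True
```

With the numbers above, `orga.projecta` can still boot a VM (6 of 8 in the project, 9 of 10 in the whole `orga` subtree), which shows why each service would need the full tree, not just the leaf project's counters.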


By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and Quotas logic in a unique service (Keystone?). Openstack services (Nova, Cinder, ...) would just have to ask this centralized access management service whether an action is authorized for a given token?

Florent Flament



----- Original Message -----
From: "Vishvananda Ishaya" <vishvananda at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Sent: Monday, February 3, 2014 10:58:28 PM
Subject: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy       Discussion

Hello Again!

At the meeting last week we discussed some options around getting true multitenancy in nova. The use case that we are trying to support can be described as follows:

"Martha, the owner of ProductionIT, provides its services to multiple enterprise clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has multiple QA and Development teams with many users. Joe needs the ability to create users, projects, and quotas, as well as the ability to list and delete resources across WidgetMaster. Martha needs to be able to set the quotas for both WidgetMaster and SuperDevShop; manage users, projects, and objects across the entire system; and set quotas for the client companies as a whole. She also needs to ensure that Joe can't see or mess with anything owned by Sam."

As per the plan I outlined in the meeting, I have implemented a Proof-of-Concept that would allow me to see what changes were required in nova to get scoped tenancy working. I used a simple approach of faking out hierarchy by prepending the id of the larger scope to the id of the smaller scope. Keystone uses uuids internally, but for ease of explanation I will pretend like it is using the name. I think we can all agree that "orga.projecta" is more readable than "b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8".

The code basically creates the following five projects:

orga
orga.projecta
orga.projectb
orgb
orgb.projecta

I then modified nova to replace everywhere where it searches or limits policy by project_id to do a prefix match. This means that someone using project "orga" should be able to list/delete instances in orga, orga.projecta, and orga.projectb.

You can find the code here:

  https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
  https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f
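For illustration, the prefix-match ownership rule can be sketched like this (hypothetical helper names and toy data, not the actual patch):

```python
# Sketch of the ownership rule described above: an authenticated scope
# owns a resource if the resource's project_id equals the scope, or sits
# underneath it in the dotted hierarchy. Illustrative only.

def owns(scope, project_id):
    # The trailing "." guards against "orga" matching "orgabc".
    return project_id == scope or project_id.startswith(scope + ".")

# Toy instance table: instance -> owning project
INSTANCES = {
    "vm1": "orga",
    "vm2": "orga.projecta",
    "vm3": "orga.projectb",
    "vm4": "orgb.projecta",
}

def list_instances(scope):
    """Instances visible to someone authenticated to `scope`."""
    return sorted(vm for vm, proj in INSTANCES.items()
                  if owns(scope, proj))
```

On the database side this becomes a prefix `LIKE` filter on project_id, which is what the "prefix matching uses indexes" point in the pros list below is getting at.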

Keep in mind that this is a prototype, but I'm hoping to come to some kind of consensus as to whether this is a reasonable approach. I've compiled a list of pros and cons.

Pros:

  * Very easy to understand
  * Minimal changes to nova
  * Good performance in db (prefix matching uses indexes)
  * Could be extended to cover more complex scenarios like multiple owners or multiple scopes

Cons:

  * Nova has no map of the hierarchy
  * Moving projects would require updates to ownership inside of nova
  * Complex scenarios involving delegation of roles may be a bad fit
  * Database upgrade to hierarchy could be tricky

If this seems like a reasonable set of tradeoffs, there are a few things that need to be done inside of nova to bring this to a complete solution:

  * Prefix matching needs to go into oslo.policy
  * Should the tenant_id returned by the api reflect the full "orga.projecta", or just the child "projecta", or match the scope: i.e. the first if you are authenticated to orga and the second if you are authenticated to the project?
  * Possible migrations for existing project_id fields
  * Use a different field for passing ownership scope instead of overloading project_id
  * Figure out how nested quotas should work
  * Look for other bugs relating to scoping

Also, we need to decide how keystone should construct and pass this information to the services. The obvious case that could be supported today would be to allow a single level of hierarchy using domains. For example, if domains are active, keystone could pass domain.project_id for ownership_scope. This could be controversial because potentially domains are just for grouping users and shouldn't be applied to projects.

I think the real value of this approach would be to allow nested projects with role inheritance. When keystone is creating the token, it could walk the tree of parent projects, construct the set of roles, and construct the ownership_scope as it walks to the root of the tree.
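That token-time walk can be sketched as follows, using Joe's role assignments from the example upthread (hypothetical data layout, not Keystone code):

```python
# Hypothetical sketch: when issuing a token scoped to a project, walk
# from that project to the root of the tree, accumulating any roles the
# user holds on each ancestor (i.e. roles inherit downward).

# (user, project) -> set of role names; toy data from the thread.
ASSIGNMENTS = {
    ("joe", "orga"): {"Member"},
    ("joe", "orga.projectb"): {"admin"},
}

def parent(project_id):
    """'orga.projectb' -> 'orga'; 'orga' -> None (root)."""
    head, sep, _ = project_id.rpartition(".")
    return head if sep else None

def effective_roles(user, project_id):
    """Union of roles granted on the project and all its ancestors."""
    roles = set()
    node = project_id
    while node is not None:
        roles |= ASSIGNMENTS.get((user, node), set())
        node = parent(node)
    return roles
```

So a token scoped to `orga.projectb` would carry both `admin` (direct) and `Member` (inherited from `orga`), while one scoped to `orga.projecta` would carry only `Member`.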

Finally, similar fixes will need to be made in the other projects to bring this to a complete solution.

Please feel free to respond with any input, and we will be having another Hierarchical Multitenancy Meeting on Friday at 1600 UTC to discuss.

Vish

On Jan 28, 2014, at 10:35 AM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:

> Hi Everyone,
>
> I apologize for the obtuse title, but there isn't a better succinct term to describe what is needed. OpenStack has no support for multiple owners of objects. This means that a variety of private cloud use cases are simply not supported. Specifically, objects in the system can only be managed on the tenant level or globally.
>
> The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a "domain" which supports part of this functionality, but without support from all of the projects, this concept is pretty useless.
>
> In IRC today I had a brief discussion about how we could address this. I have put some details and a straw man up here:
>
> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>
> I would like to discuss this strawman and organize a group of people to get actual work done by having an irc meeting this Friday at 1600UTC. I know this time is probably a bit tough for Europe, so if we decide we need a regular meeting to discuss progress then we can vote on a better time for this meeting.
>
> https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
>
> Please note that this is going to be an active team that produces code. We will *NOT* spend a lot of time debating approaches, and instead focus on making something that works and learning as we go. The output of this team will be a MultiTenant devstack install that actually works, so that we can ensure the features we are adding to each project work together.
>
> Vish


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



------------------------------

Message: 16
Date: Wed, 5 Feb 2014 10:42:57 +0000
From: John Garbutt <john at johngarbutt.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Nova] os-migrateLive not working with
        neutron in Havana (or apparently Grizzly)
Message-ID:
        <CABib2_pUVquRoXJQCax4jQyeu5O_t9go-yEJX7JELbX2o2ecpw at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On 4 February 2014 19:16, Jonathan Proulx <jon at jonproulx.com> wrote:
> HI all,
>
> Trying to get a little love on bug https://bugs.launchpad.net/nova/+bug/1227836
>
> Short version is the instance migrates, but there's an RPC time out
> that keeps nova thinking it's still on the old node mid-migration.
> Informal survey of operators seems to suggest this always happens when
> using neutron networking and never when using nova-networking (for
> small values of always and never)
>
> Feels like I could kludge in a longer timeout somewhere and it would
> work for now, so I'm sifting through unfamiliar code trying to find
> that and hoping someone here just knows where it is and can make my
> week a whole lot better by pointing it out.

Seems like it is this call that times out:
https://github.com/openstack/nova/blob/master/nova/conductor/rpcapi.py#L428
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4283

And because there is no wrapper on this manager call method, it
remains in the "Migrating" task state:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4192
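(Editorial aside, hedged: for the quick kludge Jonathan asked about, the Havana-era option governing how long the caller waits for an RPC reply is, as far as I know, `rpc_response_timeout`; verify the option name against your deployed oslo.rpc version before relying on it.)

```ini
# nova.conf on the nodes involved in the migration -- assumed option
# name (Havana-era oslo.rpc); the default is 60 seconds.
[DEFAULT]
rpc_response_timeout = 600
```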

> Better less kludgy solutions also welcomed, but I need a kernel update
> on all my compute nodes so quick and dirty is all I need for right
> now.

I have some draft patches for a longer term fix as part of this:
https://blueprints.launchpad.net/nova/+spec/live-migration-to-conductor

In my current patches, I don't remove all the call operations, but
that seems like a good eventual goal.

Basic idea: imagine the current flow is:
* source compute node calls destination
* source compute node calls conductor to do stuff
* source compute node completes rest of work

Possible new flow, removing all calls:
* conductor casts to destination
* destination casts to conductor
* conductor does what it needs to do
* conductor casts to source
* source casts to conductor
* conductor finishes off
* maybe have a periodic task to spot when we get stuck waiting (to
replace RPC timeout)

John



------------------------------

Message: 17
Date: Wed, 5 Feb 2014 18:43:45 +0800
From: "Haomeng, Wang" <wanghaomeng at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Ironic] January review redux
Message-ID:
        <CANXYZqX6uSPLwAg4siPEZiZnEoyEvr0ZZBj4Fj3Mb8e67gMYmA at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

+1 for both:)


On Wed, Feb 5, 2014 at 6:08 PM, Yuriy Zveryanskyy
<yzveryanskyy at mirantis.com> wrote:
> On 02/04/2014 09:42 PM, Devananda van der Veen wrote:
>
> So, I'd like to nominate the following two additions to the ironic-core
> team:
>
> Max Lobur
> https://review.openstack.org/#/q/reviewer:mlobur%2540mirantis.com+project:openstack/ironic,n,z
>
> Roman Prykhodchenko
> https://review.openstack.org/#/q/reviewer:rprikhodchenko%2540mirantis.com+project:openstack/ironic,n,z
>
>
> +1 for both
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



------------------------------

Message: 18
Date: Wed, 05 Feb 2014 11:52:51 +0100
From: Thierry Carrez <thierry at openstack.org>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-docs] Conventions on naming
Message-ID: <52F21803.2010302 at openstack.org>
Content-Type: text/plain; charset=UTF-8

Steve Gordon wrote:
>> From: "Anne Gentle" <anne.gentle at rackspace.com>
>> Based on today's Technical Committee meeting and conversations with the
>> OpenStack board members, I need to change our Conventions for service names
>> at
>> https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
>> .
>>
>> Previously we have indicated that Ceilometer could be named OpenStack
>> Telemetry and Heat could be named OpenStack Orchestration. That's not the
>> case, and we need to change those names.
>>
>> To quote the TC meeting, ceilometer and heat are "other modules" (second
>> sentence from 4.1 in
>> http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
>> distributed with the Core OpenStack Project.
>>
>> Here's what I intend to change the wiki page to:
>>  Here's the list of project and module names and their official names and
>> capitalization:
>>
>> Ceilometer module
>> Cinder: OpenStack Block Storage
>> Glance: OpenStack Image Service
>> Heat module
>> Horizon: OpenStack dashboard
>> Keystone: OpenStack Identity Service
>> Neutron: OpenStack Networking
>> Nova: OpenStack Compute
>> Swift: OpenStack Object Storage

Small correction. The TC had not indicated that Ceilometer could be
named "OpenStack Telemetry" and Heat could be named "OpenStack
Orchestration". We formally asked[1] the board to allow (or disallow)
that naming (or more precisely, that use of the trademark).

[1]
https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names

We haven't got a formal and clear answer from the board on that request
yet. I suspect they are waiting for progress on DefCore before deciding.

If you need an answer *now* (and I suspect you do), it might make sense
to ask foundation staff/lawyers about using those OpenStack names with
the current state of the bylaws and trademark usage rules, rather than
the hypothetical future state under discussion.

--
Thierry Carrez (ttx)



------------------------------

Message: 19
Date: Wed, 5 Feb 2014 14:56:20 +0400
From: Oleg Gelbukh <ogelbukh at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [rally] Proposing changes in Rally core
        team
Message-ID:
        <CAFkLEwrrbyR-YTLLaWa+J0mEh4wPBEsgFuOV3MnwKpQQYgGOPw at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

+1 for Hugh, he's doing an excellent job moving the project forward.

--
Best regards,
Oleg Gelbukh


On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick <sskripnick at mirantis.com>wrote:

>
> +1 for Hugh, but IMO no need to rush with Alexei's removal
>
> Hi stackers,
>
> I would like to:
>
> 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good reviews
> (and always testing patches=) ):
> http://stackalytics.com/report/reviews/rally/30
>
> 2) Remove Alexei from core team, because unfortunately he is not able to
> work on Rally at this moment. Thank you Alexei for all work that you have
> done.
>
>
> Thoughts?
>
>
> Best regards,
> Boris Pavlovic
>
>
> --
> Regards,
> Sergey Skripnick
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/e28d8692/attachment-0001.html>

------------------------------

Message: 20
Date: Wed, 05 Feb 2014 11:59:09 +0100
From: Jaromir Coufal <jcoufal at redhat.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure
        Management UI - Icehouse scoped wireframes
Message-ID: <52F2197D.8060107 at redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

Hi Tomas,

thanks for the questions, I am replying inline.

On 2014/05/02 11:19, Tomas Sedovic wrote:
> On 05/02/14 03:58, Jaromir Coufal wrote:
>> Hi to everybody,
>>
>> based on the feedback from last week [0] I incorporated changes in the
>> wireframes so that we keep them up to date with latest decisions:
>>
>> http://people.redhat.com/~jcoufal/openstack/tripleo/2014-02-05_tripleo-ui-icehouse.pdf
>>
>> Changes:
>> * Smaller layout change in Nodes Registration (no rush for update)
>> * Unifying views for 'deploying' and 'deployed' states of the page for
>> deployment detail
>> * Improved workflow for associating node profiles with roles
>>     - showing final state of MVP
>>     - first iteration contains only last row (no node definition link)
>
> Hey Jarda,
>
> Looking good. I've got two questions:
>
> 1. Are we doing node tags (page 4) for the first iteration? Where are
> they going to live?
Yes, it's very easy to do, already part of Ironic.

> 2. There are multiple node profiles per role on pages 11, 12, 17. Is
> that just an oversight or do you intend on keeping those in? I though
> the consensus was to do 1 node profile per deployment role.
I tried to avoid the confusion by the comment:
'- showing final state of MVP
  - first iteration contains only last row (no node definition link)'

Maybe I should be more clear. By 'last row' I meant that in the first
iteration, the form will contain only one row, with a dropdown to select
only one flavor per role.

I intend to keep multiple node profiles in the Icehouse scope. We will see
if we can get there in time; I am hoping for 'yes'. But I am absolutely
aligned with the consensus that we are starting with only one node profile
per role.

-- Jarda



------------------------------

Message: 21
Date: Wed, 5 Feb 2014 12:00:11 +0100 (CET)
From: victor stinner <victor.stinner at enovance.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] Asynchrounous programming: replace
        eventlet        with asyncio
Message-ID:
        <913450666.977089.1391598011697.JavaMail.zimbra at enovance.com>
Content-Type: text/plain; charset=utf-8

Hi,

Chris Behrens wrote:
> Interesting thread. I have been working on a side project that is a
> gevent/eventlet replacement [1] that focuses on thread-safety and
> performance. This came about because of an outstanding bug we have with
> eventlet not being Thread safe. (We cannot safely enable thread pooling for
> DB calls so that they will not block.)

There are DB drivers compatible with asyncio: PostgreSQL, MongoDB, Redis and memcached.

There is also a driver for ZeroMQ which can be used in Oslo Messaging to have a more efficient (asynchronous) driver.

There are also many event loops supported: gevent (geventreactor, gevent3), greenlet, libuv, GLib and Tornado.

See the full list:
http://code.google.com/p/tulip/wiki/ThirdParty

Victor



------------------------------

Message: 22
Date: Wed, 5 Feb 2014 12:05:20 +0100
From: Ralf Haferkamp <rhafer at suse.de>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] backporting database migrations
        to stable/havana
Message-ID: <20140205110520.GB4724 at suse.de>
Content-Type: text/plain; charset=us-ascii

On Wed, Feb 05, 2014 at 11:31:55AM +0100, Thierry Carrez wrote:
> Ralf Haferkamp wrote:
> > I am currently trying to backport the fix for
> > https://launchpad.net/bugs/1254246 to stable/havana. The current state of that
> > is here: https://review.openstack.org/#/c/68929/
> >
> > However, the fix requires a database migration to be applied (to add a unique
> > constraint to the agents table). And the current fix linked above will AFAIK
> > break havana->icehouse migrations. So I wonder what would be the correct way to
> > do backport database migrations in neutron using alembic? Is there even a
> > correct way, or are backports of database migrations a no go?
>
> FWIW our StableBranch policy[1] generally forbids DB schema changes in
> stable branches.
Hm, I must have overlooked that when reading through the document recently.
Thanks for clarifying. I guess I'll have to find another way to work around
the above-mentioned bug then.

Though it seems there can be exceptions to that rule. At least nova adds a
set of blank migrations (for sqlalchemy in nova's case) at the beginning of a
new development cycle (at least since havana) to be able to backport migrations
to stable. (It seems, though, that no backport ever happened for nova.)

> [1] https://wiki.openstack.org/wiki/StableBranch
--
Ralf



------------------------------

Message: 23
Date: Wed, 5 Feb 2014 11:06:42 +0000
From: "trinath.somanchi at freescale.com"
        <trinath.somanchi at freescale.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Cc: "Kyle Mestery \(kmestery\)" <kmestery at cisco.com>
Subject: [openstack-dev] Agenda for todays ML2 Weekly meeting
Message-ID:
        <b2b47bd43c8746b594458ac9e702ead5 at BN1PR03MB153.namprd03.prod.outlook.com>

Content-Type: text/plain; charset="us-ascii"

Hi-

Kindly share the agenda for today's weekly meeting on Neutron/ML2.


Best Regards,
--
Trinath Somanchi - B39208
trinath.somanchi at freescale.com | extn: 4048

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/7df90d8c/attachment-0001.html>

------------------------------

Message: 24
Date: Wed, 5 Feb 2014 11:08:06 +0000
From: "Martins, Tiago" <tiago.martins at hp.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical
        Multitenancy    Discussion
Message-ID:
        <C93FAF88623B4549AD7AAD7C515DDFD4163FBF0D at G2W2527.americas.hpqcorp.net>

Content-Type: text/plain; charset="utf-8"

" By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and Quotas logic in a unique service (Keystone?). Openstack services (Nova, Cinder, ...) would just have to ask this centralized access management service whether an action is authorized for a given token?"

I agree on centralizing RBAC; this is confusing, with a lot of files to manage and each service having a slightly different implementation of how to enforce policy. I think keystone is a good place for it, since the sql token is validated before every operation. Maybe it could even have its own DSL.
Quotas should have their own service; code and tables are replicated all across OpenStack, and that is not good, as it forces quotas to be simple when they need to solve complex use cases.

Tiago Martins

-----Original Message-----
From: Florent Flament [mailto:florent.flament-ext at cloudwatt.com]
Sent: quarta-feira, 5 de fevereiro de 2014 08:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

Hi Vish,

Your approach looks very interesting. I especially like the idea of 'walking the tree of parent projects, to construct the set of roles'.

Here are some issues that came to my mind:


Regarding policy rules enforcement:

Considering the following projects:
* orga
* orga.projecta
* orga.projectb

Let's assume that Joe has the following roles:
* `Member` of `orga`
* `admin` of `orga.projectb`

Now Joe wishes to launch a VM on `orga.projecta` and grant a role to some user on `orga.projectb` (for which he has the rights). He would like to be able to do all of this with the same token (scoped on project `orga`?).

For this scenario to be working, we would need to be able to store multiple roles (a tree of roles?) in the token, so that services would know which role is granted to the user on which project.

As a first step, I guess we could stay with roles scoped to a single project. Joe would be able to do what he wants by getting a first token on `orga` or `orga.projecta` with a `Member` role, then a second token on `orga.projectb` with the `admin` role.


Considering quotas enforcement:

Let's say we want to set the following limits:

* `orga` : max 10 VMs
* `orga.projecta` : max 8 VMs
* `orga.projectb` : max 8 VMs

The idea would be that the `admin` of `orga` wishes to allow 8 VMs to projects `orga.projecta` and `orga.projectb`, but doesn't care how these VMs are spread, though he wishes to keep 2 VMs in `orga` for himself.

Then to be able to enforce these quotas, Nova (and all other services) would have to keep track of the tree of quotas, and update the appropriate nodes.


By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and Quotas logic in a unique service (Keystone?). Openstack services (Nova, Cinder, ...) would just have to ask this centralized access management service whether an action is authorized for a given token?

Florent Flament



----- Original Message -----
From: "Vishvananda Ishaya" <vishvananda at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Sent: Monday, February 3, 2014 10:58:28 PM
Subject: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy       Discussion

Hello Again!

At the meeting last week we discussed some options around getting true multitenancy in nova. The use case that we are trying to support can be described as follows:

"Martha, the owner of ProductionIT, provides its services to multiple enterprise clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has multiple QA and Development teams with many users. Joe needs the ability to create users, projects, and quotas, as well as the ability to list and delete resources across WidgetMaster. Martha needs to be able to set the quotas for both WidgetMaster and SuperDevShop; manage users, projects, and objects across the entire system; and set quotas for the client companies as a whole. She also needs to ensure that Joe can't see or mess with anything owned by Sam."

As per the plan I outlined in the meeting, I have implemented a Proof-of-Concept that would allow me to see what changes were required in nova to get scoped tenancy working. I used a simple approach of faking out hierarchy by prepending the id of the larger scope to the id of the smaller scope. Keystone uses uuids internally, but for ease of explanation I will pretend like it is using the name. I think we can all agree that "orga.projecta" is more readable than "b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8".

The code basically creates the following five projects:

orga
orga.projecta
orga.projectb
orgb
orgb.projecta

I then modified nova so that everywhere it searches or limits policy by project_id, it does a prefix match instead. This means that someone using project "orga" should be able to list/delete instances in orga, orga.projecta, and orga.projectb.
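The prefix-match idea itself is tiny; a rough sketch of the concept (not the actual nova patch) looks like:

```python
def authorized(token_project, resource_project, delimiter='.'):
    """True if the token's project is the resource's project or an ancestor of it.

    Appending the delimiter before the startswith check avoids false matches
    between sibling projects that merely share a name prefix.
    """
    return (resource_project == token_project or
            resource_project.startswith(token_project + delimiter))

assert authorized('orga', 'orga.projecta')
assert authorized('orga', 'orga')
assert not authorized('orga', 'orgb.projecta')
assert not authorized('orga.projecta', 'orga')       # no access upward
assert not authorized('org', 'orga.projecta')        # delimiter avoids false prefixes
```

In SQL this maps to a `LIKE 'orga.%'` clause, which is what keeps the prefix search index-friendly.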

You can find the code here:

  https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
  https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f

Keep in mind that this is a prototype, but I'm hoping to come to some kind of consensus as to whether this is a reasonable approach. I've compiled a list of pros and cons.

Pros:

  * Very easy to understand
  * Minimal changes to nova
  * Good performance in db (prefix matching uses indexes)
  * Could be extended to cover more complex scenarios like multiple owners or multiple scopes

Cons:

  * Nova has no map of the hierarchy
  * Moving projects would require updates to ownership inside of nova
  * Complex scenarios involving delegation of roles may be a bad fit
  * Database upgrade to hierarchy could be tricky

If this seems like a reasonable set of tradeoffs, there are a few things that need to be done inside of nova to bring this to a complete solution:

  * Prefix matching needs to go into oslo.policy
  * Should the tenant_id returned by the api reflect the full "orga.projecta", or just the child "projecta", or match the scope: i.e. the first if you are authenticated to orga and the second if you are authenticated to the project?
  * Possible migrations for existing project_id fields
  * Use a different field for passing ownership scope instead of overloading project_id
  * Figure out how nested quotas should work
  * Look for other bugs relating to scoping

Also, we need to decide how keystone should construct and pass this information to the services. The obvious case that could be supported today would be to allow a single level of hierarchy using domains. For example, if domains are active, keystone could pass domain.project_id for ownership_scope. This could be controversial because potentially domains are just for grouping users and shouldn't be applied to projects.

I think the real value of this approach would be to allow nested projects with role inheritance. When keystone is creating the token, it could walk the tree of parent projects, construct the set of roles, and construct the ownership_scope as it walks to the root of the tree.
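A sketch of that tree walk, under the assumption that role assignments are keyed by project name (the `user_roles` layout here is invented purely for illustration):

```python
def roles_for(user_roles, project, delimiter='.'):
    """Collect the roles inherited from every ancestor of ``project``.

    ``user_roles`` maps a project name to the set of role names the user
    holds directly on that project (hypothetical data layout).
    """
    parts = project.split(delimiter)
    roles = set()
    # Walk from the root down to the project itself, unioning roles.
    for i in range(1, len(parts) + 1):
        ancestor = delimiter.join(parts[:i])
        roles |= user_roles.get(ancestor, set())
    return roles

user_roles = {'orga': {'Member'}, 'orga.projectb': {'admin'}}
assert roles_for(user_roles, 'orga.projectb') == {'Member', 'admin'}
assert roles_for(user_roles, 'orga.projecta') == {'Member'}
```

The ownership_scope would fall out of the same walk: it is simply the joined path from the root to the project being scoped.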

Finally, similar fixes will need to be made in the other projects to bring this to a complete solution.

Please feel free to respond with any input, and we will be having another Hierarchical Multitenancy Meeting on Friday at 1600 UTC to discuss.

Vish

On Jan 28, 2014, at 10:35 AM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:

> Hi Everyone,
>
> I apologize for the obtuse title, but there isn't a better succinct term to describe what is needed. OpenStack has no support for multiple owners of objects. This means that a variety of private cloud use cases are simply not supported. Specifically, objects in the system can only be managed on the tenant level or globally.
>
> The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a "domain" which supports part of this functionality, but without support from all of the projects, this concept is pretty useless.
>
> In IRC today I had a brief discussion about how we could address this. I have put some details and a straw man up here:
>
> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>
> I would like to discuss this strawman and organize a group of people to get actual work done by having an irc meeting this Friday at 1600UTC. I know this time is probably a bit tough for Europe, so if we decide we need a regular meeting to discuss progress then we can vote on a better time for this meeting.
>
> https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
>
> Please note that this is going to be an active team that produces code. We will *NOT* spend a lot of time debating approaches, and instead focus on making something that works and learning as we go. The output of this team will be a MultiTenant devstack install that actually works, so that we can ensure the features we are adding to each project work together.
>
> Vish


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

------------------------------

Message: 25
Date: Wed, 05 Feb 2014 12:13:28 +0100
From: Tomas Sedovic <tsedovic at redhat.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure
        Management UI - Icehouse scoped wireframes
Message-ID: <52F21CD8.2030600 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

<snip>
>> 1. Are we doing node tags (page 4) for the first iteration? Where are
>> they going to live?
> Yes, it's very easy to do, already part of Ironic.

Cool!

>
>> 2. There are multiple node profiles per role on pages 11, 12, 17. Is
>> that just an oversight or do you intend on keeping those in? I thought
>> the consensus was to do 1 node profile per deployment role.
> I tried to avoid the confusion by the comment:
> '- showing final state of MVP
>   - first iteration contains only last row (no node definition link)'

I'm sorry, I completely missed that comment. Thanks for the clarification.

>
> Maybe I should be more clear. By last row I meant that in the first
> iteration, the form will contain only one row with dropdown to select
> only one flavor per role.
>
> I intend to keep multiple roles for Icehouse scope. We will see if we
> can get there in time, I am hoping for 'yes'. But I am absolutely
> aligned with the consensus that we are starting with only one node profile
> per role.
>
> -- Jarda
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>




------------------------------

Message: 26
Date: Wed, 5 Feb 2014 15:17:34 +0400
From: Ilya Kharin <ikharin at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [rally] Proposing changes in Rally core
        team
Message-ID:
        <CA+FVv8XA+KJJNDb+dKfHm9+dKZ2cHywJSLJVU7XQ8K7zmWtWXw at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

+1 for Hugh


On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick <sskripnick at mirantis.com>wrote:

>
> +1 for Hugh, but IMO no need to rush with Alexei's removal
>
> Hi stackers,
>
> I would like to:
>
> 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good reviews
> (and always testing patches=) ):
> http://stackalytics.com/report/reviews/rally/30
>
> 2) Remove Alexei from core team, because unfortunately he is not able to
> work on Rally at this moment. Thank you Alexei for all work that you have
> done.
>
>
> Thoughts?
>
>
> Best regards,
> Boris Pavlovic
>
>
> --
> Regards,
> Sergey Skripnick
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

------------------------------

Message: 27
Date: Wed, 5 Feb 2014 03:30:58 -0800
From: Vishvananda Ishaya <vishvananda at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical
        Multitenancy Discussion
Message-ID: <522D738E-A7F0-4875-A85D-620D1FFF928F at gmail.com>
Content-Type: text/plain; charset="utf-8"


On Feb 5, 2014, at 2:38 AM, Florent Flament <florent.flament-ext at cloudwatt.com> wrote:

> Hi Vish,
>
> Your approach looks very interesting. I especially like the idea of 'walking the tree of parent projects, to construct the set of roles'.
>
> Here are some issues that came to my mind:
>
>
> Regarding policy rules enforcement:
>
> Considering the following projects:
> * orga
> * orga.projecta
> * orga.projectb
>
> Let's assume that Joe has the following roles:
> * `Member` of `orga`
> * `admin` of `orga.projectb`
>
> Now Joe wishes to launch a VM on `orga.projecta` and grant a role to some user on `orga.projectb` (for which he has the rights). He would like to be able to do all of this with the same token (scoped on project `orga`?).
>
> For this scenario to be working, we would need to be able to store multiple roles (a tree of roles?) in the token, so that services would know which role is granted to the user on which project.
>
> To begin with, I guess we could stay with roles scoped to a single project. Joe would be able to do what he wants by getting a first token on `orga` or `orga.projecta` with a `Member` role, then a second token on `orga.projectb` with the `admin` role.

This is a good point: having different roles on different levels of the hierarchy does lead to having to reauthenticate for certain actions. Keystone could pass the scope along with each role instead of a single global scope. The policy check could then be modified to match on role && prefix against the scope of the role, so a policy like:

"remove_user_from_project": "role:project_admin and scope_prefix:project_id"

This starts to get complex and unwieldy, however, because a single token allows you to do anything and everything based on your roles. I think we need a healthy balance between ease of use and the principle of least privilege, so we might be best off sticking to a single scope for each token and forcing a reauthentication to do adminy stuff in projectb.
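To illustrate the scope_prefix idea above, here is a rough Python sketch (the `(role, scope)` token layout is an assumption for illustration, not how Keystone tokens are actually structured):

```python
def check_scoped_role(token_roles, required_role, target_project, delimiter='.'):
    """Check a role that carries its own scope instead of a global token scope.

    ``token_roles`` is a list of (role, scope) pairs, e.g. ('project_admin',
    'orga.projectb'). The check passes if the user holds ``required_role`` on
    the target project or on any of its ancestors.
    """
    for role, scope in token_roles:
        if role != required_role:
            continue
        if (target_project == scope or
                target_project.startswith(scope + delimiter)):
            return True
    return False

# Joe from the example: Member of orga, admin of orga.projectb.
joe = [('Member', 'orga'), ('project_admin', 'orga.projectb')]
assert check_scoped_role(joe, 'project_admin', 'orga.projectb')
assert not check_scoped_role(joe, 'project_admin', 'orga.projecta')
assert check_scoped_role(joe, 'Member', 'orga.projecta')  # inherited from orga
```

With this layout a single token could serve both of Joe's actions, which is exactly the convenience-versus-least-privilege tradeoff being weighed.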

>
>
> Considering quota enforcement:
>
> Let's say we want to set the following limits:
>
> * `orga` : max 10 VMs
> * `orga.projecta` : max 8 VMs
> * `orga.projectb` : max 8 VMs
>
> The idea would be that the `admin` of `orga` wishes to allow 8 VMs to projects `orga.projecta` or `orga.projectb`, but doesn't care how these VMs are spread between them, though he wishes to keep 2 VMs in `orga` for himself.

This seems like a bit of a stretch as a use case. Sharing a set of quotas across two projects seems strange, and if we did have arbitrary nesting you could do the same by sticking a dummy project in between:

orga: max 10
orga.dummy: max 8
orga.dummy.projecta: no max
orga.dummy.projectb: no max
>
> Then to be able to enforce these quotas, Nova (and all other services) would have to keep track of the tree of quotas, and update the appropriate nodes.
>
>
> By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and Quotas logic in a single service (Keystone?). OpenStack services (Nova, Cinder, ...) would just have to ask this centralized access management service whether an action is authorized for a given token.

So I threw out the idea the other day that quota enforcement should perhaps be done by gantt. Quotas seem to be a scheduling concern more than anything else.
>
> Florent Flament
>
>
>
> ----- Original Message -----
> From: "Vishvananda Ishaya" <vishvananda at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Sent: Monday, February 3, 2014 10:58:28 PM
> Subject: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy     Discussion
>
> Hello Again!
>
> At the meeting last week we discussed some options around getting true multitenancy in nova. The use case that we are trying to support can be described as follows:
>
> "Martha, the owner of ProductionIT, provides IT services to multiple Enterprise clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has multiple QA and Development teams with many users. Joe needs the ability to create users, projects, and quotas, as well as the ability to list and delete resources across WidgetMaster. Martha needs to be able to set the quotas for both WidgetMaster and SuperDevShop; manage users, projects, and objects across the entire system; and set quotas for the client companies as a whole. She also needs to ensure that Joe can't see or mess with anything owned by Sam."
>
> As per the plan I outlined in the meeting, I have implemented a Proof-of-Concept that would allow me to see what changes were required in nova to get scoped tenancy working. I used a simple approach of faking out hierarchy by prepending the id of the larger scope to the id of the smaller scope. Keystone uses uuids internally, but for ease of explanation I will pretend like it is using the name. I think we can all agree that "orga.projecta" is more readable than "b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8".
>
> The code basically creates the following five projects:
>
> orga
> orga.projecta
> orga.projectb
> orgb
> orgb.projecta
>
> I then modified nova so that everywhere it searches or limits policy by project_id, it does a prefix match instead. This means that someone using project "orga" should be able to list/delete instances in orga, orga.projecta, and orga.projectb.
>
> You can find the code here:
>
>  https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
>  https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f
>
> Keep in mind that this is a prototype, but I'm hoping to come to some kind of consensus as to whether this is a reasonable approach. I've compiled a list of pros and cons.
>
> Pros:
>
>  * Very easy to understand
>  * Minimal changes to nova
>  * Good performance in db (prefix matching uses indexes)
>  * Could be extended to cover more complex scenarios like multiple owners or multiple scopes
>
> Cons:
>
>  * Nova has no map of the hierarchy
>  * Moving projects would require updates to ownership inside of nova
>  * Complex scenarios involving delegation of roles may be a bad fit
>  * Database upgrade to hierarchy could be tricky
>
> If this seems like a reasonable set of tradeoffs, there are a few things that need to be done inside of nova to bring this to a complete solution:
>
>  * Prefix matching needs to go into oslo.policy
>  * Should the tenant_id returned by the api reflect the full "orga.projecta", or just the child "projecta", or match the scope: i.e. the first if you are authenticated to orga and the second if you are authenticated to the project?
>  * Possible migrations for existing project_id fields
>  * Use a different field for passing ownership scope instead of overloading project_id
>  * Figure out how nested quotas should work
>  * Look for other bugs relating to scoping
>
> Also, we need to decide how keystone should construct and pass this information to the services. The obvious case that could be supported today would be to allow a single level of hierarchy using domains. For example, if domains are active, keystone could pass domain.project_id for ownership_scope. This could be controversial because potentially domains are just for grouping users and shouldn't be applied to projects.
>
> I think the real value of this approach would be to allow nested projects with role inheritance. When keystone is creating the token, it could walk the tree of parent projects, construct the set of roles, and construct the ownership_scope as it walks to the root of the tree.
>
> Finally, similar fixes will need to be made in the other projects to bring this to a complete solution.
>
> Please feel free to respond with any input, and we will be having another Hierarchical Multitenancy Meeting on Friday at 1600 UTC to discuss.
>
> Vish
>
> On Jan 28, 2014, at 10:35 AM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
>
>> Hi Everyone,
>>
>> I apologize for the obtuse title, but there isn't a better succinct term to describe what is needed. OpenStack has no support for multiple owners of objects. This means that a variety of private cloud use cases are simply not supported. Specifically, objects in the system can only be managed on the tenant level or globally.
>>
>> The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a "domain" which supports part of this functionality, but without support from all of the projects, this concept is pretty useless.
>>
>> In IRC today I had a brief discussion about how we could address this. I have put some details and a straw man up here:
>>
>> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>>
>> I would like to discuss this strawman and organize a group of people to get actual work done by having an irc meeting this Friday at 1600UTC. I know this time is probably a bit tough for Europe, so if we decide we need a regular meeting to discuss progress then we can vote on a better time for this meeting.
>>
>> https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
>>
>> Please note that this is going to be an active team that produces code. We will *NOT* spend a lot of time debating approaches, and instead focus on making something that works and learning as we go. The output of this team will be a MultiTenant devstack install that actually works, so that we can ensure the features we are adding to each project work together.
>>
>> Vish
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


------------------------------

Message: 28
Date: Wed, 5 Feb 2014 15:36:47 +0400
From: Dina Belova <dbelova at mirantis.com>
To: OpenStack Development Mailing List
        <openstack-dev at lists.openstack.org>
Subject: [openstack-dev] [Climate] 0.1.0 release
Message-ID:
        <CACsCO2zJaHVnwnMjUCindhDnC4ns25aQW7525pHjFOWP_zNpfA at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Hi, folks!

Today Climate has been released for the first time, and I'm really glad to say that
:)

This release implements the following use cases:

   - A user wants to reserve a virtual machine and use it later. He/she asks
   Nova to create a server, passing special hints describing information like
   the lease start and end time. In this case the instance is not just booted,
   but also shelved, so that it does not use cloud resources while they are
   not needed. At the time the user passed as 'lease start time', the instance
   is unshelved and used as the user wishes. The user may define different
   actions to happen to the instance at lease end - such as snapshotting
   and/or suspending and/or removal.
   - A user wants to reserve the compute capacity of a whole compute host to
   use later. In this case he/she asks Climate to provide a host with the
   requested characteristics from a predefined pool of hosts (managed by the
   admin user). If the request can be satisfied, the user will be able to run
   his/her instances on the reserved host when the lease starts.


Here are our release notes:
Climate/Release_Notes/0.1.0 <https://wiki.openstack.org/wiki/Climate/Release_Notes/0.1.0>

Other useful links:

   - Climate Wiki <https://wiki.openstack.org/wiki/Climate>
   - Climate Launchpad <https://launchpad.net/climate>
   - Future plans for 0.2.x <https://etherpad.openstack.org/p/climate-0.2>


Thanks to the whole team that worked on Climate 0.1.0, and to everybody who helped us!

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.

------------------------------

Message: 29
Date: Wed, 5 Feb 2014 03:38:41 -0800
From: Vishvananda Ishaya <vishvananda at gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical
        Multitenancy Discussion
Message-ID: <41CD3A0D-E5C8-4F7F-B5E4-32538417CF05 at gmail.com>
Content-Type: text/plain; charset="windows-1252"


On Feb 5, 2014, at 12:27 AM, Chris Behrens <cbehrens at codestud.com> wrote:

>
> Hi Vish,
>
> I'm jumping in slightly late on this, but I also have an interest in this. I'm going to preface this by saying that I have not read this whole thread yet, so I apologize if I repeat things, say anything that is addressed by previous posts, or doesn't jive with what you're looking for. :) But what you describe below sounds like exactly a use case I'd come up with.
>
> Essentially I want another level above project_id. Depending on the exact use case, you could name it "wholesale_id" or "reseller_id"... and yeah, "org_id" fits in with your example. :) I think that I had decided I'd call it "domain" to be more generic, especially after seeing keystone had a domain concept.
>
> Your idea below (prefixing the project_id) is exactly one way I thought of doing this to be least intrusive. I, however, thought that this would not be efficient. So, I was thinking about proposing that we add "domain" to all of our models. But that limits your hierarchy and I don't necessarily like that. :) So I think that if the queries are truly indexed as you say below, you have a pretty good approach. The one issue that comes to mind is whether there's any chance of collision. For example, if project ids (or orgs) could contain a ".", then "." as a delimiter won't work.
>
> My requirements could be summed up pretty well by thinking of this as "virtual clouds within a cloud": deploy a single cloud infrastructure that could look like many multiple clouds. "domain" would be the key into each different virtual cloud. Accessing one virtual cloud doesn't reveal any details about another virtual cloud.
>
> What this means is:
>
> 1) domain "a" cannot see instances (or resources in general) in domain "b". It doesn't matter if domain "a" and domain "b" share the same tenant ID. If you act with the API on behalf of domain "a", you cannot see your instances in domain "b".
> 2) Flavors per domain. domain "a" can have different flavors than domain "b".

I hadn't thought of this one, but we do have per-project flavors, so I think this could work in a project-hierarchy world. We might have to rethink the idea of global flavors and just stick them in the top-level project. That way the flavors could be removed. The flavor list would have to be composed by matching all parent projects. It might make sense to have an option for flavors to be "hidden" in sub-projects as well. In other words, if orgb wants to delete a flavor from the global list, they could do it by hiding the flavor.

Definitely some things to be thought about here.
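As a rough sketch of composing the flavor list by matching all parent projects, with per-project "hidden" markers (every name here is hypothetical; nothing like this exists in nova today):

```python
def visible_flavors(flavors_by_project, hidden_by_project, project, delimiter='.'):
    """Compose a project's flavor list from its ancestor chain.

    Walk from the root down to ``project``, adding flavors defined at each
    level and removing flavors marked hidden at that level. Both mappings
    are hypothetical: project name -> set of flavor names.
    """
    parts = project.split(delimiter)
    visible = set()
    for i in range(1, len(parts) + 1):
        ancestor = delimiter.join(parts[:i])
        visible |= flavors_by_project.get(ancestor, set())
        visible -= hidden_by_project.get(ancestor, set())
    return visible

flavors = {'orga': {'m1.small', 'm1.large'}, 'orga.projecta': {'qa.tiny'}}
hidden = {'orga.projecta': {'m1.large'}}
assert visible_flavors(flavors, hidden, 'orga.projecta') == {'m1.small', 'qa.tiny'}
assert visible_flavors(flavors, hidden, 'orga') == {'m1.small', 'm1.large'}
```

Hiding rather than deleting keeps the parent's flavor definitions intact for sibling projects, which is the point of the "hidden" option above.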

> 3) Images per domain. domain "a" could see different images than domain "b".

Yes this would require similar hierarchical support in glance.

> 4) Quotas and quota limits per domain. Your instances in domain "a" don't count against quotas in domain "b".

Yes, we've talked about quotas for sure. This is definitely needed.

> 5) Go as far as using different config values depending on what domain you're using. This one is fun. :)

Curious for some examples here.

>
> etc.
>
> I'm not sure if you were looking to go that far or not. :) But I think that our ideas are close enough, if not exact, that we can achieve both of our goals with the same implementation.
>
> I'd love to be involved with this. I am not sure that I currently have the time to help with implementation, however.

Come to the meeting on friday! 1600 UTC

Vish

>
> - Chris
>
>
>
> On Feb 3, 2014, at 1:58 PM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
>
>> Hello Again!
>>
>> At the meeting last week we discussed some options around getting true multitenancy in nova. The use case that we are trying to support can be described as follows:
>>
>> "Martha, the owner of ProductionIT, provides IT services to multiple Enterprise clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has multiple QA and Development teams with many users. Joe needs the ability to create users, projects, and quotas, as well as the ability to list and delete resources across WidgetMaster. Martha needs to be able to set the quotas for both WidgetMaster and SuperDevShop; manage users, projects, and objects across the entire system; and set quotas for the client companies as a whole. She also needs to ensure that Joe can't see or mess with anything owned by Sam."
>>
>> As per the plan I outlined in the meeting, I have implemented a Proof-of-Concept that would allow me to see what changes were required in nova to get scoped tenancy working. I used a simple approach of faking out hierarchy by prepending the id of the larger scope to the id of the smaller scope. Keystone uses uuids internally, but for ease of explanation I will pretend like it is using the name. I think we can all agree that "orga.projecta" is more readable than "b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8".
>>
>> The code basically creates the following five projects:
>>
>> orga
>> orga.projecta
>> orga.projectb
>> orgb
>> orgb.projecta
>>
>> I then modified nova so that everywhere it searches or limits policy by project_id, it does a prefix match instead. This means that someone using project "orga" should be able to list/delete instances in orga, orga.projecta, and orga.projectb.
>>
>> You can find the code here:
>>
>> https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
>> https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f
>>
>> Keep in mind that this is a prototype, but I'm hoping to come to some kind of consensus as to whether this is a reasonable approach. I've compiled a list of pros and cons.
>>
>> Pros:
>>
>> * Very easy to understand
>> * Minimal changes to nova
>> * Good performance in db (prefix matching uses indexes)
>> * Could be extended to cover more complex scenarios like multiple owners or multiple scopes
>>
>> Cons:
>>
>> * Nova has no map of the hierarchy
>> * Moving projects would require updates to ownership inside of nova
>> * Complex scenarios involving delegation of roles may be a bad fit
>> * Database upgrade to hierarchy could be tricky
>>
>> If this seems like a reasonable set of tradeoffs, there are a few things that need to be done inside of nova to bring this to a complete solution:
>>
>> * Prefix matching needs to go into oslo.policy
>> * Should the tenant_id returned by the api reflect the full "orga.projecta", or just the child "projecta", or match the scope: i.e. the first if you are authenticated to orga and the second if you are authenticated to the project?
>> * Possible migrations for existing project_id fields
>> * Use a different field for passing ownership scope instead of overloading project_id
>> * Figure out how nested quotas should work
>> * Look for other bugs relating to scoping
>>
>> Also, we need to decide how keystone should construct and pass this information to the services. The obvious case that could be supported today would be to allow a single level of hierarchy using domains. For example, if domains are active, keystone could pass domain.project_id for ownership_scope. This could be controversial because potentially domains are just for grouping users and shouldn't be applied to projects.
>>
>> I think the real value of this approach would be to allow nested projects with role inheritance. When keystone is creating the token, it could walk the tree of parent projects, construct the set of roles, and construct the ownership_scope as it walks to the root of the tree.
>>
>> Finally, similar fixes will need to be made in the other projects to bring this to a complete solution.
>>
>> Please feel free to respond with any input, and we will be having another Hierarchical Multitenancy Meeting on Friday at 1600 UTC to discuss.
>>
>> Vish
>>
>> On Jan 28, 2014, at 10:35 AM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
>>
>>> Hi Everyone,
>>>
>>> I apologize for the obtuse title, but there isn't a better succinct term to describe what is needed. OpenStack has no support for multiple owners of objects. This means that a variety of private cloud use cases are simply not supported. Specifically, objects in the system can only be managed on the tenant level or globally.
>>>
>>> The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a "domain" which supports part of this functionality, but without support from all of the projects, this concept is pretty useless.
>>>
>>> In IRC today I had a brief discussion about how we could address this. I have put some details and a straw man up here:
>>>
>>> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>>>
>>> I would like to discuss this strawman and organize a group of people to get actual work done by having an irc meeting this Friday at 1600UTC. I know this time is probably a bit tough for Europe, so if we decide we need a regular meeting to discuss progress then we can vote on a better time for this meeting.
>>>
>>> https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
>>>
>>> Please note that this is going to be an active team that produces code. We will *NOT* spend a lot of time debating approaches, and instead focus on making something that works and learning as we go. The output of this team will be a MultiTenant devstack install that actually works, so that we can ensure the features we are adding to each project work together.
>>>
>>> Vish
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


------------------------------

Message: 30
Date: Wed, 5 Feb 2014 15:44:05 +0400
From: Sergey Lukjanov <slukjanov at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Climate] 0.1.0 release
Message-ID:
        <CA+GZd79xa4piAQfMRu2eU7CQAceL8oKDc7UAeCi-6-YZBH=Q2w at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Great progress!

My congratulations.


On Wed, Feb 5, 2014 at 3:36 PM, Dina Belova <dbelova at mirantis.com> wrote:

> Hi, folks!
>
> Today Climate has been released for the first time and I'm really glad to
> say that :)
>
> This release implements the following use cases:
>
>    - A user wants to reserve a virtual machine and use it later. He/she asks
>    Nova to create a server, passing special hints describing information such
>    as lease start and end time. In this case the instance will be not just
>    booted, but also shelved so as not to use cloud resources when it's not
>    needed. At the time the user passed as 'lease start time' the instance
>    will be unshelved and used as the user wishes. The user may define
>    different actions to happen to the instance at lease end - such as
>    snapshotting and/or suspending and/or removal.
>    - A user wants to reserve the compute capacity of a whole compute host to
>    use it later. In this case he/she asks Climate to provide a host with the
>    requested characteristics from a predefined pool of hosts (managed by an
>    admin user). If the request can be fulfilled, the user will have the
>    opportunity to run his/her instances on the reserved host when the lease
>    starts.
>
>
> Here are our release notes: Climate/Release_Notes/0.1.0<https://wiki.openstack.org/wiki/Climate/Release_Notes/0.1.0>
>
> Other useful links:
>
>    - Climate Wiki <https://wiki.openstack.org/wiki/Climate>
>    - Climate Launchpad <https://launchpad.net/climate>
>    - Future plans for 0.2.x <https://etherpad.openstack.org/p/climate-0.2>
>
>
> Thanks all team who worked on Climate 0.1.0 and everybody who helped us!
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

------------------------------

Message: 31
Date: Wed, 5 Feb 2014 12:47:24 +0100
From: Ralf Haferkamp <rhafer at suse.de>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] backporting database migrations
        to      stable/havana
Message-ID: <20140205114724.GC4724 at suse.de>
Content-Type: text/plain; charset=us-ascii

Hi,

On Tue, Feb 04, 2014 at 12:36:16PM -0500, Miguel Angel Ajo Pelayo wrote:
>
>
> Hi Ralf, I see we're on the same boat for this.
>
>    It seems that a database migration introduces complications
> for future upgrades. It's not an easy path.
>
>    My aim when I started this backport was trying to scale out
> neutron-server, starting several ones together. But I'm afraid
> we would find more bugs like this requiring db migrations.
>
>    Have you actually tested running multiple servers in Icehouse? I just
> didn't have the time, but it's on my roadmap.
I actually ran into the bug in a single server setup. But that seems to happen
pretty rarely.

>    If that fixes the problem, may be some heavier approach (like
> table locking) could be used in the backport, without introducing
> a new/conflicting migration.
Hm, there seems to be no clean way to do table locking in sqlalchemy. At least I
didn't find one.

> About the DB migration backport problem, the actual problem is:
[..]
> 1st step) fix E in icehouse to skip the real unique constraint insertion if it already exists:
>
> havana   | icehouse
>          |
> A<-B<-C<-|--D<-*E*<-F
>
> 2nd step) insert E2 in the middle of B and C to keep the icehouse first reference happy:
>
> havana      | icehouse
>             |
> A<-B<-E2<-C<-|--D<-*E*<-F
>
> What do you think?
I agree, that would likely be the right fix. But it seems there are some
(more or less) strict rules about stable backports of migrations (which I
understand, as it can get really tricky), so a solution that doesn't require
them would probably be preferable.

> ----- Original Message -----
> > From: "Ralf Haferkamp" <rhafer at suse.de>
> > To: openstack-dev at lists.openstack.org
> > Sent: Tuesday, February 4, 2014 4:02:36 PM
> > Subject: [openstack-dev] [Neutron] backporting database migrations to       stable/havana
> >
> > Hi,
> >
> > I am currently trying to backport the fix for
> > https://launchpad.net/bugs/1254246 to stable/havana. The current state of
> > that
> > is here: https://review.openstack.org/#/c/68929/
> >
> > However, the fix requires a database migration to be applied (to add a unique
> > constraint to the agents table). And the current fix linked above will AFAIK
> > break havana->icehouse migrations. So I wonder what would be the correct way
> > to
> > do backport database migrations in neutron using alembic? Is there even a
> > correct way, or are backports of database migrations a no go?
> >
> > --
> > regards,
> >     Ralf

--
Ralf



------------------------------

Message: 32
Date: Wed, 05 Feb 2014 12:48:00 +0100
From: Thierry Carrez <thierry at openstack.org>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] Asynchrounous programming: replace
        eventlet with asyncio
Message-ID: <52F224F0.4070707 at openstack.org>
Content-Type: text/plain; charset=ISO-8859-1

victor stinner wrote:
> [...]
> The problem is that the asyncio module was written for Python 3.3, whereas OpenStack is not fully Python 3 compatible (yet). To ease the transition I have ported asyncio to Python 2; it's the new Trollius project, which supports Python 2.6-3.4:
>    https://bitbucket.org/enovance/trollius
> [...]

How much code from asyncio did you reuse ? How deep was the porting
effort ? Is the port maintainable as asyncio gets more bugfixes over time ?

> The Trollius API is the same as asyncio's; the main difference is the syntax in coroutines: "yield from task" must be written "yield task", and "return value" must be written "raise Return(value)".

Could we use a helper library (like six) to have the same syntax in Py2
and Py3 ? Something like "from six.asyncio import yield_from,
return_task" and use those functions for py2/py3 compatible syntax ?
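To make the two syntax rules concrete, here is a self-contained toy sketch: the `Return` exception and the `run()` driver below are stand-ins invented for illustration (neither asyncio nor Trollius is imported), just to show the "yield task" and "raise Return(value)" shapes in runnable form:

```python
class Return(Exception):
    """Stand-in for Trollius's Return: carries a coroutine's result."""
    def __init__(self, value):
        super().__init__(value)
        self.value = value

def run(gen):
    """Minimal driver: advance a generator-based coroutine to completion."""
    try:
        value = None
        while True:
            yielded = gen.send(value)
            # A real event loop would schedule the yielded task; here
            # sub-generators are simply driven recursively.
            value = run(yielded) if hasattr(yielded, "send") else yielded
    except Return as r:   # Trollius-style "return value"
        return r.value

def add_one(x):
    yield x               # Trollius: "yield task" where asyncio says "yield from task"
    raise Return(x + 1)   # Trollius: instead of "return x + 1"

print(run(add_one(41)))   # -> 42
```

Under asyncio on Python 3.3 the same coroutine would use `yield from` and a plain `return`, which is exactly the divergence a six-like helper would need to paper over.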

--
Thierry Carrez (ttx)



------------------------------

Message: 33
Date: Wed, 5 Feb 2014 15:49:12 +0400
From: Oleg Gelbukh <ogelbukh at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Climate] 0.1.0 release
Message-ID:
        <CAFkLEwpaPEcZ8AXWxfivBhvGrw9eLRy=GoHnzKLyE84_V=3sHg at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Congrats to all Climate team members who made it happen, great job!

--
Oleg Gelbukh


On Wed, Feb 5, 2014 at 3:36 PM, Dina Belova <dbelova at mirantis.com> wrote:

> Hi, folks!
>
> Today Climate has been released first time and I'm really glad to say that
> :)
>
> This release implements following use cases:
>
>    - User wants to reserve virtual machine and use it later. He/she asks
>    Nova to create server, passing special hints, describing information like
>    lease start and end time. In this case instance will be not just booted,
>    but also shelved not to use cloud resources when it's not needed. At the
>    time user passed as 'lease start time' instance will be unshelled and used
>    as user wants to. User may define different actions that might happen to
>    instance at lease end - like snapshoting or/and suspending or/and removal.
>    - User wants to reserve compute capacity of whole compute host to use
>    it later. In this case he/she asks Climate to provide host with passed
>    characteristics from predefined pool of hosts (that is managed by admin
>    user). If this request might be processed, user will have the opportunity
>    run his/her instances on reserved host when lease starts.
>
>
> Here are our release notes: Climate/Release_Notes/0.1.0<https://wiki.openstack.org/wiki/Climate/Release_Notes/0.1.0>
>
> Other useful links:
>
>    - Climate Wiki <https://wiki.openstack.org/wiki/Climate>
>    - Climate Launchpad <https://launchpad.net/climate>
>    - Future plans for 0.2.x <https://etherpad.openstack.org/p/climate-0.2>
>
>
> Thanks all team who worked on Climate 0.1.0 and everybody who helped us!
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

------------------------------

Message: 34
Date: Wed, 5 Feb 2014 15:50:23 +0400
From: Sergey Lukjanov <slukjanov at mirantis.com>
To: Jay Pipes <jaypipes at gmail.com>
Cc: "sukhdev at aristanetworks.com" <sukhdev at aristanetworks.com>,
        OpenStack Development Mailing List
        <openstack-dev at lists.openstack.org>,
        <openstack-infra at lists.openstack.org>
Subject: Re: [openstack-dev] [OpenStack-Infra]
        [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will
        not fulfill requirements of 3rd party testing
Message-ID:
        <CA+GZd7_hb7iQtsxXR5DSoD0Aaa9jPfGmvyGmHgaaXzdjJYz_FQ at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Hi Jay,

it's really very easy to set up Zuul for it (we're using one for Savanna CI).

There are some useful links:

* check pipeline as an example of zuul layout configuration -
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/zuul/layout.yaml#L5
* zuul docs - http://ci.openstack.org/zuul/
* zuul config sample -
https://github.com/openstack-infra/zuul/blob/master/etc/zuul.conf-sample

So, I think it could be easy enough to set up Zuul for 3rd party
testing, but it'll be better to have some docs about it.

Thanks.


On Wed, Feb 5, 2014 at 3:55 AM, Jay Pipes <jaypipes at gmail.com> wrote:

> Sorry for cross-posting to both mailing lists, but there's lots of folks
> working on setting up third-party testing platforms that are not members
> of the openstack-infra ML...
>
> tl;dr
> -----
>
> The third party testing documentation [1] has requirements [2] that
> include the ability to trigger a recheck based on a gerrit comment.
>
> Unfortunately, the Gerrit Jenkins Trigger plugin [3] does not have the
> ability to trigger job runs based on a regex-filtered comment (only on
> the existence of any new comment to the code review).
>
> Therefore, we either should:
>
> a) Relax the requirement that the third party system trigger test
> re-runs when a comment including the word "recheck" appears in the
> Gerrit event stream
>
> b) Modify the Jenkins Gerrit plugin to support regex filtering on the
> comment text (in the same way that it currently supports regex filtering
> on the project name)
>
> or
>
> c) Add documentation to the third party testing pages that explains how
> to use Zuul as a replacement for the Jenkins Gerrit plugin.
>
> I propose we do a) for the short term, and I'll work on c) long term.
> However, I'm throwing this out there just in case there are some Java
> and Jenkins whizzes out there that could get b) done in a jiffy.
>
> details
> -------
>
> OK, so I've been putting together documentation on how to set up an
> external Jenkins platform that is "linked" [4] with the upstream
> OpenStack CI system.
>
> Recently, I wrote an article detailing how the upstream CI system
> worked, including a lot of the gory details from the
> openstack-infra/config project's files. [5]
>
> I've been working on a follow-up article that goes through how to set up
> a Jenkins system, and in writing that article, I created a source
> repository [6] that contains scripts, instructions and Puppet modules
> that set up a Jenkins system, the Jenkins Job Builder tool, and
> installs/configures the Jenkins Gerrit plugin [7].
>
> I planned to use the Jenkins Gerrit plugin as the mechanism that
> triggers Jenkins jobs on the external system based on gerrit events
> published by the OpenStack review.openstack.org Gerrit service. In
> addition to being mentioned in the third party documentation, Jenkins
> Job Builder has the ability to construct Jenkins jobs that are triggered
> by the Jenkins Gerrit plugin [8].
>
> Unfortunately, I've run into a bit of a snag.
>
> The third party testing documentation has requirements that include the
> ability to trigger a recheck based on a gerrit comment:
>
> <quote>
> Support recheck to request re-running a test.
>  * Support the following syntaxes: "recheck no bug" and "recheck bug ###".
>  * Recheck means recheck everything. A single recheck comment should
> re-trigger all testing systems.
> </quote>
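For reference, the comment filter a third-party system needs (whichever of the three options wins) can be a one-line regex. The two syntaxes come from the requirements quoted above; the function name and pattern details are hypothetical:

```python
import re

# Matches the documented recheck syntaxes on a line of a Gerrit comment:
# "recheck", "recheck no bug", or "recheck bug <number>".
RECHECK_RE = re.compile(r'^\s*recheck( no bug| bug #?\d+)?\s*$', re.MULTILINE)

def is_recheck(comment):
    """Return True if a Gerrit comment text requests a test re-run."""
    return bool(RECHECK_RE.search(comment))

print(is_recheck("recheck no bug"))       # True
print(is_recheck("recheck bug 1254246"))  # True
print(is_recheck("looks good to me"))     # False
```

This is essentially the regex filtering that option b) would add to the Jenkins Gerrit plugin, and what Zuul already applies in its pipeline triggers.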
>
> The documentation has a section on using the Gerrit Jenkins Trigger
> plugin [3] to accept notifications from the upstream OpenStack Gerrit
> instance.
>
> But unfortunately, the Jenkins Gerrit plugin does not support the
> ability to trigger a re-run of a job given a regex match of the word
> "recheck". :(
>
> So, we either need to a) change the requirements of third party testers,
> b) enhance the Jenkins Gerrit plugin with the missing functionality, or
> c) add documentation on how to set up Zuul as the triggering system
> instead of the Jenkins Gerrit plugin.
>
> I'm happy to work on c), but I think relaxing the restriction (a) is
> probably needed short-term.
>
> Best,
> -jay
>
> [1] http://ci.openstack.org/third_party.html
> [2] http://ci.openstack.org/third_party.html#requirements
> [3]
>
> http://ci.openstack.org/third_party.html#the-jenkins-gerrit-trigger-plugin-way
> [4] By "linked" I mean it both reads from the OpenStack Gerrit system
> and writes (adds comments) to it
> [5] http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/
> [6] http://github.com/jaypipes/os-ext-testing
> [7] https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger
> [8]
>
> https://github.com/openstack-infra/jenkins-job-builder/blob/master/jenkins_jobs/modules/triggers.py#L121
>
>
>
> _______________________________________________
> OpenStack-Infra mailing list
> OpenStack-Infra at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>



--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

------------------------------

Message: 35
Date: Wed, 5 Feb 2014 15:53:43 +0400
From: Sergey Lukjanov <slukjanov at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] savann-ci, Re: [savanna] Alembic
        migrations and absence of DROP column in sqlite
Message-ID:
        <CA+GZd7_a7VjOUwFLiyi9NapFmQZNO-uf+usy9TEC+WCCQ0nD1g at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Agreed, let's move savanna-ci to MySQL to run integration tests
against a production-like DB.


On Wed, Feb 5, 2014 at 1:54 AM, Andrew Lazarev <alazarev at mirantis.com>wrote:

> Since sqlite is not in the list of "databases that would be used in
> production", CI should use other DB for testing.
>
> Andrew.
>
>
> On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov <aignatov at mirantis.com>wrote:
>
>> Indeed. We should create a bug around that and move our savanna-ci to
>> mysql.
>>
>> Regards,
>> Alexander Ignatov
>>
>>
>>
>> On 05 Feb 2014, at 01:01, Trevor McKay <tmckay at redhat.com> wrote:
>>
>> > This brings up an interesting problem:
>> >
>> > In https://review.openstack.org/#/c/70420/ I've added a migration that
>> > uses a drop column for an upgrade.
>> >
>> > But savanna-ci is apparently using a sqlite database to run.  So it can't
>> > possibly pass.
>> >
>> > What do we do here?  Shift savanna-ci tests to non sqlite?
>> >
>> > Trevor
>> >
>> > On Sat, 2014-02-01 at 18:17 +0200, Roman Podoliaka wrote:
>> >> Hi all,
>> >>
>> >> My two cents.
>> >>
>> >>> 2) Extend alembic so that op.drop_column() does the right thing
>> >> We could, but should we?
>> >>
>> >> The only reason alembic doesn't support these operations for SQLite
>> >> yet is that SQLite lacks proper support of ALTER statement. For
>> >> sqlalchemy-migrate we've been providing a work-around in the form of
>> >> recreating of a table and copying of all existing rows (which is a
>> >> hack, really).
>> >>
>> >> But to be able to recreate a table, we first must have its definition.
>> >> And we've been relying on SQLAlchemy schema reflection facilities for
>> >> that. Unfortunately, this approach has a few drawbacks:
>> >>
>> >> 1) SQLAlchemy versions prior to 0.8.4 don't support reflection of
>> >> unique constraints, which means the recreated table won't have them;
>> >>
>> >> 2) special care must be taken in 'edge' cases (e.g. when you want to
>> >> drop a BOOLEAN column, you must also drop the corresponding CHECK (col
>> >> in (0, 1)) constraint manually, or SQLite will raise an error when the
>> >> table is recreated without the column being dropped)
>> >>
>> >> 3) special care must be taken for 'custom' type columns (it's got
>> >> better with SQLAlchemy 0.8.x, but e.g. in 0.7.x we had to override
>> >> definitions of reflected BIGINT columns manually for each
>> >> column.drop() call)
>> >>
>> >> 4) schema reflection can't be performed when alembic migrations are
>> >> run in 'offline' mode (without connecting to a DB)
>> >> ...
>> >> (probably something else I've forgotten)
>> >>
>> >> So it's totally doable, but, IMO, there is no real benefit in
>> >> supporting running of schema migrations for SQLite.
>> >>
>> >>> ...attempts to drop schema generation based on models in favor of
>> migrations
>> >>
>> >> As long as we have a test that checks that the DB schema obtained by
>> >> running of migration scripts is equal to the one obtained by calling
>> >> metadata.create_all(), it's perfectly OK to use model definitions to
>> >> generate the initial DB schema for running of unit-tests as well as
>> >> for new installations of OpenStack (and this is actually faster than
>> >> running of migration scripts). ... and if we have strong objections
>> >> against doing metadata.create_all(), we can always use migration
>> >> scripts for both new installations and upgrades for all DB backends,
>> >> except SQLite.
>> >>
>> >> Thanks,
>> >> Roman
>> >>
>> >> On Sat, Feb 1, 2014 at 12:09 PM, Eugene Nikanorov
>> >> <enikanorov at mirantis.com> wrote:
>> >>> Boris,
>> >>>
>> >>> Sorry for the offtopic.
>> >>> Is switching to model-based schema generation is something decided? I
>> see
>> >>> the opposite: attempts to drop schema generation based on models in
>> favor of
>> >>> migrations.
>> >>> Can you point to some discussion threads?
>> >>>
>> >>> Thanks,
>> >>> Eugene.
>> >>>
>> >>>
>> >>>
>> >>> On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic <
>> bpavlovic at mirantis.com>
>> >>> wrote:
>> >>>>
>> >>>> Jay,
>> >>>>
>> >>>> Yep we shouldn't use migrations for sqlite at all.
>> >>>>
>> >>>> The major issue that we have now is that we are not able to ensure
>> that DB
>> >>>> schema created by migration & models are same (actually they are not
>> same).
>> >>>>
>> >>>> So before dropping support of migrations for sqlite & switching to
>> model
>> >>>> based created schema we should add tests that will check that model &
>> >>>> migrations are synced.
>> >>>> (we are working on this)
>> >>>>
>> >>>>
>> >>>>
>> >>>> Best regards,
>> >>>> Boris Pavlovic
>> >>>>
>> >>>>
>> >>>> On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev <
>> alazarev at mirantis.com>
>> >>>> wrote:
>> >>>>>
>> >>>>> Trevor,
>> >>>>>
>> >>>>> Such check could be useful on alembic side too. Good opportunity for
>> >>>>> contribution.
>> >>>>>
>> >>>>> Andrew.
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay <tmckay at redhat.com>
>> wrote:
>> >>>>>>
>> >>>>>> Okay,  I can accept that migrations shouldn't be supported on
>> sqlite.
>> >>>>>>
>> >>>>>> However, if that's the case then we need to fix up
>> savanna-db-manage so
>> >>>>>> that it checks the db connection info and throws a polite error to
>> the
>> >>>>>> user for attempted migrations on unsupported platforms. For
>> example:
>> >>>>>>
>> >>>>>> "Database migrations are not supported for sqlite"
>> >>>>>>
>> >>>>>> Because, as a developer, when I see a sql error trace as the
>> result of
>> >>>>>> an operation I assume it's broken :)
>> >>>>>>
>> >>>>>> Best,
>> >>>>>>
>> >>>>>> Trevor
>> >>>>>>
>> >>>>>> On Thu, 2014-01-30 at 15:04 -0500, Jay Pipes wrote:
>> >>>>>>> On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:
>> >>>>>>>> I was playing with alembic migration and discovered that
>> >>>>>>>> op.drop_column() doesn't work with sqlite.  This is because
>> sqlite
>> >>>>>>>> doesn't support dropping a column (broken imho, but that's
>> another
>> >>>>>>>> discussion).  Sqlite throws a syntax error.
>> >>>>>>>>
>> >>>>>>>> To make this work with sqlite, you have to copy the table to a
>> >>>>>>>> temporary
>> >>>>>>>> excluding the column(s) you don't want and delete the old one,
>> >>>>>>>> followed
>> >>>>>>>> by a rename of the new table.
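The copy/rename workaround described above can be sketched with Python's built-in sqlite3 module; the table and column names here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE agents (id INTEGER PRIMARY KEY, host TEXT, topic TEXT);
    INSERT INTO agents (host, topic) VALUES ('node-1', 'compute');

    -- SQLite has no "ALTER TABLE ... DROP COLUMN", so to drop "topic":
    -- 1. create a temporary table without the unwanted column
    CREATE TABLE agents_tmp (id INTEGER PRIMARY KEY, host TEXT);
    -- 2. copy the surviving columns across
    INSERT INTO agents_tmp (id, host) SELECT id, host FROM agents;
    -- 3. drop the old table and rename the copy into place
    DROP TABLE agents;
    ALTER TABLE agents_tmp RENAME TO agents;
""")
print(list(conn.execute("SELECT host FROM agents")))  # [('node-1',)]
```

This is the same recreate-and-copy dance sqlalchemy-migrate performs internally, with all the constraint/reflection caveats Roman lists below.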
>> >>>>>>>>
>> >>>>>>>> The existing 002 migration uses op.drop_column(), so I'm assuming
>> >>>>>>>> it's
>> >>>>>>>> broken, too (I need to check what the migration test is doing).
>>  I
>> >>>>>>>> was
>> >>>>>>>> working on an 003.
>> >>>>>>>>
>> >>>>>>>> How do we want to handle this?  Three good options I can think
>> of:
>> >>>>>>>>
>> >>>>>>>> 1) don't support migrations for sqlite (I think "no", but maybe)
>> >>>>>>>>
>> >>>>>>>> 2) Extend alembic so that op.drop_column() does the right thing
>> >>>>>>>> (more
>> >>>>>>>> open-source contributions for us, yay :) )
>> >>>>>>>>
>> >>>>>>>> 3) Add our own wrapper in savanna so that we have a drop_column()
>> >>>>>>>> method
>> >>>>>>>> that wraps copy/rename.
>> >>>>>>>>
>> >>>>>>>> Ideas, comments?
>> >>>>>>>
>> >>>>>>> Migrations should really not be run against SQLite at all -- only
>> on
>> >>>>>>> the
>> >>>>>>> databases that would be used in production. I believe the general
>> >>>>>>> direction of the contributor community is to be consistent around
>> >>>>>>> testing of migrations and to not run migrations at all in unit
>> tests
>> >>>>>>> (which use SQLite).
>> >>>>>>>
>> >>>>>>> Boris (cc'd) may have some more to say on this topic.
>> >>>>>>>
>> >>>>>>> Best,
>> >>>>>>> -jay
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> _______________________________________________
>> >>>>>>> OpenStack-dev mailing list
>> >>>>>>> OpenStack-dev at lists.openstack.org
>> >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

------------------------------

Message: 36
Date: Wed, 5 Feb 2014 15:58:42 +0400
From: Sergey Lukjanov <slukjanov at mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [savanna] Specific job type for streaming
        mapreduce? (and someday pipes)
Message-ID:
        <CA+GZd796p69omYkSzUvDugz8MFxnu=yR5mS_Rt-1TcE5P3NqqQ at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

I like the dot-separated name. There are several reasons for it:

* it'll not require changes in all Savanna subprojects;
* eventually we'd like to use not only Oozie for EDP (for example, if we'll
support Twitter Storm) and this new tools could require additional
'subtypes'.

Thanks for catching this.
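A sketch of the wrapper Trevor mentions below (function names invented) that would split a dotted type for comparison while leaving the stored job type string unchanged:

```python
def split_job_type(job_type):
    """Split 'MapReduce.streaming' into ('MapReduce', 'streaming').

    A plain type like 'Pig' yields ('Pig', None), so existing
    non-dotted types keep working without migration.
    """
    base, _, subtype = job_type.partition('.')
    return base, subtype or None

def same_base_type(a, b):
    """Compare two job types on their base type only."""
    return split_job_type(a)[0] == split_job_type(b)[0]

print(split_job_type("MapReduce.streaming"))               # ('MapReduce', 'streaming')
print(same_base_type("MapReduce.streaming", "MapReduce"))  # True
```

Because the subtype lives inside the existing type string, no database migration or client change is required; only comparisons need the wrapper.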


On Tue, Feb 4, 2014 at 10:47 PM, Trevor McKay <tmckay at redhat.com> wrote:

> Thanks Andrew.
>
> My other thought, which is in between, is to allow dotted types.
> "MapReduce.streaming" for example.
>
> This gives you the subtype flavor but keeps all the APIs the same.
> We just need a wrapper function to separate them when we compare types.
>
> Best,
>
> Trevor
>
> On Mon, 2014-02-03 at 14:57 -0800, Andrew Lazarev wrote:
> > I see two points:
> > * having Savanna types mapped to Oozie action types is intuitive for
> > hadoop users and this is something we would like to keep
> > * it is hard to distinguish different kinds of one job type
> >
> >
> > Adding 'subtype' field will solve both problems. Having it optional
> > will not break backward compatibility. Adding database migration
> > script is also pretty straightforward.
> >
> >
> > Summarizing, my vote is on "subtype" field.
> >
> >
> > Thanks,
> > Andrew.
> >
> >
> > On Mon, Feb 3, 2014 at 2:10 PM, Trevor McKay <tmckay at redhat.com>
> > wrote:
> >
> >         I was trying my best to avoid adding extra job types to
> >         support
> >         mapreduce variants like streaming or mapreduce with pipes, but
> >         it seems
> >         that adding the types is the simplest solution.
> >
> >         On the API side, Savanna can live without a specific job type
> >         by
> >         examining the data in the job record.  Presence/absence of
> >         certain
> >         things, or null values, etc, can provide adequate indicators
> >         to what
> >         kind of mapreduce it is.  Maybe a little bit subtle.
> >
> >         But for the UI, it seems that explicit knowledge of what the
> >         job is
> >         makes things easier and better for the user.  When a user
> >         creates a
> >         streaming mapreduce job and the UI is aware of the type later
> >         on at job
> >         launch, the user can be prompted to provide the right configs
> >         (i.e., the
> >         streaming mapper and reducer values).
> >
> >         The explicit job type also supports validation without having
> >         to add
> >         extra flags (which impacts the savanna client, and the JSON,
> >         etc). For
> >         example, a streaming mapreduce job does not require any
> >         specified
> >         libraries so the fact that it is meant to be a streaming job
> >         needs to be
> >         known at job creation time.
> >
> >         So, to that end, I propose that we add a MapReduceStreaming
> >         job type,
> >         and probably at some point we will have MapReducePiped too.
> >         It's
> >         possible that we might have other job types in the future too
> >         as the
> >         feature set grows.
> >
> >         There was an effort to make Savanna job types parallel Oozie
> >         action
> >         types, but in this case that's just not possible without
> >         introducing a
> >         "subtype" field in the job record, which leads to a database
> >         migration
> >         script and savanna client changes.
> >
> >         What do you think?
> >
> >         Best,
> >
> >         Trevor
> >
> >
> >
> >         _______________________________________________
> >         OpenStack-dev mailing list
> >         OpenStack-dev at lists.openstack.org
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

------------------------------

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


End of OpenStack-dev Digest, Vol 22, Issue 11
*********************************************







------------------------------

Message: 9
Date: Wed, 05 Feb 2014 06:57:14 -0800
From: Dan Smith <dms at danplanet.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests
        broke because of a nova patch
Message-ID: <52F2514A.9030301 at danplanet.com>
Content-Type: text/plain; charset=ISO-8859-1

> We don't have to add a new notification, but we have to add some
> new data to the nova notifications. At least for the delete
> instance notification, to remove the ceilometer nova notifier.
>
> A while ago, I registered a blueprint that explains which
> data is missing from the current nova notifications:
>
> https://blueprints.launchpad.net/nova/+spec/usage-data-in-notification
>
>
https://wiki.openstack.org/wiki/Ceilometer/blueprints/remove-ceilometer-nova-notifier

This seems like a much better way to do this.

I'm not opposed to a nova plugin, but if it's something that lives
outside the nova tree, I think there's going to be a problem of
constantly chasing internal API changes. IMHO, a plugin should live
(and be tested) in the nova tree and provide/consume a stableish API
to/from Ceilometer.

So, it seems like we've got the following options:

1. Provide the required additional data in our notifications to avoid
   the need for a plugin to hook into nova internals.
2. Continue to use a plugin in nova to scrape the additional data
   needed during certain events, but hopefully in a way that ties the
   plugin to the internal APIs in a maintainable way.

Is that right?

Personally, I think #1 is far superior to #2.

--Dan



------------------------------

Message: 10
Date: Wed, 5 Feb 2014 10:01:13 -0500
From: Andrew Laski <andrew.laski at rackspace.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical
        Multitenancy Discussion
Message-ID: <20140205150113.GT2672 at crypt>
Content-Type: text/plain; charset=utf-8; format=flowed

On 02/05/14 at 03:30am, Vishvananda Ishaya wrote:
>
>On Feb 5, 2014, at 2:38 AM, Florent Flament <florent.flament-ext at cloudwatt.com> wrote:
>
>> Hi Vish,
>>
>> Your approach looks very interesting. I especially like the idea of 'walking the tree of parent projects, to construct the set of roles'.
>>
>> Here are some issues that came to my mind:
>>
>>
>> Regarding policy rules enforcement:
>>
>> Considering the following projects:
>> * orga
>> * orga.projecta
>> * orga.projectb
>>
>> Let's assume that Joe has the following roles:
>> * `Member` of `orga`
>> * `admin` of `orga.projectb`
>>
>> Now Joe wishes to launch a VM on `orga.projecta` and grant a role to some user on `orga.projectb` (rights which he has). He would like to be able to do all of this with the same token (scoped on project `orga`?).
>>
>> For this scenario to be working, we would need to be able to store multiple roles (a tree of roles?) in the token, so that services would know which role is granted to the user on which project.
>>
>> At first, I guess we could stay with roles scoped to a single project. Joe would be able to do what he wants by getting a first token on `orga` or `orga.projecta` with a `Member` role, then a second token on `orga.projectb` with the `admin` role.
>
>This is a good point: having different roles on different levels of the hierarchy does lead to having to reauthenticate for certain actions. Keystone could pass the scope along with each role instead of a single global scope. The policy check could then be modified to match on role && prefix against the scope of the role, so policy like:
>
>"remove_user_from_project": "role:project_admin and scope_prefix:project_id"
>
>This starts to get complex and unwieldy however because a single token allows you to do anything and everything based on your roles. I think we need a healthy balance between ease of use and the principle of least privilege, so we might be best to stick to a single scope for each token and force a reauthentication to do adminy stuff in projectb.
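[Editor's sketch] The scope_prefix idea above could look something like the following in plain Python. This is illustrative only, not oslo.policy's real API; the function and data shapes are made up for the example:

```python
# Illustrative only -- not oslo.policy's actual API. A sketch of how a
# "scope_prefix" check could work if keystone passed the grant scope along
# with each role: a role held on a project also matches any descendant
# project under the dotted-name hierarchy.

def check_scope_prefix(token_roles, required_role, target_project_id):
    """token_roles: list of (role_name, granted_scope) pairs from the token."""
    for role, scope in token_roles:
        if role != required_role:
            continue
        # Exact match, or the target lives somewhere below the granted scope.
        if target_project_id == scope or target_project_id.startswith(scope + "."):
            return True
    return False

# Joe's grants from Florent's example:
roles = [("Member", "orga"), ("admin", "orga.projectb")]
```

With these grants, Joe's `Member` role on `orga` is inherited by `orga.projecta`, but his `admin` role stays confined to `orga.projectb`.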
>
>>
>>
>> Considering quotas enforcement:
>>
>> Let's say we want to set the following limits:
>>
>> * `orga` : max 10 VMs
>> * `orga.projecta` : max 8 VMs
>> * `orga.projectb` : max 8 VMs
>>
>> The idea would be that the `admin` of `orga` wishes to allow 8 VMs to projects `orga.projecta` or `orga.projectb`, but doesn't care how these VMs are spread, although he wishes to keep 2 VMs in `orga` for himself.
>
>This seems like a bit of a stretch as a use case. Sharing a set of quotas across two projects seems strange, and if we did have arbitrary nesting you could do the same by sticking a dummy project in between:
>
>orga: max 10
>orga.dummy: max 8
>orga.dummy.projecta: no max
>orga.dummy.projectb: no max
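[Editor's sketch] To make the dummy-project nesting concrete, here is a toy sketch of how a service could enforce limits up the tree, using the dotted-name convention from this thread. All names and numbers are illustrative, and `usage` is assumed to already aggregate each node's whole subtree:

```python
# Toy sketch only: nested quota enforcement over dotted project names.
# "None"/absent limit means "no max" at that level, as in the example above.

limits = {"orga": 10, "orga.dummy": 8}  # projecta/projectb: no max
usage = {
    "orga": 9,                  # 2 of its own + 7 under orga.dummy
    "orga.dummy": 7,
    "orga.dummy.projecta": 4,
    "orga.dummy.projectb": 3,
}

def ancestors(project_id):
    """['orga', 'orga.dummy', 'orga.dummy.projecta'] for the leaf project."""
    parts = project_id.split(".")
    return [".".join(parts[: i + 1]) for i in range(len(parts))]

def can_allocate(project_id, count):
    """A request must fit under every limit on the path up to the root."""
    for node in ancestors(project_id):
        limit = limits.get(node)
        if limit is not None and usage.get(node, 0) + count > limit:
            return False
    return True
```

This is the bookkeeping Florent describes below: each service would have to walk the tree of quotas and update the appropriate nodes on every allocation.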
>>
>> Then to be able to enforce these quotas, Nova (and all other services) would have to keep track of the tree of quotas, and update the appropriate nodes.
>>
>>
>> By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and quotas logic in a single service (Keystone?). OpenStack services (Nova, Cinder, ...) would then just have to ask this centralized access management service whether an action is authorized for a given token.
>
>So I threw out the idea the other day that quota enforcement should perhaps be done by gantt. Quotas seem to be a scheduling concern more than anything else.

I don't want to take this thread off topic, but I would argue against
this.  I don't want a request for a place to put an instance or volume
to mean that an instance or volume has been created with regards to
quotas.


>>
>> Florent Flament
>>
>>
>>
>> ----- Original Message -----
>> From: "Vishvananda Ishaya" <vishvananda at gmail.com>
>> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
>> Sent: Monday, February 3, 2014 10:58:28 PM
>> Subject: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy    Discussion
>>
>> Hello Again!
>>
>> At the meeting last week we discussed some options around getting true multitenancy in nova. The use case that we are trying to support can be described as follows:
>>
>> "Martha, the owner of ProductionIT, provides IT services to multiple Enterprise clients. She would like to offer cloud services to Joe at WidgetMaster and Sam at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has multiple QA and Development teams with many users. Joe needs the ability to create users, projects, and quotas, as well as the ability to list and delete resources across WidgetMaster. Martha needs to be able to set the quotas for both WidgetMaster and SuperDevShop; manage users, projects, and objects across the entire system; and set quotas for the client companies as a whole. She also needs to ensure that Joe can't see or mess with anything owned by Sam."
>>
>> As per the plan I outlined in the meeting, I have implemented a Proof-of-Concept that would allow me to see what changes were required in nova to get scoped tenancy working. I used a simple approach of faking out hierarchy by prepending the id of the larger scope to the id of the smaller scope. Keystone uses uuids internally, but for ease of explanation I will pretend it is using the name. I think we can all agree that "orga.projecta" is more readable than "b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8".
>>
>> The code basically creates the following five projects:
>>
>> orga
>> orga.projecta
>> orga.projectb
>> orgb
>> orgb.projecta
>>
>> I then modified nova to replace everywhere it searches or limits policy by project_id with a prefix match. This means that someone using project "orga" should be able to list/delete instances in orga, orga.projecta, and orga.projectb.
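[Editor's sketch] A minimal sketch of that prefix match (illustrative, not the actual nova change). The `scope + "."` guard keeps "orga" from matching an unrelated project like "organization"; in SQL the same filter becomes the index-friendly `project_id = :scope OR project_id LIKE :scope || '.%'`, which is the "prefix matching uses indexes" point in the pros list below:

```python
# Illustrative prefix match on dotted project ids. A scope matches itself
# and anything nested below it, but never a project that merely shares a
# leading substring.

def matches_scope(instance_project_id, scope):
    return (instance_project_id == scope
            or instance_project_id.startswith(scope + "."))
```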
>>
>> You can find the code here:
>>
>>  https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
>>  https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f
>>
>> Keep in mind that this is a prototype, but I'm hoping to come to some kind of consensus as to whether this is a reasonable approach. I've compiled a list of pros and cons.
>>
>> Pros:
>>
>>  * Very easy to understand
>>  * Minimal changes to nova
>>  * Good performance in db (prefix matching uses indexes)
>>  * Could be extended to cover more complex scenarios like multiple owners or multiple scopes
>>
>> Cons:
>>
>>  * Nova has no map of the hierarchy
>>  * Moving projects would require updates to ownership inside of nova
>>  * Complex scenarios involving delegation of roles may be a bad fit
>>  * Database upgrade to hierarchy could be tricky
>>
>> If this seems like a reasonable set of tradeoffs, there are a few things that need to be done inside of nova to bring this to a complete solution:
>>
>>  * Prefix matching needs to go into oslo.policy
>>  * Should the tenant_id returned by the api reflect the full "orga.projecta", or just the child "projecta", or match the scope: i.e. the first if you are authenticated to orga and the second if you are authenticated to the project?
>>  * Possible migrations for existing project_id fields
>>  * Use a different field for passing ownership scope instead of overloading project_id
>>  * Figure out how nested quotas should work
>>  * Look for other bugs relating to scoping
>>
>> Also, we need to decide how keystone should construct and pass this information to the services. The obvious case that could be supported today would be to allow a single level of hierarchy using domains. For example, if domains are active, keystone could pass domain.project_id for ownership_scope. This could be controversial because potentially domains are just for grouping users and shouldn't be applied to projects.
>>
>> I think the real value of this approach would be to allow nested projects with role inheritance. When keystone is creating the token, it could walk the tree of parent projects, construct the set of roles, and construct the ownership_scope as it walks to the root of the tree.
>>
>> Finally, similar fixes will need to be made in the other projects to bring this to a complete solution.
>>
>> Please feel free to respond with any input, and we will be having another Hierarchical Multitenancy Meeting on Friday at 1600 UTC to discuss.
>>
>> Vish
>>
>> On Jan 28, 2014, at 10:35 AM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
>>
>>> Hi Everyone,
>>>
>>> I apologize for the obtuse title, but there isn't a better succinct term to describe what is needed. OpenStack has no support for multiple owners of objects. This means that a variety of private cloud use cases are simply not supported. Specifically, objects in the system can only be managed on the tenant level or globally.
>>>
>>> The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a "domain" which supports part of this functionality, but without support from all of the projects, this concept is pretty useless.
>>>
>>> In IRC today I had a brief discussion about how we could address this. I have put some details and a straw man up here:
>>>
>>> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>>>
>>> I would like to discuss this strawman and organize a group of people to get actual work done by having an irc meeting this Friday at 1600UTC. I know this time is probably a bit tough for Europe, so if we decide we need a regular meeting to discuss progress then we can vote on a better time for this meeting.
>>>
>>> https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
>>>
>>> Please note that this is going to be an active team that produces code. We will *NOT* spend a lot of time debating approaches, and instead focus on making something that works and learning as we go. The output of this team will be a MultiTenant devstack install that actually works, so that we can ensure the features we are adding to each project work together.
>>>
>>> Vish
>>
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



>_______________________________________________
>OpenStack-dev mailing list
>OpenStack-dev at lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




------------------------------

Message: 11
Date: Wed, 5 Feb 2014 15:03:26 +0000
From: "Robert Li (baoli)" <baoli at cisco.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>, "John Garbutt
        (john at johngarbutt.com)" <john at johngarbutt.com>
Subject: Re: [openstack-dev] The simplified blueprint for PCI extra
        attributes and SR-IOV NIC blueprint
Message-ID: <CF17BC24.40B13%baoli at cisco.com>
Content-Type: text/plain; charset="us-ascii"

Hi John and all,

Yunhong's email mentioned about the SR-IOV NIC support BP:
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov

I'd appreciate your consideration of the approval of both BPs so that we
can have SR-IOV NIC support in Icehouse.

Thanks,
Robert


On 2/4/14 1:36 AM, "Jiang, Yunhong" <yunhong.jiang at intel.com> wrote:

>Hi, John and all,
>       I updated the blueprint
>https://blueprints.launchpad.net/nova/+spec/pci-extra-info-icehouse
>according to your feedback, to add the backward compatibility/upgrade
>issue/examples.
>
>       I tried to separate this BP from the SR-IOV NIC support as a standalone
>enhancement, because this requirement is a more generic PCI passthrough
>feature and will benefit other usage scenarios as well.
>
>       And the reasons that I want to finish this BP in the I release are:
>
>       a) it's a generic requirement, and pushing it into the I release is
>helpful to other scenarios.
>       b) I don't see an upgrade issue, and the only thing that will be
>discarded in the future is the PCI alias, if we all agree to use PCI flavor.
>But that effort will be small, and there is no conclusion on PCI flavor yet.
>       c) SR-IOV NIC support is complex; it will be really helpful if we can
>keep the ball rolling and push the all-agreed items forward.
>
>       Considering the big patch list for the I-3 release, I'm not optimistic
>about merging this in the I release, but as said, we should keep the ball
>rolling and move forward.
>
>Thanks
>--jyh
>
>_______________________________________________
>OpenStack-dev mailing list
>OpenStack-dev at lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




------------------------------

Message: 12
Date: Wed, 5 Feb 2014 10:05:45 -0500
From: Doug Hellmann <doug.hellmann at dreamhost.com>
To: Ben Nemec <openstack at nemebean.com>
Cc: "OpenStack Development Mailing List \(not for usage questions\)"
        <openstack-dev at lists.openstack.org>, Sean Dague <sean at dague.net>
Subject: Re: [openstack-dev] olso.config error on running Devstack
Message-ID:
        <CADb+p3ScwwQ53n9dPZQHcwY5VPrR9KOxNJXem+e+rV31TPpigA at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec <openstack at nemebean.com> wrote:

>  On 2014-01-08 12:14, Doug Hellmann wrote:
>
>
>
>
> On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec <openstack at nemebean.com> wrote:
>
>> On 2014-01-08 11:16, Sean Dague wrote:
>>
>>> On 01/08/2014 12:06 PM, Doug Hellmann wrote:
>>> <snip>
>>>
>>>> Yeah, that's what made me start thinking oslo.sphinx should be called
>>>> something else.
>>>>
>>>> Sean, how strongly do you feel about not installing oslo.sphinx in
>>>> devstack? I see your point, I'm just looking for alternatives to the
>>>> hassle of renaming oslo.sphinx.
>>>
>>>
>>> Doing the git thing is definitely not the right thing. But I guess I got
>>> lost somewhere along the way about what the actual problem is. Can
>>> someone write that up concisely? With all the things that have been
>>> tried/failed, why certain things fail, etc.
>>
>>  The problem seems to be when we pip install -e oslo.config on the
>> system, then pip install oslo.sphinx in a venv.  oslo.config is unavailable
>> in the venv, apparently because the namespace package for o.s causes the
>> egg-link for o.c to be ignored.  Pretty much every other combination I've
>> tried (regular pip install of both, or pip install -e of both, regardless
>> of where they are) works fine, but there seem to be other issues with all
>> of the other options we've explored so far.
>>
>> We can't remove the pip install -e of oslo.config because it has to be
>> used for gating, and we can't pip install -e oslo.sphinx because it's not a
>> runtime dep so it doesn't belong in the gate.  Changing the toplevel
>> package for oslo.sphinx was also mentioned, but has obvious drawbacks too.
>>
>> I think that about covers what I know so far.
>
>
>  Here's a link dstufft provided to the pip bug tracking this problem:
> https://github.com/pypa/pip/issues/3
>
> Doug
>
>   This just bit me again trying to run unit tests against a fresh Nova
> tree.    I don't think it's just me either - Matt Riedemann said he has
> been disabling site-packages in tox.ini for local tox runs.  We really need
> to do _something_ about this, even if it's just disabling site-packages by
> default in tox.ini for the affected projects.  A different option would be
> nice, but based on our previous discussion I'm not sure we're going to find
> one.
>
> Thoughts?
>

Is the problem isolated to oslo.sphinx? That is, do we end up with any
configurations where we have 2 oslo libraries installed in different modes
(development and "regular") where one of those 2 libraries is not
oslo.sphinx? Because if the issue is really just oslo.sphinx, we can rename
that to move it out of the namespace package.

Doug



>
> -Ben
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/0c553299/attachment-0001.html>

------------------------------

Message: 13
Date: Wed, 5 Feb 2014 17:07:08 +0200
From: Oshrit Feder <OSHRITF at il.ibm.com>
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] about the bp cpu-entitlement
Message-ID:
        <OF1F49B961.EE8B88D9-ONC2257C76.0052B4AB-C2257C76.00531251 at il.ibm.com>
Content-Type: text/plain; charset="us-ascii"


Hi Sahid,

Thank you for your interest in the cpu entitlement feature. As Paul
mentioned, we are joining the extensible resource effort and will
integrate this feature on top of it. We will be glad to keep you updated
on the progress, and will not hesitate to contact you for an extra hand.

Oshrit


-----Original Message-----
From: "Murray, Paul (HP Cloud Services)"
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: about the bp cpu-entitlement

Hi Sahid,

This is being done by Oshrit Feder, so I'll let her answer, but I know
that it is going to be implemented as an extensible resource (see:
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking)
so it is waiting for that to be done. That blueprint is making good
progress now and it should have more patches up this week. There is
another resource example nearly done for network entitlement (see:
https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement)


Paul.

-----Original Message-----
From: sahid [mailto:sahid.ferdjaoui at cloudwatt.com]
Sent: 04 February 2014 09:24
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] about the bp cpu-entitlement

Greetings,

  I saw a really interesting blueprint about cpu entitlement; it is
targeted for icehouse-3 and I would like to get some details about its
progress. Does the developer need help? I can give part of my time to
it.

    https://blueprints.launchpad.net/nova/+spec/cpu-entitlement

Thanks a lot,
s.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/edb11b1e/attachment-0001.html>

------------------------------

Message: 14
Date: Wed, 5 Feb 2014 20:42:03 +0530
From: Abdul Hannan Kanji <hannanabdul55 at gmail.com>
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] update an instance IP address in openstack
Message-ID:
        <CA+=W5P+2xKmr=VH6LNm+vSDDV=Wj+CWYuZ9AfBdD_4QF2NwVpA at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

I am writing a virtualization driver on my own, and I need to change the
instance's public IP address in the code. Is there any way I can go about it?
Also, how do I use the nova db package and add a column to the nova
instance table? Any help is highly appreciated.

Regards,

Abdul Hannan Kanji
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/5a99799a/attachment-0001.html>

------------------------------

Message: 15
Date: Wed, 5 Feb 2014 10:20:57 -0500
From: Doug Hellmann <doug.hellmann at dreamhost.com>
To: "OpenStack Development Mailing List (not for usage questions)"
        <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] pep8 gating fails due to
        tools/config/check_uptodate.sh
Message-ID:
        <CADb+p3TCyZXDww2tXaBdk-3QqjbmwVDsg-rFf9=HSgOxzGarNw at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

On Tue, Feb 4, 2014 at 6:39 PM, Joe Gordon <joe.gordon0 at gmail.com> wrote:

> On Tue, Feb 4, 2014 at 8:19 AM, Sean Dague <sean at dague.net> wrote:
> > On 02/05/2014 12:37 AM, Mark McLoughlin wrote:
> >> On Mon, 2014-01-13 at 16:49 +0000, Sahid Ferdjaoui wrote:
> >>> Hello all,
> >>>
> >>> It looks like 100% of the pep8 gate for nova is failing because of a
> >>> reported bug; we probably need to mark this as Critical.
> >>>
> >>>    https://bugs.launchpad.net/nova/+bug/1268614
> >>>
> >>> Ivan Melnikov has pushed a patchset waiting for review:
> >>>    https://review.openstack.org/#/c/66346/
> >>>
> >>>
> http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6IEludm9jYXRpb25FcnJvcjogXFwnL2hvbWUvamVua2lucy93b3Jrc3BhY2UvZ2F0ZS1ub3ZhLXBlcDgvdG9vbHMvY29uZmlnL2NoZWNrX3VwdG9kYXRlLnNoXFwnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4OTYzMTQzMzQ4OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
> >>
> >> This just came up on #openstack-infra ...
> >>
> >> It's a general problem that is going to occur more frequently.
> >>
> >> Nova now includes config options from keystoneclient and oslo.messaging
> >> in its sample config file.
> >>
> >> That means that as soon as a new option is added to either library, then
> >> check_uptodate.sh will start failing.
> >>
> >> One option discussed is to remove the sample config files from source
> >> control and have the sample be generated at build/packaging time.
> >>
> >> So long as we minimize the dependencies required to generate the sample
> >> file, this should be manageable.
> >
> > The one big drawback here is that today you can point people to a git
> > url, and they will then have a sample config file for Nova (or Tempest
> > or whatever you are pointing them at). If this is removed, then we'll
> > need / want some other way to make those samples easily available on the
> > web, not only at release time.
>
> +1, to the idea of removing this auto-generated file from the repo.
>
> How about publishing these as part of the docs, we can put them in the
> dev docs, so the nova options get published at:
>
> http://docs.openstack.org/developer/nova/
>
> etc, or we can make sure the main docs are always updated etc.
>

I just talked with Anne, and she said the doc build now includes a
Configuration Reference that extracts the options and builds nicely
formatted tables. Given that, I don't think it adds much to include the
config files as well.

Including the config file in either the developer documentation or the
packaging build makes more sense. I'm still worried that adding it to the
sdist generation means you would have to have a lot of tools installed just
to make the sdist. However, we could include a script with each app that
will generate the sample file for that app. Anyone installing from source
could run it to build their own file, and the distro packagers could run it
as part of their build and include the output in their package.
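[Editor's sketch] As a rough illustration of such a script, here is what the idea could look like. The option data and output format here are invented for the example; the real OpenStack mechanism for this is oslo's sample-config generator, not this code:

```python
# Rough illustration only: a per-app script that walks a table of registered
# options and emits a commented sample config. The SAMPLE_OPTS contents are
# invented; a real app would introspect its actual option registry.

SAMPLE_OPTS = {
    "DEFAULT": [("debug", "false", "Print debugging output.")],
    "database": [("connection", "sqlite:///nova.sqlite", "The DB connection URL.")],
}

def generate_sample(opts):
    """Emit a .conf-style sample from {group: [(name, default, help)]}."""
    lines = []
    for group, entries in opts.items():
        lines.append("[%s]" % group)
        for name, default, help_text in entries:
            lines.append("# %s" % help_text)
            lines.append("#%s=%s" % (name, default))
        lines.append("")
    return "\n".join(lines)
```

A distro packager could run such a script at build time and ship the output, keeping the generated file out of source control as proposed above.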

Doug



>
> >
> > On a related point, it's slightly bothered me that we're allowing libraries
> > to define stanzas in our config files. It seems like a leaky abstraction
> > that's only going to get worse over time as we graduate more of oslo,
> > and the coupling gets even worse.
> >
> > Has anyone considered if it's possible to stop doing that, and have the
> > libraries only provide an object model that takes args and instead leave
> > config declaration to the instantiation points for those objects?
> > Because having a nova.conf file that's 30% options coming from
> > underlying libraries that are not really controllable in nova seems like
> > a recipe for a headache. We already have a bunch of that issue today
> > with changing 3rd party logging libraries in oslo, which mostly means that
> > to change them in nova we first go and change the incubator, then sync the
> > changes back.
> >
> > I do realize this would be a rather substantial shift from current
> > approach, but current approach seems to be hitting a new complexity
> > point that we're only just starting to feel the pain on.
> >
> >         -Sean
> >
> > --
> > Sean Dague
> > Samsung Research America
> > sean at dague.net / sean.dague at samsung.com
> > http://dague.net
> >
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140205/5f26abf6/attachment.html>

------------------------------

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


End of OpenStack-dev Digest, Vol 22, Issue 13
*********************************************



More information about the OpenStack-dev mailing list