[openstack-dev] Hierarchical Multitenancy Discussion

Vishvananda Ishaya vishvananda at gmail.com
Tue Feb 4 18:47:57 UTC 2014


On Feb 4, 2014, at 2:16 AM, Vinod Kumar Boppanna <vinod.kumar.boppanna at cern.ch> wrote:

> Dear Vishy,
> 
> I want to mention a few points:
> 
> 1. In this hierarchical design, domains should go away and everything should be nested projects, with users added to a project (I guess a domain could then be treated as something like a top-level MainProject, for example).
> This also implies that a user does not need to belong to a domain when registering with openstack. The user will simply have a login name and a password; the user's scope is determined by his membership and role in a project.

I agree with you here but this is for keystone to decide. I understand some people feel domains are necessary for providing separate identity management backends.

> 2. I agree with you that this requires a change in the database schema (and of course code changes) in keystone and in all the other openstack services (which I guess is a big task)

It isn’t as hard as I thought. See my prototype on the thread.

> 3. I think a separate context field (instead of using project_id) can be used by keystone to return the scope (the user's authentication scope)

I agree

> 4. It is tricky to construct the API URLs. Imagine there are 10 levels in the hierarchy and the user is authenticated (scoped) at the first level. What should the URL look like for, say, listing some data of a project at level 10? And what should the same URL look like if the user authenticates at level 9?

Scope should not go into the url. For apis that still stick the project id in the url, they should continue to use the bottom-level project.
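
For example (hypothetical field names, just to illustrate), a token scoped to
a nested project could carry the full path in the request context while the
URL keeps only the leaf project id:

    context = {
        'user_id': 'vinod',
        'project_id': 'a11b2c33',                       # leaf id of projectA11
        'project_path': 'org.projectA.projectA1.projectA11',
        'roles': ['member'],
    }

    # the URL stays the same shape as today:
    # GET /v2/a11b2c33/servers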

> 5. Imagine the following scenario:
>    org has two projects, org.projectA and org.projectB,
>    and a user is added to both projects "org.projectA" and "org.projectB" with different roles (let's say project admin in one and member in the other).
> 
>    Now, if the user authenticates at level "org", what should keystone return in the context about the roles, scope, etc.? The user has different roles, and he should not be allowed to do update/delete/add actions in a project where he only has the member role.

Correct, the roles will depend on the scope.

> 
> 6. During the delegation of rights, if a user is given the role of admin at, say, level 1, that should mean that he possesses all the rights to manage the objects beneath level 1, irrespective of any other additional roles assigned to him at any other level.

Correct
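
A simple way to express that delegation check (just a sketch, not actual
keystone code) is a prefix match on the project path:

    def manages(admin_scope, project_path):
        # admin at 'org.projectA' manages anything at or beneath it
        return (project_path == admin_scope or
                project_path.startswith(admin_scope + '.'))

    manages('org.projectA', 'org.projectA.projectA1.projectA11')  # True
    manages('org.projectA', 'org.projectB.projectB1.projectA11')  # False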

> 
> 7. The one big question is: how can nova know the hierarchy, as this information is saved in keystone?
> 
>    For example: we have a project named "org.projectA.projectA1.projectA11" and let's say I want to define a quota for "projectA11". Should I store this quota in nova using the project ID "projectA11"
>           or the full ID "org.projectA.projectA1.projectA11"?  If the full name is not stored in nova, suppose the same project name also appears as "org.projectB.projectB1.projectA11".
>            If I store the quota for "projectA11", which one does that refer to, i.e. "org.projectA.projectA1.projectA11" or "org.projectB.projectB1.projectA11"? And if the user authenticates at "org.projectA" and requests to list
>           the quota for "projectA11", how can nova check whether the user has permission?

Quotas will have to store the full path to the project, like everything else.
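
So the quota table would be keyed by the full path, which also removes the
ambiguity between the two projectA11s; the same kind of prefix check answers
the permission question (again, only a sketch):

    quotas = {
        'org.projectA.projectA1.projectA11': {'instances': 10},
        'org.projectB.projectB1.projectA11': {'instances': 5},
    }

    def get_quota(scope, project_path):
        # visible only when the caller's scope is a prefix of the project path
        if not (project_path == scope or
                project_path.startswith(scope + '.')):
            raise Exception('scope %s cannot see %s' % (scope, project_path))
        return quotas[project_path]

    get_quota('org.projectA', 'org.projectA.projectA1.projectA11')
    # -> {'instances': 10}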

Vish

> 
> I still feel achieving hierarchical multitenancy is not easy (especially since the other services and keystone also need changes). In my opinion, the simplest way may be to use only three levels, i.e. domain -> projects -> users, and to make use of RBAC rules to delegate the rights for managing the domain or project to users.
> 
> 
> Cheers,
> Vinod Kumar Boppanna
> 
> 
> ________________________________________
> From: openstack-dev-request at lists.openstack.org [openstack-dev-request at lists.openstack.org]
> Sent: 04 February 2014 04:09
> To: openstack-dev at lists.openstack.org
> Subject: OpenStack-dev Digest, Vol 22, Issue 6
> 
> Send OpenStack-dev mailing list submissions to
>        openstack-dev at lists.openstack.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
>        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> or, via email, send a message with subject or body 'help' to
>        openstack-dev-request at lists.openstack.org
> 
> You can reach the person managing the list at
>        openstack-dev-owner at lists.openstack.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OpenStack-dev digest..."
> 
> 
> Today's Topics:
> 
>   1. Re: Barbican Incubation Review (Joe Gordon)
>   2. transactions in openstack REST API? (Chris Friesen)
>   3. Re: [glance][nova]improvement-of-accessing-to-glance (Jay Pipes)
>   4. Re: [Nova][Scheduler] Policy Based Scheduler and Solver
>      Scheduler (Chris Friesen)
>   5. Re: transactions in openstack REST API? (Andrew Laski)
>   6. Re: [Nova][Scheduler] Policy Based Scheduler and Solver
>      Scheduler (Yathiraj Udupi (yudupi))
>   7. Ugly Hack to deal with multiple versions (Adam Young)
>   8. Re: [Heat] [TripleO] Rolling updates spec re-written. RFC
>      (Clint Byrum)
>   9. Re: Python 3 compatibility (Chmouel Boudjnah)
>  10. Re: [nova][neutron] PCI pass-through SRIOV extra hr of
>      discussion today (Sandhya Dasu (sadasu))
>  11.  [Ironic][Ceilometer]bp:send-data-to-ceilometer (Hsu, Wan-Yen)
>  12. Re: transactions in openstack REST API? (Chris Friesen)
>  13. Re: [Heat] [TripleO] Rolling updates spec re-written.     RFC
>      (Thomas Herve)
>  14. Cinder + taskflow (Joshua Harlow)
>  15. Re: [Nova] do nova objects work for plugins? (Dan Smith)
>  16. Re: [Solum] Solum database schema modification proposal
>      (Angus Salkeld)
>  17. Re: [Heat] [TripleO] Rolling updates spec re-written. RFC
>      (Christopher Armstrong)
>  18. Re: [nova][neutron] PCI pass-through SRIOV extra hr of
>      discussion today (Irena Berezovsky)
>  19. Re: Cinder + taskflow (John Griffith)
>  20. [keystone][nova] Re: Hierarchicical Multitenancy  Discussion
>      (Vishvananda Ishaya)
>  21. Re: [Heat] [TripleO] Rolling updates spec re-written. RFC
>      (Clint Byrum)
>  22. [savanna] Specific job type for streaming mapreduce? (and
>      someday pipes) (Trevor McKay)
>  23. [Neutron] Interest in discussing vendor plugins for       L3
>      services? (Paul Michali)
>  24. Re: [Neutron] Assigning a floating IP to an       internal network
>      (Carl Baldwin)
>  25. Re: [Neutron] Interest in discussing vendor plugins for L3
>      services? (Hemanth Ravi)
>  26. Re: [savanna] Specific job type for streaming mapreduce? (and
>      someday pipes) (Andrew Lazarev)
>  27. Re: Ugly Hack to deal with multiple versions (Dean Troyer)
>  28. Re: Ugly Hack to deal with multiple versions (Christopher Yeoh)
>  29.  [Neutron] Adding package to requirements.txt (Hemanth Ravi)
>  30. Re: [Ironic] PXE driver deploy issues (Devananda van der Veen)
>  31. [nova] bp proposal: configurable locked vm api (Jae Sang Lee)
>  32. Re: Cinder + taskflow (Joshua Harlow)
>  33. Re: [Nova] Putting nova-network support into the V3       API
>      (Christopher Yeoh)
>  34. Re: [Nova] Putting nova-network support into the V3       API
>      (Joe Gordon)
>  35. Re: Nova style cleanups with associated hacking   check
>      addition (Joe Gordon)
>  36. Re: [Neutron] Adding package to requirements.txt (Mark McClain)
>  37. [Neutron] Developer documentation - linking to    slideshares?
>      (Collins, Sean)
>  38. Re: [nova] bp proposal: configurable locked vm api
>      (Russell Bryant)
>  39. Re: [Neutron] Adding package to requirements.txt (Hemanth Ravi)
>  40. [Neutron][IPv6] Agenda for Feb 4 - 1400 UTC - in
>      #openstack-meeting (Collins, Sean)
>  41. [Murano] Community meeting agenda - 02/04/2014
>      (Alexander Tivelkov)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Mon, 3 Feb 2014 11:00:33 -0800
> From: Joe Gordon <joe.gordon0 at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Barbican Incubation Review
> Message-ID:
>        <CAHXdxOepRTCsCHivAM=3zTYMv55-dwabk8X0dqb+EfsjBW8yVw at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> On Wed, Jan 29, 2014 at 3:28 PM, Justin Santa Barbara
> <justin at fathomdb.com> wrote:
>> Jarret Raim  wrote:
>> 
>>>> I'm presuming that this is our last opportunity for API review - if
>>>> this isn't the right occasion to bring this up, ignore me!
> 
> Apparently you are right:
> 
> For incubation
> 
> 'Project APIs should be reasonably stable'
> 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements#n23
> 
> And there is nothing about APIs in graduation.
> 
> 
>>> 
>>> I wouldn't agree here. The barbican API will be evolving over time as we
>>> add new functionality. We will, of course, have to deal with backwards
>>> compatibility and version as we do so.
>> 
>> I suggest that writing bindings for every major language, maintaining
>> them through API revisions, and dealing with all the software that
>> depends on your service is a much bigger undertaking than e.g. writing
>> Barbican itself ;-)  So it seems much more efficient to get v1 closer
>> to right.
>> 
>> I don't think this need turn into a huge upfront design project
>> either; I'd just like to see the TC approve your project with an API
>> that the PTLs have signed off on as meeting their known needs, rather
>> than one that we know will need changes.  Better to delay take-off
>> than commit ourselves to rebuilding the engine in mid-flight.
>> 
>> We don't need the functionality to be implemented in your first
>> release, but the API should allow the known upcoming changes.
>> 
>>> We're also looking at adopting the
>>> model that Keystone uses for API blueprints where the API changes are
>>> separate blueprints that are reviewed by a larger group than the
>>> implementations.
>> 
>> I think you should aspire to something greater than the adoption of Keystone V3.
>> 
>> I'm sorry to pick on your project - I think it is much more important
>> to OpenStack than many others, though that's a big part of why it is
>> important to avoid API churn.  The instability of our APIs is a huge
>> barrier to OpenStack adoption.  I'd love to see the TC review all
>> breaking API changes, but I don't think we're set up that way.
>> 
>> Justin
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ------------------------------
> 
> Message: 2
> Date: Mon, 3 Feb 2014 13:10:21 -0600
> From: Chris Friesen <chris.friesen at windriver.com>
> To: OpenStack Development Mailing List
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] transactions in openstack REST API?
> Message-ID: <52EFE99D.4040802 at windriver.com>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
> 
> 
> Has anyone ever considered adding the concept of transaction IDs to the
> openstack REST API?
> 
> I'm envisioning a way to handle long-running transactions more cleanly.
>  For example:
> 
> 1) A user sends a request to live-migrate an instance
> 2) Openstack acks the request and includes a "transaction ID" in the
> response.
> 3) The user can then poll (or maybe listen to notifications) to see
> whether the transaction is complete or hit an error.
> 
> I view this as most useful for things that could potentially take a long
> time to finish--instance creation/deletion/migration/evacuation are
> obvious, I'm sure there are others.
> 
> Also, anywhere that we use a "cast" RPC call we'd want to add that call
> to a list associated with that transaction in the database...that way
> the transaction is only complete when all the sub-jobs are complete.
> 
> I've seen some discussion about using transaction IDs to locate logs
> corresponding to a given transaction, but nothing about the end user
> being able to query the status of the transaction.
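> 
> A minimal sketch of what the client side could look like (the
> /transactions resource here is purely hypothetical):
> 
>     import time
>     import requests
> 
>     def wait_for(base_url, token, txn_id, interval=2):
>         headers = {'X-Auth-Token': token}
>         while True:
>             r = requests.get('%s/transactions/%s' % (base_url, txn_id),
>                              headers=headers)
>             state = r.json()['state']
>             if state in ('complete', 'error'):
>                 return state
>             time.sleep(interval)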
> 
> Chris
> 
> 
> 
> ------------------------------
> 
> Message: 3
> Date: Mon, 03 Feb 2014 14:14:28 -0500
> From: Jay Pipes <jaypipes at gmail.com>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev]
>        [glance][nova]improvement-of-accessing-to-glance
> Message-ID: <1391454868.21537.36.camel at cranky>
> Content-Type: text/plain; charset="UTF-8"
> 
> On Mon, 2014-02-03 at 10:59 -0800, Mark Washenberger wrote:
> 
>> On Mon, Feb 3, 2014 at 7:13 AM, Jay Pipes <jaypipes at gmail.com> wrote:
>>        On Mon, 2014-02-03 at 10:03 +0100, Flavio Percoco wrote:
>>> IMHO, the bit that should really be optimized is the selection of the
>>> store nodes where the image should be downloaded from. That is,
>>> selecting the nearest location from the image locations and this is
>>> something that perhaps should happen in glance-api, not nova.
>> 
>> 
>>        I disagree. The reason is because glance-api does not know
>>        where nova is. Nova does.
>> 
>>        I continue to think that the best performance gains will come
>>        from getting rid of glance-api entirely, putting the
>>        block-streaming bits into a separate Python library, and having
>>        Nova and Cinder pull image/volume bits directly from backend
>>        storage instead of going through the glance middleman.
>> 
>> 
>> When you say get rid of glance-api, do you mean the glance server
>> project? or glance-api as opposed to glance-registry?
> 
> I mean the latter.
> 
>> If its the latter, I think we're basically in agreement. However,
>> there may be a little bit of a terminology distinction that is
>> important. Here is the plan that is currently underway:
>> 
>> 1) Deprecate the registry deployment (done when v1 is deprecated)
>> 2) v2 glance api talks directly to the underlying database (done)
>> 3) Create a library in the images program that allows OpenStack
>> projects to share code for reading image data remotely and picking
>> optimal paths for bulk data transfer (In progress under the
>> "glance.store" title)
>> 4) v2 exposes locations that clients can directly access (partially
>> done, continues to need a lot of improvement)
>> 5) v2 still allows downloading images from the glance server as a
>> compatibility and lowest-common-denominator feature
> 
> All good.
> 
>> In 4, some work is complete, and some more is planned, but we still
>> need some more planning and design to figure out how to support
>> directly downloading images in a secure and general way.
> 
> Sounds good to me :)
> 
> Best,
> -jay
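> 
> A minimal sketch of the client-side selection that steps 3 and 4 would
> enable (the location format and scheme list are hypothetical):
> 
>     def pick_image_source(locations):
>         # prefer a location the compute host can reach directly,
>         # otherwise fall back to streaming through glance-api
>         for loc in locations:
>             if loc['url'].startswith(('rbd://', 'file://')):
>                 return 'direct', loc['url']
>         return 'glance-api', None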
> 
> 
> 
> 
> 
> ------------------------------
> 
> Message: 4
> Date: Mon, 3 Feb 2014 13:18:48 -0600
> From: Chris Friesen <chris.friesen at windriver.com>
> To: <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler
>        and Solver Scheduler
> Message-ID: <52EFEB98.1080007 at windriver.com>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
> 
> On 02/03/2014 12:28 PM, Khanh-Toan Tran wrote:
> 
>> Another though would be the need for Instance Group API [1].
>> Currently users can only request multiple instances of the same
>> flavors. These requests do not need LP to solve, just placing
>> instances one by one is sufficient. Therefore we need this API so
>> that users can request instances of different flavors, with some
>> relations (constraints) among them. The advantage is that this logic
>> and API will help us add Cinder volumes with ease (not sure how the
>> Cinder-stackers think about it, though).
> 
> I don't think that the instance group API actually helps here.  (I think
> it's a good idea, just not directly related to this.)
> 
> I think what we really want is the ability to specify an arbitrary list
> of instances (or other things) that you want to schedule, each of which
> may have different image/flavor, each of which may be part of an
> instance group, a specific network, have metadata which associates with
> a host aggregate, desire specific PCI passthrough devices, etc.
> 
> An immediate user of something like this would be heat, since it would
> let them pass the whole stack to the scheduler in one API call.  The
> scheduler could then take a more holistic view, possibly doing a better
> fitting job than if the instances are scheduled one-at-a-time.
> 
> Chris
> 
> 
> 
> ------------------------------
> 
> Message: 5
> Date: Mon, 3 Feb 2014 14:31:31 -0500
> From: Andrew Laski <andrew.laski at rackspace.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] transactions in openstack REST API?
> Message-ID: <20140203193131.GL2672 at crypt>
> Content-Type: text/plain; charset=us-ascii; format=flowed
> 
> On 02/03/14 at 01:10pm, Chris Friesen wrote:
>> 
>> Has anyone ever considered adding the concept of transaction IDs to
>> the openstack REST API?
>> 
>> I'm envisioning a way to handle long-running transactions more
>> cleanly.  For example:
>> 
>> 1) A user sends a request to live-migrate an instance
>> 2) Openstack acks the request and includes a "transaction ID" in the
>> response.
>> 3) The user can then poll (or maybe listen to notifications) to see
>> whether the transaction is complete or hit an error.
> 
> I've called them tasks, but I have a proposal up at
> https://blueprints.launchpad.net/nova/+spec/instance-tasks-api that is
> very similar to this.  It allows for polling, but doesn't get into
> notifications.  But this is a first step in this direction and it can be
> expanded upon later.
> 
> Please let me know if this covers what you've brought up, and add any
> feedback you may have to the blueprint.
> 
>> 
>> I view this as most useful for things that could potentially take a
>> long time to finish--instance creation/deletion/migration/evacuation
>> are obvious, I'm sure there are others.
>> 
>> Also, anywhere that we use a "cast" RPC call we'd want to add that
>> call to a list associated with that transaction in the
>> database...that way the transaction is only complete when all the
>> sub-jobs are complete.
>> 
>> I've seen some discussion about using transaction IDs to locate logs
>> corresponding to a given transaction, but nothing about the end user
>> being able to query the status of the transaction.
>> 
>> Chris
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ------------------------------
> 
> Message: 6
> Date: Mon, 3 Feb 2014 19:38:27 +0000
> From: "Yathiraj Udupi (yudupi)" <yudupi at cisco.com>
> To: "chris.friesen at windriver.com" <chris.friesen at windriver.com>,
>        "openstack-dev at lists.openstack.org"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler
>        and Solver Scheduler
> Message-ID: <000f427c.426efccd0713d2d6 at cisco.com>
> Content-Type: text/plain; charset="us-ascii"
> 
> The solver-scheduler is designed to solve for an arbitrary list of instances of different flavors. We need to have some updated apis in the scheduler to be able to pass on such requests. Instance group api is an initial effort to specify such groups.
> 
> 
> 
> Even now the existing solver scheduler patch,  works for a group request,  only that it is a group of a single flavor. It still solves once for the entire group based on the constraints on available capacity.
> 
> 
> 
> With updates to the api that call the solver scheduler we can easily demonstrate how an arbitrary group of VM request can be satisfied and solved together in a single constraint solver run. (LP based solver for now in the current patch, But can be any constraint solver)
> 
> 
> 
> Thanks,
> 
> Yathi.
> 
> 
> 
> 
> 
> ------ Original message------
> 
> From: Chris Friesen
> 
> Date: Mon, 2/3/2014 11:24 AM
> 
> To: openstack-dev at lists.openstack.org;
> 
> Subject:Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler
> 
> 
> 
> On 02/03/2014 12:28 PM, Khanh-Toan Tran wrote:
> 
>> Another though would be the need for Instance Group API [1].
>> Currently users can only request multiple instances of the same
>> flavors. These requests do not need LP to solve, just placing
>> instances one by one is sufficient. Therefore we need this API so
>> that users can request instances of different flavors, with some
>> relations (constraints) among them. The advantage is that this logic
>> and API will help us add Cinder volumes with ease (not sure how the
>> Cinder-stackers think about it, though).
> 
> I don't think that the instance group API actually helps here.  (I think
> it's a good idea, just not directly related to this.)
> 
> I think what we really want is the ability to specify an arbitrary list
> of instances (or other things) that you want to schedule, each of which
> may have different image/flavor, each of which may be part of an
> instance group, a specific network, have metadata which associates with
> a host aggregate, desire specific PCI passthrough devices, etc.
> 
> An immediate user of something like this would be heat, since it would
> let them pass the whole stack to the scheduler in one API call.  The
> scheduler could then take a more holistic view, possibly doing a better
> fitting job than if the instances are scheduled one-at-a-time.
> 
> Chris
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ------------------------------
> 
> Message: 7
> Date: Mon, 03 Feb 2014 14:50:39 -0500
> From: Adam Young <ayoung at redhat.com>
> To: OpenStack Development Mailing List
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] Ugly Hack to deal with multiple versions
> Message-ID: <52EFF30F.1020305 at redhat.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> We have to support old clients.
> Old clients expect that the URL that comes back for the service catalog
> has the version in it.
> Old clients don't do version negotiation.
> 
> Thus, we need an approach to not-break old clients while we politely
> encourage the rest of the world to move to later APIs.
> 
> 
> I know Keystone has this problem.  I've heard that some of the other
> services do as well.  Here is what I propose.  It is ugly, but it is a
> transition plan, and can be disabled once the old clients are deprecated:
> 
> HACK:  In a new client, look at the URL.  If it ends with /v2.0, chop it
> off and use the substring up to that point.
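> 
> A minimal sketch of that hack (illustrative only):
> 
>     def strip_version(url):
>         # 'http://keystone:5000/v2.0' -> 'http://keystone:5000'
>         url = url.rstrip('/')
>         if url.endswith('/v2.0'):
>             return url[:-len('/v2.0')]
>         return url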
> 
> Now, at this point you are probably going:  That is ugly, is it really
> necessary?  Can't we do something more correct?
> 
> No.  I mean, we are already doing something more correct in that the
> later versions of the Keystone client already support version
> discovery.  The problem is that older clients don't.
> 
> Alternatives:
> 
> 1.  Just chop the url.  Now only clients smart enough to do negotiation
> work. Older clients no longer work.  Suck it up.
> 2.  Put multiple endpoints in the service catalog.  Ugh.  Now we've just
> doubled the size of the service catalog, and we need new logic to get
> the identityv3 endpoints, cuz we need to leave "identity" endpoints for
> existing clients.
> 3. Do some sort of magic on the server side to figure out the right URL
> to respond to the client request.  This kind of magic was banned under
> the wizarding convention of '89.
> 
> 
> Can we accept that this is necessary, and vow to never let this happen
> again by removing the versions from the URLs after the current set of
> clients are deprecated?
> 
> 
> 
> 
> 
> ------------------------------
> 
> Message: 8
> Date: Mon, 03 Feb 2014 11:51:05 -0800
> From: Clint Byrum <clint at fewbar.com>
> To: openstack-dev <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec
>        re-written. RFC
> Message-ID: <1391456482-sup-7885 at fewbar.com>
> Content-Type: text/plain; charset=UTF-8
> 
> Excerpts from Robert Collins's message of 2014-02-03 10:47:06 -0800:
>> Quick thoughts:
>> 
>> - I'd like to be able to express a minimum service percentage: e.g. I
>> know I need 80% of my capacity available at anyone time, so an
>> additional constraint to the unit counts, is to stay below 20% down at
>> a time (and this implies that if 20% have failed, either stop or spin
>> up more nodes before continuing).
>> 
> 
> Right will add that.
> 
> One thing though, all failures lead to rollback. I put that in the
> 'Unresolved issues' section. Continuing a group operation with any
> failures is an entirely different change to Heat. We have a few choices,
> from a whole re-thinking of how we handle failures, to just a special
> type of resource group that tolerates failure percentages.
> 
>> The wait condition stuff seems to be conflating in the 'graceful
>> operations' stuff we discussed briefly at the summit, which in my head
>> at least is an entirely different thing - it's per node rather than
>> per group. If done separately that might make each feature
>> substantially easier to reason about.
> 
> Agreed. I think something more generic than an actual Heat wait condition
> would make more sense. Perhaps even returning all of the active scheduler
> tasks which the update must wait on would make sense. Then in the
> "graceful update" version we can just make the dynamically created wait
> conditions depend on the update pattern, which would have the same effect.
> 
> With the "maximum out of service" addition, we'll also need to make sure
> that upon the "must wait for these" things completing we evaluate state
> again before letting the update proceed.
> 
> 
> 
> ------------------------------
> 
> Message: 9
> Date: Mon, 3 Feb 2014 21:02:58 +0100
> From: Chmouel Boudjnah <chmouel at enovance.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>,  Sylvain Bauza
>        <sylvain.bauza at gmail.com>
> Subject: Re: [openstack-dev] Python 3 compatibility
> Message-ID:
>        <CAPeWyqwa6BmcpaVKy65poJXkV9SpkX-OFOeT=6sZ9AosS-w47w at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> On Mon, Feb 3, 2014 at 5:29 PM, Julien Danjou <julien at danjou.info> wrote:
> 
>> Last, but not least, trollius has been created by Victor Stinner, who
>> actually did that work with porting OpenStack in mind and as the first
>> objective.
>> 
> 
> 
> AFAIK: victor had plans to send a mail about it to the list later this week.
> 
> Chmouel.
> 
> ------------------------------
> 
> Message: 10
> Date: Mon, 3 Feb 2014 20:14:08 +0000
> From: "Sandhya Dasu (sadasu)" <sadasu at cisco.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>, Irena Berezovsky
>        <irenab at mellanox.com>, "Robert Li (baoli)" <baoli at cisco.com>, Robert
>        Kukura <rkukura at redhat.com>, "Brian Bowen (brbowen)"
>        <brbowen at cisco.com>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
>        extra hr of discussion today
> Message-ID: <CF156275.10B37%sadasu at cisco.com>
> Content-Type: text/plain; charset="windows-1252"
> 
> Hi,
>    Since openstack-meeting-alt seems to be in use, baoli and I are moving to openstack-meeting. Hopefully, Bob Kukura & Irena can join soon.
> 
> Thanks,
> Sandhya
> 
> From: Sandhya Dasu <sadasu at cisco.com<mailto:sadasu at cisco.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>
> Date: Monday, February 3, 2014 1:26 PM
> To: Irena Berezovsky <irenab at mellanox.com<mailto:irenab at mellanox.com>>, "Robert Li (baoli)" <baoli at cisco.com<mailto:baoli at cisco.com>>, Robert Kukura <rkukura at redhat.com<mailto:rkukura at redhat.com>>, "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>, "Brian Bowen (brbowen)" <brbowen at cisco.com<mailto:brbowen at cisco.com>>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV extra hr of discussion today
> 
> Hi all,
>    Both openstack-meeting and openstack-meeting-alt are available today. Let's meet at 2000 UTC @ openstack-meeting-alt.
> 
> Thanks,
> Sandhya
> 
> From: Irena Berezovsky <irenab at mellanox.com<mailto:irenab at mellanox.com>>
> Date: Monday, February 3, 2014 12:52 AM
> To: Sandhya Dasu <sadasu at cisco.com<mailto:sadasu at cisco.com>>, "Robert Li (baoli)" <baoli at cisco.com<mailto:baoli at cisco.com>>, Robert Kukura <rkukura at redhat.com<mailto:rkukura at redhat.com>>, "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>, "Brian Bowen (brbowen)" <brbowen at cisco.com<mailto:brbowen at cisco.com>>
> Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th
> 
> Hi Sandhya,
> Can you please elaborate on how you suggest extending the below bp for SRIOV Ports managed by different Mechanism Drivers?
> I am not biased to any specific direction here; I just think we need a common layer for managing SRIOV ports in neutron, since there is a common path between nova and neutron.
> 
> BR,
> Irena
> 
> 
> From: Sandhya Dasu (sadasu) [mailto:sadasu at cisco.com]
> Sent: Friday, January 31, 2014 6:46 PM
> To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development Mailing List (not for usage questions); Brian Bowen (brbowen)
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th
> 
> Hi Irena,
>      I was initially looking at https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info to take care of the extra information required to set up the SR-IOV port. When the scope of the BP was being decided, we had very little info about our own design so I didn't give any feedback about SR-IOV ports. But, I feel that this is the direction we should be going. Maybe we should target this in Juno.
> 
> Introducing, SRIOVPortProfileMixin would be creating yet another way to take care of extra port config. Let me know what you think.
> 
> Thanks,
> Sandhya
> 
> From: Irena Berezovsky <irenab at mellanox.com<mailto:irenab at mellanox.com>>
> Date: Thursday, January 30, 2014 4:13 PM
> To: "Robert Li (baoli)" <baoli at cisco.com<mailto:baoli at cisco.com>>, Robert Kukura <rkukura at redhat.com<mailto:rkukura at redhat.com>>, Sandhya Dasu <sadasu at cisco.com<mailto:sadasu at cisco.com>>, "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>, "Brian Bowen (brbowen)" <brbowen at cisco.com<mailto:brbowen at cisco.com>>
> Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th
> 
> Robert,
> Thank you very much for the summary.
> Please, see inline
> 
> From: Robert Li (baoli) [mailto:baoli at cisco.com]
> Sent: Thursday, January 30, 2014 10:45 PM
> To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack Development Mailing List (not for usage questions); Brian Bowen (brbowen)
> Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th
> 
> Hi,
> 
> We made a lot of progress today. We agreed that:
> -- vnic_type will be a top level attribute as binding:vnic_type
> -- BPs:
>     * Irena's https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for binding:vnic_type
>     * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be encapsulated in binding:profile
>     * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info will be encapsulated in binding:vif_details, which may include other information like security parameters. For SRIOV, vlan_id and profileid are candidates.
> -- new arguments for port-create will be implicit arguments. Future release may make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
> I think that currently we can make do without the profileid as an input parameter from the user. The mechanism driver will return a profileid in the vif output.
> 
> Please correct any misstatement in above.
> 
> Issues:
>  -- do we need a common utils/driver for SRIOV generic parts to be used by individual Mechanism drivers that support SRIOV? More details on what would be included in this sriov utils/driver? I'm thinking that a candidate would be the helper functions to interpret the pci_slot, which is proposed as a string. Anything else in your mind?
> [IrenaB] I thought on some SRIOVPortProfileMixin to handle and persist SRIOV port related attributes
> 
>  -- what should mechanism drivers put in binding:vif_details and how nova would use this information? as far as I see it from the code, a VIF object is created and populated based on information provided by neutron (from get network and get port)
> 
> Questions:
>  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, binding:vnic_type will not be set, I guess. Then would it be treated as a virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?
> [IrenaB] vnic_type will be added as an additional attribute to binding extension. For persistency it should be added in PortBindingMixin for non ML2. I didn't think to cover it as part of ML2 vnic_type bp.
> For the rest attributes, need to see what Bob plans.
> 
> -- is a neutron agent making decision based on the binding:vif_type?  In that case, it makes sense for binding:vnic_type not to be exposed to agents.
> [IrenaB] vnic_type is input parameter that will eventually cause certain vif_type to be sent to GenericVIFDriver and create network interface. Neutron agents periodically scan for attached interfaces. For example, OVS agent will look only for OVS interfaces, so if SRIOV interface is created, it won't be discovered by OVS agent.
> 
> Thanks,
> Robert
> 
> ------------------------------
> 
> Message: 11
> Date: Mon, 3 Feb 2014 20:30:59 +0000
> From: "Hsu, Wan-Yen" <wan-yen.hsu at hp.com>
> To: "openstack-dev at lists.openstack.org"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev]
>        [Ironic][Ceilometer]bp:send-data-to-ceilometer
> Message-ID:
>        <6F242FB09150F6468950B82E57B38C021675903B at G4W3217.americas.hpqcorp.net>
> 
> Content-Type: text/plain; charset="us-ascii"
> 
> 
> Hi,
> 
>  I sent this message on 01/31 but I did not see it get posted on the mailing list.  So, I am sending it again...
>  Given that different hardware will expose different sensors, I am hoping that we will have a flexible and extensible interface and data structures to accommodate different hardware.   For instance, some hardware can report additional power and thermal information (such as average power wattage, critical upper threshold of temperature, etc.) beyond the basic current/min/max wattages and temperature.  Some hardware exposes NIC and storage sensors as well.   IMO, solution 2 gives more flexibility to accommodate more sensors.   If there is a desire to define a set of common sensors such as power, fan, thermal, etc. as proposed by solution 1, then I think we will need an additional data structure such as extra_sensors with key/value pairs to allow hardware to report additional sensors.  Thanks!
> Regards,
> Wanyen
> 
>>>    Meter Names:
>>>        fanspeed, fanspeed.min, fanspeed.max, fanspeed.status
>>>        voltage, voltage.min, voltage.max, voltage.status
>>>        temperature, temperature.min, temperature.max, temperature.status
>>> 
>>>                'FAN 1': {
>>>                    'current_value': '4652',
>>>                    'min_value': '4200',
>>>                    'max_value': '4693',
>>>                    'status': 'ok'
>>>                }
>>>                'FAN 2': {
>>>                    'current_value': '4322',
>>>                    'min_value': '4210',
>>>                    'max_value': '4593',
>>>                    'status': 'ok'
>>>            },
>>>            'voltage': {
>>>                'Vcore': {
>>>                    'current_value': '0.81',
>>>                    'min_value': '0.80',
>>>                    'max_value': '0.85',
>>>                    'status': 'ok'
>>>                },
>>>                '3.3VCC': {
>>>                    'current_value': '3.36',
>>>                    'min_value': '3.20',
>>>                    'max_value': '3.56',
>>>                    'status': 'ok'
>>>                },
>>>            ...
>>>        }
>>>    }
>> 
>> 
>> are FAN 1, FAN 2, Vcore, etc... variable names or values that would
>> consistently show up? if the former, would it make sense to have the meters
>> be similar to fanspeed:<trait> where trait is FAN1, FAN2, etc...? if the
>> meter is just fanspeed, what would the volume be? FAN 1's current_value?
>> 
> 
> Different hardware will expose different number of each of these things. In
> Haomeng's first proposal, all hardware would expose a "fanspeed" and a
> "voltage" category, but with a variable number of meters in each category.
> In the second proposal, it looks like there are no categories and hardware
> exposes a variable number of meters whose names adhere to some consistent
> structure (eg, "FAN ?" and "V???").
> 
> It looks to me like the question is whether or not to use categories to
> group similar meters.
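> 
> One possible shape (hypothetical) that keeps the common categories from
> solution 1 and adds Wanyen's extra_sensors key/value suggestion:
> 
>     sample = {
>         'fanspeed': {
>             'FAN 1': {'current_value': '4652', 'min_value': '4200',
>                       'max_value': '4693', 'status': 'ok'},
>         },
>         # hardware-specific readings that do not fit the common categories
>         'extra_sensors': {
>             'average_power_wattage': '231',
>             'temperature_critical_upper_threshold': '85',
>         },
>     }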
> 
> ------------------------------
> 
> Message: 12
> Date: Mon, 3 Feb 2014 14:45:00 -0600
> From: Chris Friesen <chris.friesen at windriver.com>
> To: <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] transactions in openstack REST API?
> Message-ID: <52EFFFCC.60808 at windriver.com>
> Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
> 
> On 02/03/2014 01:31 PM, Andrew Laski wrote:
>> On 02/03/14 at 01:10pm, Chris Friesen wrote:
>>> 
>>> Has anyone ever considered adding the concept of transaction IDs to
>>> the openstack REST API?
>>> 
>>> I'm envisioning a way to handle long-running transactions more
>>> cleanly.  For example:
>>> 
>>> 1) A user sends a request to live-migrate an instance
>>> 2) Openstack acks the request and includes a "transaction ID" in the
>>> response.
>>> 3) The user can then poll (or maybe listen to notifications) to see
>>> whether the transaction is complete or hit an error.
>> 
>> I've called them tasks, but I have a proposal up at
>> https://blueprints.launchpad.net/nova/+spec/instance-tasks-api that is
>> very similar to this.  It allows for polling, but doesn't get into
>> notifications.  But this is a first step in this direction and it can be
>> expanded upon later.
>> 
>> Please let me know if this covers what you've brought up, and add any
>> feedback you may have to the blueprint.
> 
> 
> That actually looks really good.  I like the idea of subtasks for things
> like live migration.
> 
> The only real comment I have at this point is that you might want to
> talk to the "transaction ID" guys and maybe use your task UUID as the
> transaction ID that gets passed to other services acting on behalf of nova.
> 
> Chris
> 
> 
> 
> ------------------------------
> 
> Message: 13
> Date: Mon, 3 Feb 2014 21:46:05 +0100 (CET)
> From: Thomas Herve <thomas.herve at enovance.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec
>        re-written.     RFC
> Message-ID:
>        <966620715.789074.1391460365842.JavaMail.zimbra at enovance.com>
> Content-Type: text/plain; charset=utf-8
> 
>> So, I wrote the original rolling updates spec about a year ago, and the
>> time has come to get serious about implementation. I went through it and
>> basically rewrote the entire thing to reflect the knowledge I have
>> gained from a year of working with Heat.
>> 
>> Any and all comments are welcome. I intend to start implementation very
>> soon, as this is an important component of the HA story for TripleO:
>> 
>> https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates
> 
> Hi Clint, thanks for pushing this.
> 
> First, I don't think RollingUpdatePattern and CanaryUpdatePattern should be 2 different entities. The second just looks like a parametrization of the first (growth_factor=1?).
> 
> I then feel that using (abusing?) depends_on for update pattern is a bit weird. Maybe I'm influenced by the CFN design, but the separate UpdatePolicy attribute feels better (although I would probably use a property). I guess my main question is around the meaning of using the update pattern on a server instance. I think I see what you want to do for the group, where child_updating would return a number, but I have no idea what it means for a single resource. Could you detail the operation a bit more in the document?
> 
> It also seems that the interface you're creating (child_creating/child_updating) is fairly specific to your use case. For autoscaling we have a need for more generic notification system, it would be nice to find common grounds. Maybe we can invert the relationship? Add a "notified_resources" attribute, which would call hooks on the "parent" when actions are happening.
> 
> Thanks,
> 
> --
> Thomas
> 
> 
> 
> ------------------------------
> 
> Message: 14
> Date: Mon, 3 Feb 2014 20:53:11 +0000
> From: Joshua Harlow <harlowja at yahoo-inc.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] Cinder + taskflow
> Message-ID: <CF1541B3.55175%harlowja at yahoo-inc.com>
> Content-Type: text/plain; charset="us-ascii"
> 
> Hi all,
> 
> After talking with john g. about taskflow in cinder and seeing more and
> more reviews showing up I wanted to start a thread to gather all our
> lessons learned and how we can improve a little before continuing to add
> too many more refactoring and more reviews (making sure everyone is
> understands the larger goal and larger picture of switching pieces of
> cinder - piece by piece - to taskflow).
> 
> Just to catch everyone up.
> 
> Taskflow started integrating with cinder in havana and there has been some
> continued work around these changes:
> 
> - https://review.openstack.org/#/c/58724/
> - https://review.openstack.org/#/c/66283/
> - https://review.openstack.org/#/c/62671/
> 
> There has also been a few other pieces of work going in (forgive me if I
> missed any...):
> 
> - https://review.openstack.org/#/c/64469/
> - https://review.openstack.org/#/c/69329/
> - https://review.openstack.org/#/c/64026/
> 
> I think now would be a good time (and seems like a good idea) to create
> the discussion to learn how people are using taskflow, common patterns
> people like, don't like, common refactoring idioms that are occurring and
> most importantly to make sure that we refactor with a purpose and not just
> refactor for refactoring sake (which can be harmful if not done
> correctly). So to get a kind of forward and unified momentum behind
> further adjustments I'd just like to make sure we are all aligned and
> understood on the benefits and yes even the drawbacks that these
> refactorings bring.
> 
> So here is my little list of benefits:
> 
> - Objects that do just one thing (a common pattern I am seeing is
> determining what the one thing is, without making it so granular that it's
> hard to read).
> - Combining these objects together in a well-defined way (once again this
> has to be done carefully so as not to create too much granularity).
> - Ability to test these tasks and flows via mocking (something that is
> harder when it's not split up like this).
> - Features that aren't currently used such as state-persistence (but will
> help cinder become more crash-resistant in the future).
>  - This one will itself need to be understood before doing [I started
> etherpad @ https://etherpad.openstack.org/p/cinder-taskflow-persistence
> for this].
> 
> List of drawbacks (or potential drawbacks):
> 
> - Having an understanding of what taskflow is doing adds a new layer of
> things to know (hopefully the docs help in this area; that was their goal).
> - Selecting too granular a task or flow makes it harder to
> follow/understand the task/flow logic.
> - Focuses on the long-term (not necessarily short-term) state-management
> concerns (can't refactor Rome in a day).
> - Taskflow is being developed at the same time cinder is.
> 
> I'd be very interested in hearing about others' experiences and in making
> sure that we discuss the changes (in a well-documented and agreed-on
> approach) before jumping too much into the 'deep end' with a large amount
> of refactoring (aka, refactoring with a purpose). Let's make this thread
> as useful as we can and try to see how we can unify all these refactorings
> behind a common (and documented & agreed-on) purpose.
> 
> A thought: for the reviews above, I think it would be very useful to
> etherpad/write up more in the blueprint what the 'refactoring with a
> purpose' is so that it's more visible to future readers (and to active
> reviewers); hopefully this email can start to help clarify that purpose so
> that things proceed as smoothly as possible.
> 
> -Josh
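> 
> A minimal sketch of the task/flow pattern these reviews introduce
> (illustrative only, not actual cinder code):
> 
>     from taskflow import engines, task
>     from taskflow.patterns import linear_flow
> 
>     class CreateVolumeEntry(task.Task):
>         default_provides = 'volume_ref'
> 
>         def execute(self, size):
>             # create the database record for the volume
>             return {'size': size, 'status': 'creating'}
> 
>         def revert(self, size, result, **kwargs):
>             # undo the record if a later task fails
>             pass
> 
>     class AllocateOnBackend(task.Task):
>         def execute(self, volume_ref):
>             # talk to the storage backend, then mark it usable
>             volume_ref['status'] = 'available'
> 
>     flow = linear_flow.Flow('create-volume').add(CreateVolumeEntry(),
>                                                  AllocateOnBackend())
>     engines.run(flow, store={'size': 10})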
> 
> 
> 
> 
> ------------------------------
> 
> Message: 15
> Date: Mon, 03 Feb 2014 12:59:17 -0800
> From: Dan Smith <dms at danplanet.com>
> To: "Murray, Paul (HP Cloud Services)" <pmurray at hp.com>,  "OpenStack
>        Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova] do nova objects work for plugins?
> Message-ID: <52F00325.1020304 at danplanet.com>
> Content-Type: text/plain; charset=windows-1252
> 
>> Basically, if object A has object B as a child, and deserialization
>> finds object B to be an unrecognized version, it will try to back
>> port the object A to the version number of object B.
> 
> Right, which is why we rev the version of, say, the InstanceList when we
> have to rev Instance itself, and why we have unit tests to make sure
> that happens.
> 
>> It is not reasonable to bump the version of the compute_node when
>> new external plugin is developed. So currently the versioning seems
>> too rigid to implement extensible/pluggable objects this way.
> 
> So we're talking about an out-of-tree closed-source plugin, right? IMHO,
> Nova's versioning infrastructure is in place to make Nova able to handle
> upgrades; adding requirements for supporting out-of-tree plugins
> wouldn't be high on my priority list.
> 
>> A reasonable alternative might be for all objects to be deserialized
>> individually within a tree data structure, but I'm not sure what
>> might happen to parent/child compatibility without some careful
>> tracking.
> 
> I think it would probably be possible to make the deserializer specify
> the object and version it tripped over when passing the whole thing back
> to conductor to be backleveled. That seems reasonably useful to Nova itself.
> 
>> Another might be to say that nova objects are for nova use only and
>> that's just tough for plugin writers!
> 
> Well, for the same reason we don't provide a stable virt driver API
> (among other things) I don't think we need to be overly concerned with
> allowing arbitrary bolt-on code to hook in at this point.
> 
> Your concern is, I assume, allowing a resource metric plugin to shove
> actual NovaObject items into a container object of compute node metrics?
> Is there some reason that we can't just coerce all of these to a
> dict-of-strings or dict-of-known-primitive-types to save all of this
> complication? I seem to recall the proposal that led us down this road
> being "store/communicate arbitrary JSON blobs", but certainly there is a
> happy medium?
> 
> Given that the nova meetup is next week, perhaps that would be a good
> time to actually figure out a path forward?
> 
> --Dan
> 
> 
> 
> ------------------------------
> 
> Message: 16
> Date: Tue, 4 Feb 2014 07:05:02 +1000
> From: Angus Salkeld <angus.salkeld at rackspace.com>
> To: <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Solum] Solum database schema
>        modification proposal
> Message-ID: <20140203210502.GA20699 at rackspace.com>
> Content-Type: text/plain; charset="us-ascii"; format=flowed
> 
> On 03/02/14 16:22 +0000, Paul Montgomery wrote:
>> Solum community,
>> 
>> I notice that we are using String(36) UUID values in the database schema as primary key for many new tables that we are creating.  For example:
>> https://review.openstack.org/#/c/68328/10/solum/objects/sqlalchemy/application.py
>> 
>> Proposal: Add an int or bigint ID as the primary key, instead of UUID (the UUID field remains if needed), to improve database efficiency.
>> 
>> In my experience (I briefly pinged a DBA to verify), using a relatively long field as a primary key will increase resource utilization and reduce throughput.  This will become pronounced once the database no longer fits into memory, which would likely characterize any medium-to-large Solum installation.  This proposal would relatively painlessly improve database efficiency before a database schema change becomes difficult (many pull requests are in flight right now for schema).
>> 
>> In order to prevent the auto-incrementing ID from leaking usage information about the system, I would recommend using the integer-based ID field internally within Solum for efficiency and do not expose this ID field to users.  Users would only see UUID or non-ID values to prevent Solum metadata from leaking.
>> 
>> Thoughts?
> 
> I am reworking my patch now to use autoinc. int for the index and
> have a separate uuid.
> 
> -Angus
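> 
> A minimal sketch of that layout in SQLAlchemy (illustrative, not the
> actual Solum model):
> 
>     import uuid as uuid_lib
> 
>     from sqlalchemy import Column, Integer, String
>     from sqlalchemy.ext.declarative import declarative_base
> 
>     Base = declarative_base()
> 
>     class Application(Base):
>         __tablename__ = 'application'
>         # internal primary key, never exposed through the API
>         id = Column(Integer, primary_key=True, autoincrement=True)
>         # external identifier returned to users
>         uuid = Column(String(36), nullable=False, unique=True,
>                       default=lambda: str(uuid_lib.uuid4()))
>         name = Column(String(100))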
> 
>> 
> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ------------------------------
> 
> Message: 17
> Date: Mon, 3 Feb 2014 15:13:07 -0600
> From: Christopher Armstrong <chris.armstrong at rackspace.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec
>        re-written. RFC
> Message-ID:
>        <CAPkRfURj010_uhqWMLT6S8VoZv-eyK9hVgMBC+acdRmhKgtb4Q at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> Heya Clint, this BP looks really good - it should significantly simplify
> the implementation of scaling if this becomes a core Heat feature. Comments
> below.
> 
> On Mon, Feb 3, 2014 at 2:46 PM, Thomas Herve <thomas.herve at enovance.com>wrote:
> 
>>> So, I wrote the original rolling updates spec about a year ago, and the
>>> time has come to get serious about implementation. I went through it and
>>> basically rewrote the entire thing to reflect the knowledge I have
>>> gained from a year of working with Heat.
>>> 
>>> Any and all comments are welcome. I intend to start implementation very
>>> soon, as this is an important component of the HA story for TripleO:
>>> 
>>> https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates
>> 
>> Hi Clint, thanks for pushing this.
>> 
>> First, I don't think RollingUpdatePattern and CanaryUpdatePattern should
>> be 2 different entities. The second just looks like a parametrization of
>> the first (growth_factor=1?).
>> 
>> 
> Agreed.
> 
> 
> 
>> I then feel that using (abusing?) depends_on for update pattern is a bit
>> weird. Maybe I'm influenced by the CFN design, but the separate
>> UpdatePolicy attribute feels better (although I would probably use a
>> property). I guess my main question is around the meaning of using the
>> update pattern on a server instance. I think I see what you want to do for
>> the group, where child_updating would return a number, but I have no idea
>> what it means for a single resource. Could you detail the operation a bit
>> more in the document?
>> 
>> 
> 
> I agree that depends_on is weird and I think it should be avoided. I'm not
> sure a property is the right decision, though, assuming that it's the heat
> engine that's dealing with the rolling updates -- I think having the engine
> reach into a resource's properties would set a strange precedent. The CFN
> design does seem pretty reasonable to me, assuming an "update_policy" field
> in a HOT resource, referring to the policy that the resource should use.
> 
> 
> It also seems that the interface you're creating
>> (child_creating/child_updating) is fairly specific to your use case. For
>> autoscaling we have a need for more generic notification system, it would
>> be nice to find common grounds. Maybe we can invert the relationship? Add a
>> "notified_resources" attribute, which would call hooks on the "parent" when
>> actions are happening.
>> 
>> 
> 
> Yeah, this would be really helpful for stuff like load balancer
> notifications (and any of a number of different resource relationships).
> 
> --
> IRC: radix
> http://twitter.com/radix
> Christopher Armstrong
> Rackspace
> 
> ------------------------------
> 
> Message: 18
> Date: Mon, 3 Feb 2014 20:14:57 +0000
> From: Irena Berezovsky <irenab at mellanox.com>
> To: "Sandhya Dasu (sadasu)" <sadasu at cisco.com>, "Robert Li (baoli)"
>        <baoli at cisco.com>, Robert Kukura <rkukura at redhat.com>, "OpenStack
>        Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>, "Brian Bowen (brbowen)"
>        <brbowen at cisco.com>
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
>        extra hr of discussion today
> Message-ID:
>        <9D25E123B44F4A4291F4B5C13DA94E7788308B39 at MTLDAG02.mtl.com>
> Content-Type: text/plain; charset="us-ascii"
> 
> Seems the openstack-meeting-alt is busy, let's use openstack-meeting
> 
> From: Sandhya Dasu (sadasu) [mailto:sadasu at cisco.com]
> Sent: Monday, February 03, 2014 8:28 PM
> To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development Mailing List (not for usage questions); Brian Bowen (brbowen)
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV extra hr of discussion today
> 
> Hi all,
>    Both openstack-meeting and openstack-meeting-alt are available today. Let's meet at 2000 UTC @ openstack-meeting-alt.
> 
> Thanks,
> Sandhya
> 
> From: Irena Berezovsky <irenab at mellanox.com<mailto:irenab at mellanox.com>>
> Date: Monday, February 3, 2014 12:52 AM
> To: Sandhya Dasu <sadasu at cisco.com<mailto:sadasu at cisco.com>>, "Robert Li (baoli)" <baoli at cisco.com<mailto:baoli at cisco.com>>, Robert Kukura <rkukura at redhat.com<mailto:rkukura at redhat.com>>, "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>, "Brian Bowen (brbowen)" <brbowen at cisco.com<mailto:brbowen at cisco.com>>
> Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th
> 
> Hi Sandhya,
> Can you please elaborate on how you suggest extending the below bp for SRIOV Ports managed by different Mechanism Drivers?
> I am not biased to any specific direction here; I just think we need a common layer for managing SRIOV ports in neutron, since there is a common path between nova and neutron.
> 
> BR,
> Irena
> 
> 
> From: Sandhya Dasu (sadasu) [mailto:sadasu at cisco.com]
> Sent: Friday, January 31, 2014 6:46 PM
> To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development Mailing List (not for usage questions); Brian Bowen (brbowen)
> Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th
> 
> Hi Irena,
>      I was initially looking at https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info to take care of the extra information required to set up the SR-IOV port. When the scope of the BP was being decided, we had very little info about our own design so I didn't give any feedback about SR-IOV ports. But, I feel that this is the direction we should be going. Maybe we should target this in Juno.
> 
> Introducing SRIOVPortProfileMixin would create yet another way to take care of extra port config. Let me know what you think.
> 
> Thanks,
> Sandhya
> 
> From: Irena Berezovsky <irenab at mellanox.com<mailto:irenab at mellanox.com>>
> Date: Thursday, January 30, 2014 4:13 PM
> To: "Robert Li (baoli)" <baoli at cisco.com<mailto:baoli at cisco.com>>, Robert Kukura <rkukura at redhat.com<mailto:rkukura at redhat.com>>, Sandhya Dasu <sadasu at cisco.com<mailto:sadasu at cisco.com>>, "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>, "Brian Bowen (brbowen)" <brbowen at cisco.com<mailto:brbowen at cisco.com>>
> Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th
> 
> Robert,
> Thank you very much for the summary.
> Please, see inline
> 
> From: Robert Li (baoli) [mailto:baoli at cisco.com]
> Sent: Thursday, January 30, 2014 10:45 PM
> To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack Development Mailing List (not for usage questions); Brian Bowen (brbowen)
> Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th
> 
> Hi,
> 
> We made a lot of progress today. We agreed that:
> -- vnic_type will be a top level attribute as binding:vnic_type
> -- BPs:
>     * Irena's https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for binding:vnic_type
>     * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be encapsulated in binding:profile
>     * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info will be encapsulated in binding:vif_details, which may include other information like security parameters. For SRIOV, vlan_id and profileid are candidates.
> -- new arguments for port-create will be implicit arguments. Future release may make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
> I think that currently we can make do without the profileid as an input parameter from the user. The mechanism driver will return a profileid in the vif output.
> 
> Please correct any misstatement in above.
> 
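> (For illustration only - a rough sketch of what requesting the new attribute might look like from python-neutronclient; the credentials, endpoint and network id below are made up:)
> 
>     # Hypothetical example of asking for an SR-IOV port via the proposed
>     # binding:vnic_type attribute (values: virtio, direct, macvtap).
>     from neutronclient.v2_0 import client
> 
>     neutron = client.Client(username='admin', password='secret',
>                             tenant_name='demo',
>                             auth_url='http://controller:5000/v2.0')
>     port = neutron.create_port({'port': {'network_id': '<net-uuid>',
>                                          'binding:vnic_type': 'direct'}})
> 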
> Issues:
>  -- do we need a common utils/driver for SRIOV generic parts to be used by individual Mechanism drivers that support SRIOV? More details on what would be included in this sriov utils/driver? I'm thinking that a candidate would be the helper functions to interpret the pci_slot, which is proposed as a string. Anything else in your mind?
> [IrenaB] I thought on some SRIOVPortProfileMixin to handle and persist SRIOV port related attributes
> 
>  -- what should mechanism drivers put in binding:vif_details and how nova would use this information? as far as I see it from the code, a VIF object is created and populated based on information provided by neutron (from get network and get port)
> 
> Questions:
>  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, binding:vnic_type will not be set, I guess. Then would it be treated as a virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?
> [IrenaB] vnic_type will be added as an additional attribute to the binding extension. For persistence it should be added in PortBindingMixin for non-ML2. I didn't intend to cover it as part of the ML2 vnic_type BP.
> For the rest attributes, need to see what Bob plans.
> 
> -- is a neutron agent making decisions based on the binding:vif_type?  In that case, it makes sense for binding:vnic_type not to be exposed to agents.
> [IrenaB] vnic_type is an input parameter that will eventually cause a certain vif_type to be sent to the GenericVIFDriver to create the network interface. Neutron agents periodically scan for attached interfaces. For example, the OVS agent will look only for OVS interfaces, so if an SRIOV interface is created, it won't be discovered by the OVS agent.
> 
> Thanks,
> Robert
> 
> ------------------------------
> 
> Message: 19
> Date: Mon, 3 Feb 2014 14:16:25 -0700
> From: John Griffith <john.griffith at solidfire.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Cinder + taskflow
> Message-ID:
>        <CA+qL3LW=s8utbQu2ysgEAcx0M+KsS3gb9MhA2pa5=gda2i5i4Q at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> On Mon, Feb 3, 2014 at 1:53 PM, Joshua Harlow <harlowja at yahoo-inc.com> wrote:
>> Hi all,
>> 
>> After talking with john g. about taskflow in cinder and seeing more and
>> more reviews showing up I wanted to start a thread to gather all our
>> lessons learned and how we can improve a little before continuing to add
>> too many more refactorings and reviews (making sure everyone
>> understands the larger goal and larger picture of switching pieces of
>> cinder - piece by piece - to taskflow).
>> 
>> Just to catch everyone up.
>> 
>> Taskflow started integrating with cinder in havana and there has been some
>> continued work around these changes:
>> 
>> - https://review.openstack.org/#/c/58724/
>> - https://review.openstack.org/#/c/66283/
>> - https://review.openstack.org/#/c/62671/
>> 
>> There has also been a few other pieces of work going in (forgive me if I
>> missed any...):
>> 
>> - https://review.openstack.org/#/c/64469/
>> - https://review.openstack.org/#/c/69329/
>> - https://review.openstack.org/#/c/64026/
>> 
>> I think now would be a good time (and seems like a good idea) to create
>> the discussion to learn how people are using taskflow, common patterns
>> people like, don't like, common refactoring idioms that are occurring and
>> most importantly to make sure that we refactor with a purpose and not just
>> refactor for refactoring sake (which can be harmful if not done
>> correctly). So to get a kind of forward and unified momentum behind
>> further adjustments I'd just like to make sure we are all aligned and
>> understood on the benefits and yes even the drawbacks that these
>> refactorings bring.
>> 
>> So here is my little list of benefits:
>> 
>> - Objects that do just one thing (a common pattern I am seeing is
>> determining what the one thing is, without making it so granular that it's
>> hard to read).
>> - Combining these objects together in a well-defined way (once again it
>> has to be done carefully so as not to create too much granularity).
>> - Ability to test these tasks and flows via mocking (something that is
>> harder when its not split up like this).
>> - Features that aren't currently used such as state-persistence (but will
>> help cinder become more crash-resistant in the future).
>>  - This one will itself need to be understood before doing [I started
>> etherpad @ https://etherpad.openstack.org/p/cinder-taskflow-persistence
>> for this].
>> 
>> List of drawbacks (or potential drawbacks):
>> 
>> - Having an understanding of what taskflow is doing adds a new layer of
>> things to know (hopefully the docs help in this area; that was their goal).
>> - Selecting too granular a task or flow makes it harder to
>> follow/understand the task/flow logic.
>> - Focuses on the long-term (not necessarily short-term) state-management
>> concerns (can't refactor rome in a day).
>> - Taskflow is being developed at the same time cinder is.
>> 
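>> (For anyone newer to the library, here is a minimal, illustrative task/flow
>> sketch - not cinder's actual create_volume flow - assuming taskflow's basic
>> helper API:)
>> 
>>     from taskflow import engines
>>     from taskflow import task
>>     from taskflow.patterns import linear_flow
>> 
>>     class CreateEntry(task.Task):
>>         # one small unit of work; revert() undoes it if a later task fails
>>         def execute(self, name):
>>             return 'entry-for-%s' % name
>> 
>>         def revert(self, name, result, **kwargs):
>>             pass  # cleanup would go here
>> 
>>     flow = linear_flow.Flow('demo').add(CreateEntry(provides='entry'))
>>     engines.load(flow, store={'name': 'volume-1'}).run()
>> 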
>> I'd be very interested in hearing about others' experiences and to make
>> sure that we discuss the changes (in a well documented and agreed-on
>> approach) before jumping too far into the 'deep end' with a large amount
>> of refactoring (aka, refactoring with a purpose). Let's make this thread
>> as useful as we can and try to see how we can unify all these refactorings
>> behind a common (and documented & agreed-on) purpose.
>> 
>> A thought, for the reviews above, I think it would be very useful to
>> etherpad/writeup more in the blueprint what the 'refactoring with a
>> purpose' is so that its more known to future readers (and for active
>> reviewers), hopefully this email can start to help clarify that purpose so
>> that things proceed as smoothly as possible.
>> 
>> -Josh
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> Thanks for putting this together Josh, I just wanted to add a couple
> of things from my own perspective.
> 
> The end-goals of taskflow (specifically persistence and better state
> management) are the motivating factors for going this route.  We've
> made a first step with create_volume however we haven't advanced it
> enough to realize the benefits that we set out to gain by this in the
> first place.  I still think it's the right direction and IMO we should
> keep on the path, however there are a number of things that I've
> noticed that make me lean towards refraining from moving other API
> calls to taskflow right now.
> 
> 1. Currently taskflow is pretty much a functional equivalent
> replacement of what was in the volume manager.  We're not really
> gaining that much from it (yet).
> 
> 2. taskflow adds quite a bit of code and indirection that currently
> IMHO adds a bit of complexity and difficulty in trouble-shooting (I
> think we're fixing this up and it will continue to get better, I also
> think this is normal for introduction of new implementations, no
> criticism intended).
> 
> 3. Our unit testing / mock infrastructure is broken right now for
> items that use taskflow.  Particularly cinder.test.test_volume can not
> be run independently until we fix the taskflow fakes and mock objects.
> I def don't want anything else taskflow related merged until this
> problem is addressed.
> 
> 4. We really haven't come up with solutions to the problems we set out
> to solve in the first place with our first implementation of taskflow
> (state management and persistence).  Until we have a pattern for
> solving this I think we should refrain from implementing it in other
> calls.  A number of people volunteered to work on this at the summit
> in Hong Kong and have stated that they "have code" however that code
> or those patches haven't materialized so I think we need to regroup
> and get this work moving again.
> 
> Anyway, I'd like to stabilize things for the create_volume
> implementation that we have and have a clear well defined pattern that
> solves problems before we go crazy refactoring every API call to use
> taskflow and assume all of the potential risk that goes along with it.
> 
> Thanks,
> John
> 
> 
> 
> ------------------------------
> 
> Message: 20
> Date: Mon, 3 Feb 2014 13:58:28 -0800
> From: Vishvananda Ishaya <vishvananda at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [keystone][nova] Re: Hierarchicical
>        Multitenancy    Discussion
> Message-ID: <BFA9750F-9913-4F39-B18C-7BCE47D8AD4D at gmail.com>
> Content-Type: text/plain; charset="windows-1252"
> 
> Hello Again!
> 
> At the meeting last week we discussed some options around getting true multitenancy in nova. The use case that we are trying to support can be described as follows:
> 
> "Martha, the owner of ProductionIT provides it services to multiple Enterprise clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has multiple QA and Development teams with many users. Joe needs the ability create users, projects, and quotas, as well as the ability to list and delete resources across WidgetMaster. Martha needs to be able to set the quotas for both WidgetMaster and SuperDevShop; manage users, projects, and objects across the entire system; and set quotas for the client companies as a whole. She also needs to ensure that Joe can't see or mess with anything owned by Sam."
> 
> As per the plan I outlined in the meeting I have implemented a Proof-of-Concept that would allow me to see what changes were required in nova to get scoped tenancy working. I used a simple approach of faking out hierarchy by prepending the id of the larger scope to the id of the smaller scope. Keystone uses uuids internally, but for ease of explanation I will pretend like it is using the name. I think we can all agree that "orga.projecta" is more readable than "b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8".
> 
> The code basically creates the following five projects:
> 
> orga
> orga.projecta
> orga.projectb
> orgb
> orgb.projecta
> 
> I then modified nova to replace everywhere where it searches or limits policy by project_id to do a prefix match. This means that someone using project "orga" should be able to list/delete instances in orga, orga.projecta, and orga.projectb.
> 
> You can find the code here:
> 
>  https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
>  https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f
> 
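> (A rough sketch of the prefix-match idea, not the actual patch - a request scoped to "orga" matches anything owned by orga or its children:)
> 
>     def project_matches(scope, project_id):
>         # 'orga' matches 'orga', 'orga.projecta', 'orga.projectb', ...
>         return project_id == scope or project_id.startswith(scope + '.')
> 
>     # In the db layer the same idea keeps index usage, roughly:
>     #   query.filter(models.Instance.project_id.like(scope + '.%'))
> 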
> Keep in mind that this is a prototype, but I'm hoping to come to some kind of consensus as to whether this is a reasonable approach. I've compiled a list of pros and cons.
> 
> Pros:
> 
>  * Very easy to understand
>  * Minimal changes to nova
>  * Good performance in db (prefix matching uses indexes)
>  * Could be extended to cover more complex scenarios like multiple owners or multiple scopes
> 
> Cons:
> 
>  * Nova has no map of the hierarchy
>  * Moving projects would require updates to ownership inside of nova
>  * Complex scenarios involving delegation of roles may be a bad fit
>  * Database upgrade to hierarchy could be tricky
> 
> If this seems like a reasonable set of tradeoffs, there are a few things that need to be done inside of nova to bring this to a complete solution:
> 
>  * Prefix matching needs to go into oslo.policy
>  * Should the tenant_id returned by the api reflect the full "orga.projecta", or just the child "projecta" or match the scope: i.e. the first if you are authenticated to orga and the second if you are authenticated to the project?
>  * Possible migrations for existing project_id fields
>  * Use a different field for passing ownership scope instead of overloading project_id
>  * Figure out how nested quotas should work
>  * Look for other bugs relating to scoping
> 
> Also, we need to decide how keystone should construct and pass this information to the services. The obvious case that could be supported today would be to allow a single level of hierarchy using domains. For example, if domains are active, keystone could pass domain.project_id for ownership_scope. This could be controversial because potentially domains are just for grouping users and shouldn't be applied to projects.
> 
> I think the real value of this approach would be to allow nested projects with role inheritance. When keystone is creating the token, it could walk the tree of parent projects, construct the set of roles, and construct the ownership_scope as it walks to the root of the tree.
> 
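> (A rough sketch of that walk, with hypothetical helper callables for fetching a project's parent and a user's roles on it:)
> 
>     def build_scope_and_roles(project, get_parent, get_roles):
>         names, roles = [], set()
>         while project is not None:
>             names.append(project['name'])
>             roles.update(get_roles(project['id']))
>             project = get_parent(project)
>         # e.g. ('orga.projecta', {'member', 'project_admin'})
>         return '.'.join(reversed(names)), roles
> 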
> Finally, similar fixes will need to be made in the other projects to bring this to a complete solution.
> 
> Please feel free to respond with any input, and we will be having another Hierarchical Multitenancy Meeting on Friday at 1600 UTC to discuss.
> 
> Vish
> 
> On Jan 28, 2014, at 10:35 AM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
> 
>> Hi Everyone,
>> 
>> I apologize for the obtuse title, but there isn't a better succinct term to describe what is needed. OpenStack has no support for multiple owners of objects. This means that a variety of private cloud use cases are simply not supported. Specifically, objects in the system can only be managed on the tenant level or globally.
>> 
>> The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a "domain" which supports part of this functionality, but without support from all of the projects, this concept is pretty useless.
>> 
>> In IRC today I had a brief discussion about how we could address this. I have put some details and a straw man up here:
>> 
>> https://wiki.openstack.org/wiki/HierarchicalMultitenancy
>> 
>> I would like to discuss this strawman and organize a group of people to get actual work done by having an irc meeting this Friday at 1600UTC. I know this time is probably a bit tough for Europe, so if we decide we need a regular meeting to discuss progress then we can vote on a better time for this meeting.
>> 
>> https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
>> 
>> Please note that this is going to be an active team that produces code. We will *NOT* spend a lot of time debating approaches, and instead focus on making something that works and learning as we go. The output of this team will be a MultiTenant devstack install that actually works, so that we can ensure the features we are adding to each project work together.
>> 
>> Vish
> 
> 
> ------------------------------
> 
> Message: 21
> Date: Mon, 03 Feb 2014 14:09:35 -0800
> From: Clint Byrum <clint at fewbar.com>
> To: openstack-dev <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec
>        re-written. RFC
> Message-ID: <1391464621-sup-4293 at fewbar.com>
> Content-Type: text/plain; charset=UTF-8
> 
> Excerpts from Thomas Herve's message of 2014-02-03 12:46:05 -0800:
>>> So, I wrote the original rolling updates spec about a year ago, and the
>>> time has come to get serious about implementation. I went through it and
>>> basically rewrote the entire thing to reflect the knowledge I have
>>> gained from a year of working with Heat.
>>> 
>>> Any and all comments are welcome. I intend to start implementation very
>>> soon, as this is an important component of the HA story for TripleO:
>>> 
>>> https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates
>> 
>> Hi Clint, thanks for pushing this.
>> 
>> First, I don't think RollingUpdatePattern and CanaryUpdatePattern should be 2 different entities. The second just looks like a parametrization of the first (growth_factor=1?).
> 
> Perhaps they can just be one. Until I find parameters which would need
> to mean something different, I'll just use UpdatePattern.
> 
>> 
>> I then feel that using (abusing?) depends_on for update pattern is a bit weird. Maybe I'm influenced by the CFN design, but the separate UpdatePolicy attribute feels better (although I would probably use a property). I guess my main question is around the meaning of using the update pattern on a server instance. I think I see what you want to do for the group, where child_updating would return a number, but I have no idea what it means for a single resource. Could you detail the operation a bit more in the document?
>> 
> 
> I would be o-k with adding another keyword. The idea in abusing depends_on
> is that it changes the core language less. Properties is definitely out
> for the reasons Christopher brought up, properties is really meant to
> be for the resource's end target only.
> 
> UpdatePolicy in cfn is a single string, and causes very generic rolling
> update behavior. I want this resource to be able to control multiple
> groups as if they are one in some cases (Such as a case where a user
> has migrated part of an app to a new type of server, but not all.. so
> they will want to treat the entire aggregate as one rolling update).
> 
> I'm o-k with overloading it to allow resource references, but I'd like
> to hear more people take issue with depends_on before I select that
> course.
> 
> To answer your question, using it with a server instance allows
> rolling updates across non-grouped resources. In the example the
> rolling_update_dbs does this.
> 
>> It also seems that the interface you're creating (child_creating/child_updating) is fairly specific to your use case. For autoscaling we have a need for more generic notification system, it would be nice to find common grounds. Maybe we can invert the relationship? Add a "notified_resources" attribute, which would call hooks on the "parent" when actions are happening.
>> 
> 
> I'm open to a different interface design. I don't really have a firm
> grasp of the generic behavior you'd like to model though. This is quite
> concrete and would be entirely hidden from template authors, though not
> from resource plugin authors. Attributes sound like something where you
> want the template authors to get involved in specifying, but maybe that
> was just an overloaded term.
> 
> So perhaps we can replace this interface with the generic one when your
> use case is more clear?
> 
> 
> 
> ------------------------------
> 
> Message: 22
> Date: Mon, 03 Feb 2014 17:10:34 -0500
> From: Trevor McKay <tmckay at redhat.com>
> To: openstack-dev at lists.openstack.org
> Subject: [openstack-dev] [savanna] Specific job type for streaming
>        mapreduce? (and someday pipes)
> Message-ID: <1391465434.9655.14.camel at tmckaylt.rdu.redhat.com>
> Content-Type: text/plain; charset="UTF-8"
> 
> 
> I was trying my best to avoid adding extra job types to support
> mapreduce variants like streaming or mapreduce with pipes, but it seems
> that adding the types is the simplest solution.
> 
> On the API side, Savanna can live without a specific job type by
> examining the data in the job record.  Presence/absence of certain
> things, or null values, etc, can provide adequate indicators to what
> kind of mapreduce it is.  Maybe a little bit subtle.
> 
> But for the UI, it seems that explicit knowledge of what the job is
> makes things easier and better for the user.  When a user creates a
> streaming mapreduce job and the UI is aware of the type later on at job
> launch, the user can be prompted to provide the right configs (i.e., the
> streaming mapper and reducer values).
> 
> The explicit job type also supports validation without having to add
> extra flags (which impacts the savanna client, and the JSON, etc). For
> example, a streaming mapreduce job does not require any specified
> libraries so the fact that it is meant to be a streaming job needs to be
> known at job creation time.
> 
> So, to that end, I propose that we add a MapReduceStreaming job type,
> and probably at some point we will have MapReducePiped too. It's
> possible that we might have other job types in the future too as the
> feature set grows.
> 
> There was an effort to make Savanna job types parallel Oozie action
> types, but in this case that's just not possible without introducing a
> "subtype" field in the job record, which leads to a database migration
> script and savanna client changes.
> 
> What do you think?
> 
> Best,
> 
> Trevor
> 
> 
> 
> 
> 
> ------------------------------
> 
> Message: 23
> Date: Mon, 3 Feb 2014 17:19:35 -0500
> From: Paul Michali <pcm at cisco.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Neutron] Interest in discussing vendor
>        plugins for     L3 services?
> Message-ID: <3F7D62C1-5DDB-404D-AED8-51C647D6D6B0 at cisco.com>
> Content-Type: text/plain; charset="us-ascii"
> 
> I'd like to see if there is interest in discussing vendor plugins for L3 services. The goal is to strive for consistency across vendor plugins/drivers and across service types (if possible/sensible). Some of this could/should apply to reference drivers as well. I'm thinking about these topics (based on questions I've had on VPNaaS - feel free to add to the list):
> 
> - How to handle vendor specific validation (e.g. say a vendor has restrictions or added capabilities compared to the reference drivers for attributes).
> - Providing "client" feedback (e.g. should help and validation be extended to include vendor capabilities or should it be delegated to server reporting?)
> - Handling and reporting of errors to the user (e.g. how to indicate to the user that a failure has occurred establishing a IPSec tunnel in device driver?)
> - Persistence of vendor specific information (e.g. should new tables be used or should/can existing reference tables be extended?).
> - Provider selection for resources (e.g. should we allow --provider attribute on VPN IPSec policies to have vendor specific policies or should we rely on checks at connection creation for policy compatibility?)
> - Handling of multiple device drivers per vendor (e.g. have service driver determine which device driver to send RPC requests, or have agent determine what driver requests should go to - say based on the router type)
> If you have an interest, please reply to me and include some days/times that would be good for you, and I'll send out a notice on the ML of the time/date and we can discuss.
> 
> Looking forward to hearing from you!
> 
> PCM (Paul Michali)
> 
> MAIL          pcm at cisco.com
> IRC            pcm_  (irc.freenode.net)
> TW            @pmichali
> GPG key    4525ECC253E31A83
> Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
> 
> 
> ------------------------------
> 
> Message: 24
> Date: Mon, 3 Feb 2014 15:30:56 -0700
> From: Carl Baldwin <carl at ecbaldwin.net>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron] Assigning a floating IP to an
>        internal network
> Message-ID:
>        <CALiLy7oi63mH9frT3hhxjv24QgS=+yXMx1kH0PMDNTVEoE_VUQ at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
> 
> I have looked at the code that you posted. I am concerned that there
> are db queries performed inside nested loops.  The approach looks
> sound from a functional perspective but I think these loops will run
> very slowly and increase pressure on the db.
> 
> I tend to think that if a router has an extra route on it then we can
> take it at its word that IPs in the scope of the extra route would be
> reachable from the router.  In the absence of running a dynamic
> routing protocol, that is what is typically done by a router.
> 
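> (For concreteness, a rough sketch of that lightweight check - not the patch itself - picking a router whose extra routes cover the internal address:)
> 
>     import netaddr
> 
>     def router_covers(router, internal_ip):
>         # router['routes'] is the extra-routes list of
>         # {'destination': <cidr>, 'nexthop': <ip>} dicts
>         return any(netaddr.IPAddress(internal_ip) in netaddr.IPNetwork(r['destination'])
>                    for r in router.get('routes', []))
> 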
> Maybe you could use an example to expound on your concerns that we'll
> pick the wrong router.  Without a specific example in mind, I tend to
> think that we should leave it up to the tenants to avoid the ambiguity
> that would get us in to this predicament by using mutually exclusive
> subnets on their various networks, especially where there are
> different routers involved.
> 
> You could use a phased approach where you first hammer out the simpler
> approach and follow-on with an enhancement for the more complicated
> approach.  It would allow progress to be made on the patch that you
> have up and more time to think about the need for the more complex
> approach.  You could mark that the first patch partially implements
> the blueprint.
> 
> Carl
> 
> 
> 
> On Thu, Jan 30, 2014 at 6:21 AM, Ofer Barkai <ofer at checkpoint.com> wrote:
>> Hi all,
>> 
>> During the implementation of:
>> https://blueprints.launchpad.net/neutron/+spec/floating-ip-extra-route
>> 
>> Which suggest allowing assignment of floating IP to internal address
>> not directly connected to the router, if there is a route configured on
>> the router to the internal address.
>> 
>> In: https://review.openstack.org/55987
>> 
>> There seem to be 2 possible approaches for finding an appropriate
>> router for a floating IP assignment, while considering extra routes:
>> 
>> 1. Use the first router that has a route matching the internal address
>> which is the target of the floating IP.
>> 
>> 2. Use the first router that has a matching route, _and_ verify that
>> there exists a path of connected devices to the network object to
>> which the internal address belongs.
>> 
>> The first approach solves the simple case of a gateway on a compute
>> hosts that protects an internal network (which is the motivation for
>> this enhancement).
>> 
>> However, if the same (or overlapping) addresses are assigned to
>> different internal networks, there is a risk that the first approach
>> might find the wrong router.
>> 
>> Still, the second approach might force many DB lookups to trace the path from
>> the router to the internal network. This overhead might not be
>> desirable if the use case does not (at least, initially) appear in the
>> real world.
>> 
>> Patch set 6 presents the first, lightweight approach, and Patch set 5
>> presents the second, more accurate approach.
>> 
>> I would appreciate the opportunity to get more points of view on this subject.
>> 
>> Thanks,
>> 
>> -Ofer
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ------------------------------
> 
> Message: 25
> Date: Mon, 3 Feb 2014 14:44:22 -0800
> From: Hemanth Ravi <hemanthraviml at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron] Interest in discussing vendor
>        plugins for L3 services?
> Message-ID:
>        <CAP3yDp2qbq1iH2E_su6uwf_+_4obM0T=v79Kc1e3fgTJJYgf7Q at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi,
> 
> I would be interested in this discussion. Below are some time slot
> suggestions:
> 
> Mon: 19:00, 20:00 UTC (11:00, 12:00 PST)
> Wed: 20:00, 21:00 UTC (12:00, 13:00 PST)
> Thu: 19:00, 20:00, 21:00 UTC (11:00, 12:00, 13:00 PST)
> 
> Thanks,
> -hemanth
> 
> 
> On Mon, Feb 3, 2014 at 2:19 PM, Paul Michali <pcm at cisco.com> wrote:
> 
>> I'd like to see if there is interest in discussing vendor plugins for L3
>> services. The goal is to strive for consistency across vendor
>> plugins/drivers and across service types (if possible/sensible). Some of
>> this could/should apply to reference drivers as well. I'm thinking about
>> these topics (based on questions I've had on VPNaaS - feel free to add to
>> the list):
>> 
>> 
>>   - How to handle vendor specific validation (e.g. say a vendor has
>>   restrictions or added capabilities compared to the reference drivers for
>>   attributes).
>>   - Providing "client" feedback (e.g. should help and validation be
>>   extended to include vendor capabilities or should it be delegated to server
>>   reporting?)
>>   - Handling and reporting of errors to the user (e.g. how to indicate
>>   to the user that a failure has occurred establishing a IPSec tunnel in
>>   device driver?)
>>   - Persistence of vendor specific information (e.g. should new tables
>>   be used or should/can existing reference tables be extended?).
>>   - Provider selection for resources (e.g. should we allow --provider
>>   attribute on VPN IPSec policies to have vendor specific policies or should
>>   we rely on checks at connection creation for policy compatibility?)
>>   - Handling of multiple device drivers per vendor (e.g. have service
>>   driver determine which device driver to send RPC requests, or have agent
>>   determine what driver requests should go to - say based on the router type)
>> 
>> If you have an interest, please reply to me and include some days/times
>> that would be good for you, and I'll send out a notice on the ML of the
>> time/date and we can discuss.
>> 
>> Looking forward to hearing from you!
>> 
>> PCM (Paul Michali)
>> 
>> MAIL          pcm at cisco.com
>> IRC            pcm_  (irc.freenode.net)
>> TW            @pmichali
>> GPG key    4525ECC253E31A83
>> Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
> 
> ------------------------------
> 
> Message: 26
> Date: Mon, 3 Feb 2014 14:57:59 -0800
> From: Andrew Lazarev <alazarev at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [savanna] Specific job type for streaming
>        mapreduce? (and someday pipes)
> Message-ID:
>        <CANzyysgV8DcO_O=ZC_TFvkGFX3Hfn7xittEzB0gQA4XFb=+Neg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> I see two points:
> * having Savanna types mapped to Oozie action types is intuitive for hadoop
> users and this is something we would like to keep
> * it is hard to distinguish different kinds of one job type
> 
> Adding a 'subtype' field will solve both problems. Having it optional will
> not break backward compatibility. Adding a database migration
> script is also pretty straightforward.
> 
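> (A minimal sketch of such a migration, assuming an Alembic-style script and an illustrative 'jobs' table name - not the project's actual migration:)
> 
>     from alembic import op
>     import sqlalchemy as sa
> 
>     def upgrade():
>         # optional column, so existing rows and clients keep working
>         op.add_column('jobs', sa.Column('subtype', sa.String(255), nullable=True))
> 
>     def downgrade():
>         op.drop_column('jobs', 'subtype')
> 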
> Summarizing, my vote is on "subtype" field.
> 
> Thanks,
> Andrew.
> 
> 
> On Mon, Feb 3, 2014 at 2:10 PM, Trevor McKay <tmckay at redhat.com> wrote:
> 
>> 
>> I was trying my best to avoid adding extra job types to support
>> mapreduce variants like streaming or mapreduce with pipes, but it seems
>> that adding the types is the simplest solution.
>> 
>> On the API side, Savanna can live without a specific job type by
>> examining the data in the job record.  Presence/absence of certain
>> things, or null values, etc, can provide adequate indicators to what
>> kind of mapreduce it is.  Maybe a little bit subtle.
>> 
>> But for the UI, it seems that explicit knowledge of what the job is
>> makes things easier and better for the user.  When a user creates a
>> streaming mapreduce job and the UI is aware of the type later on at job
>> launch, the user can be prompted to provide the right configs (i.e., the
>> streaming mapper and reducer values).
>> 
>> The explicit job type also supports validation without having to add
>> extra flags (which impacts the savanna client, and the JSON, etc). For
>> example, a streaming mapreduce job does not require any specified
>> libraries so the fact that it is meant to be a streaming job needs to be
>> known at job creation time.
>> 
>> So, to that end, I propose that we add a MapReduceStreaming job type,
>> and probably at some point we will have MapReducePiped too. It's
>> possible that we might have other job types in the future too as the
>> feature set grows.
>> 
>> There was an effort to make Savanna job types parallel Oozie action
>> types, but in this case that's just not possible without introducing a
>> "subtype" field in the job record, which leads to a database migration
>> script and savanna client changes.
>> 
>> What do you think?
>> 
>> Best,
>> 
>> Trevor
>> 
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> ------------------------------
> 
> Message: 27
> Date: Mon, 3 Feb 2014 17:31:57 -0600
> From: Dean Troyer <dtroyer at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Ugly Hack to deal with multiple versions
> Message-ID:
>        <CAOJFoEt8vcJ_TF8Es6yneCUP06Pa6A1x5-neJiXKkYfKh84GBA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> On Mon, Feb 3, 2014 at 1:50 PM, Adam Young <ayoung at redhat.com> wrote:
>> 
>> HACK:  In a new client, look at the URL.  If it ends with /v2.0, chop it
>> off and use the substring up to that point.
>> 
>> Now, at this point you are probably going:  That is ugly, is it really
>> necessary?  Can't we do something more correct?
>> 
> 
> At this point I think we are stuck with hard-coding some legacy
> compatibility like this for the near future.  Fortunately Identity is an
> easy one to handle, Compute is going to be a #$^%! as the commonly
> documented case has a version not at the end.
> 
> I've been playing with variations on this strategy and I think it is our
> least bad option...
> 
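> (The hack itself is tiny - roughly something like the following, ignoring trailing slashes; just an illustration, not the actual client code:)
> 
>     def strip_version_suffix(url, suffix='/v2.0'):
>         # chop a trailing '/v2.0' off a catalog URL, if present
>         url = url.rstrip('/')
>         return url[:-len(suffix)] if url.endswith(suffix) else url
> 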
> Can we accept that this is necessary, and vow to never let this happen
>> again by removing the versions from the URLs after the current set of
>> clients are deprecated?
>> 
> 
> +1
> 
> There is another hack to think about:  if public_endpoint and/or
> admin_endpoint are not set in keystone.conf, all of the discovered urls use
> localhost: http://localhost:8770/v2.0/.  Discovery falls over again.
> 
> I don't know how common this is but I have encountered it at least once or
> twice.  Is this the only place those config values are used?  It seems like
> a better default could be worked out here too;  is 'localhost' ever the
> right thing to advertise in a real-world deployment?
> 
> dt
> 
> --
> 
> Dean Troyer
> dtroyer at gmail.com
> 
> ------------------------------
> 
> Message: 28
> Date: Tue, 4 Feb 2014 10:10:55 +1030
> From: Christopher Yeoh <cbkyeoh at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Ugly Hack to deal with multiple versions
> Message-ID:
>        <CANCY3ed=EghyQ3PA2tMW+TQeqjKHMLi1sWuHrfmBjAPXnEWNJA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> On Tue, Feb 4, 2014 at 6:20 AM, Adam Young <ayoung at redhat.com> wrote:
> 
>> We have to support old clients.
>> Old clients expect that the URL that comes back for the service catalog
>> has the version in it.
>> Old clients don't do version negotiation.
>> 
>> Thus, we need an approach to not-break old clients while we politely
>> encourage the rest of the world to move to later APIs.
>> 
>> 
>> I know Keystone has this problem.  I've heard that some of the other
>> services do as well.  Here is what I propose.  It is ugly, but it is a
>> transition plan, and can be disabled once the old clients are deprecated:
>> 
>> HACK:  In a new client, look at the URL.  If it ends with /v2.0, chop it
>> off and use the substring up to that point.
>> 
>> 
> +1 to this. I agree its ugly, but I think its the least-worst solution.
> Nova certainly has this problem with the url including the version suffix
> in the service catalog.
> 
> Chris
> 
> ------------------------------
> 
> Message: 29
> Date: Mon, 3 Feb 2014 15:45:47 -0800
> From: Hemanth Ravi <hemanthraviml at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev]  [Neutron] Adding package to requirements.txt
> Message-ID:
>        <CAP3yDp3ZV0GVYv2gA53nOLgOR9=Y30PhxyX0KBietp1XKVdTeQ at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi,
> 
> We are in the process of submitting a third party Neutron plugin that uses
> urllib3 for the connection pooling feature available in urllib3. httplib2
> doesn't provide this capability.
> 
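> (For context, a minimal example of the pooling feature being referred to; the endpoint below is made up:)
> 
>     import urllib3
> 
>     # one PoolManager reuses connections across requests to the same host
>     http = urllib3.PoolManager(num_pools=4, maxsize=10)
>     resp = http.request('GET', 'http://controller:9696/v2.0/networks')
> 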
> Is it possible to add urllib3 to requirements.txt? If this is OK, please
> advise on the process to add this.
> 
> Thanks,
> -hemanth
> 
> ------------------------------
> 
> Message: 30
> Date: Mon, 3 Feb 2014 15:53:00 -0800
> From: Devananda van der Veen <devananda.vdv at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Ironic] PXE driver deploy issues
> Message-ID:
>        <CAExZKEo99ZuSejZGxhqiwcvz5CVpiQtoP1DVmpsT_GC2RRdMKw at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> On Fri, Jan 31, 2014 at 12:13 PM, Devananda van der Veen <
> devananda.vdv at gmail.com> wrote:
> 
>> I think your driver should implement a wrapper around both VendorPassthru
>> interfaces and call each appropriately, depending on the request. This
>> keeps each VendorPassthru driver separate, and encapsulates the logic about
>> when to call each of them in the driver layer.
>> 
>> 
> I've posted an example of this here:
> 
>  https://review.openstack.org/#/c/70863/
> 
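> (In the same spirit, a generic sketch of such a wrapper - the names and the supported_methods attribute are purely illustrative, not Ironic's actual interface:)
> 
>     class CompositeVendorPassthru(object):
>         def __init__(self, *vendors):
>             self.vendors = vendors
> 
>         def vendor_passthru(self, task, **kwargs):
>             # dispatch to whichever wrapped interface supports the method
>             method = kwargs.get('method')
>             for vendor in self.vendors:
>                 if method in getattr(vendor, 'supported_methods', ()):
>                     return vendor.vendor_passthru(task, **kwargs)
>             raise ValueError('Unsupported vendor method: %s' % method)
> 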
> -Deva
> 
> ------------------------------
> 
> Message: 31
> Date: Tue, 4 Feb 2014 09:25:51 +0900
> From: Jae Sang Lee <hyangii at gmail.com>
> To: OpenStack Development Mailing List
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [nova] bp proposal: configurable locked vm
>        api
> Message-ID:
>        <CAKrFU7Uak_y+4aCVbNb70yFUX-tuYJt9NaSOMsOxELRK+wiU1w at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi, Stackers.
> 
> The deadline for icehouse is coming really quickly and I understand that there
> is a lot of work to do, but I would like to get your attention for my
> blueprint for a configurable locked VM API.
> 
> - https://blueprints.launchpad.net/nova/+spec/configurable-locked-vm-api
> 
> So far, the developer places the decorator (@check_instance_lock) on the
> function's declaration,
> for example:
>    @wrap_check_policy
>    @check_instance_lock
>    @check_instance_cell
>    @check_instance_state(vm_state=None, task_state=None,
>                          must_have_launched=False)
>    def delete(self, context, instance):
>        """Terminate an instance."""
>        LOG.debug(_("Going to try to terminate instance"),
> instance=instance)
>        self._delete_instance(context, instance)
> 
> This works, but when an administrator wants to change the API policy for locked VMs,
> the admin must modify source code and restart the service.
> 
> I suggest that the nova API check the list of APIs affected by the VM lock using a config file
> like policy.json. Then only a config file is modified, not code,
> and no service restart is needed.
> 
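> (A rough sketch of the idea - the config file path, name and format below are hypothetical:)
> 
>     import functools
>     import json
> 
>     def check_instance_lock(function):
>         @functools.wraps(function)
>         def inner(self, context, instance, *args, **kwargs):
>             # which API calls honour the lock is read from config, not code
>             with open('/etc/nova/locked_vm_api.json') as f:
>                 locked_apis = json.load(f)
>             if (function.__name__ in locked_apis
>                     and instance['locked'] and not context.is_admin):
>                 raise Exception('Instance %s is locked' % instance['uuid'])
>             return function(self, context, instance, *args, **kwargs)
>         return inner
> 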
> Can you take a small amount of time to approve a blueprint for icehouse-3?
> 
> Thanks.
> 
> ------------------------------
> 
> Message: 32
> Date: Tue, 4 Feb 2014 00:41:26 +0000
> From: Joshua Harlow <harlowja at yahoo-inc.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>, John Griffith
>        <john.griffith at solidfire.com>
> Cc: Yassine lamgarchal <yassine.lamgarchal at enovance.com>
> Subject: Re: [openstack-dev] Cinder + taskflow
> Message-ID: <CF15734F.5531B%harlowja at yahoo-inc.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Thanks john for the input.
> 
> Hopefully we can help focus some of the refactoring on solving the
> state-management problem very soon.
> 
> For the mocking case, is there any active work being done here?
> 
> As for the state-management and persistence, I think that the goal of both
> of these will be reached and it is a good idea to focus around these
> problems and I am all in to figuring out those solutions, although my
> guess is that both of these will be long-term no matter what. Refactoring
> cinder from what it is to what it could/can be will take time (and should
> take time, to be careful and meticulous) and hopefully we can ensure that
> focus is retained. Since in the end it benefits everyone :)
> 
> Let's re-form around that state-management issue (which involved a
> state-machine concept?). To me the current work/refactoring helps
> establish task objects that can be plugged into this machine (which is
> part of the problem; without task objects it's hard to create a
> state-machine concept around code that is dispersed). To me that's where
> the current refactoring work helps (in identifying those tasks and
> adjusting code to be closer to smaller units that do a single task); later,
> when a state-machine concept (or something similar) comes along, it will
> use these tasks (or variations of them) to automate transitions based on
> given events (the flow concept that exists in taskflow is similar to this
> already).
> 
> The questions I had (or can currently think of) with the state-machine
> idea (versus just defined flows of tasks) are:
> 
> 1. What are the events that trigger a state-machine to transition?
>  - Typically some type of event causes a machine to transition to a new
> state (after performing some kind of action). Who initiates that
> transition.
> 2. What are the events that will cause this triggering? They are likely
> related directly to API requests (but may not be).
> 3. If a state-machine ends up being created, how does it interact with
> other state-machines that are also running at the same time (does it?)
>  - This is a bigger question, and involves how one state-machine could be
> modifying a resource, while another one could be too (this is where you want
> only one state-machine to be modifying a resource at a time). This would
> solve some of the races that currently exist (while introducing the
> complexity of distributed locking).
>  - It is my opinion that the same problem in #3 happens when using tasks
> and flows that also affect simultaneous resources; so it's not a unique
> problem that is directly connected to flows. Part of this I am hoping the
> tooz project[1] can help with, since last time I checked they want to help
> make a nice API around distributed locking backends (among other similar
> APIs).
> 
> [1] https://github.com/stackforge/tooz#tooz
> 
> -----Original Message-----
> From: John Griffith <john.griffith at solidfire.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Date: Monday, February 3, 2014 at 1:16 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Cinder + taskflow
> 
>> On Mon, Feb 3, 2014 at 1:53 PM, Joshua Harlow <harlowja at yahoo-inc.com>
>> wrote:
>>> Hi all,
>>> 
>>> After talking with john g. about taskflow in cinder and seeing more and
>>> more reviews showing up I wanted to start a thread to gather all our
>>> lessons learned and how we can improve a little before continuing to add
>>> too many more refactorings and reviews (making sure everyone
>>> understands the larger goal and larger picture of switching pieces of
>>> cinder - piece by piece - to taskflow).
>>> 
>>> Just to catch everyone up.
>>> 
>>> Taskflow started integrating with cinder in havana and there has been
>>> some
>>> continued work around these changes:
>>> 
>>> - https://review.openstack.org/#/c/58724/
>>> - https://review.openstack.org/#/c/66283/
>>> - https://review.openstack.org/#/c/62671/
>>> 
>>> There has also been a few other pieces of work going in (forgive me if I
>>> missed any...):
>>> 
>>> - https://review.openstack.org/#/c/64469/
>>> - https://review.openstack.org/#/c/69329/
>>> - https://review.openstack.org/#/c/64026/
>>> 
>>> I think now would be a good time (and seems like a good idea) to create
>>> the discussion to learn how people are using taskflow, common patterns
>>> people like, don't like, common refactoring idioms that are occurring
>>> and
>>> most importantly to make sure that we refactor with a purpose and not
>>> just
>>> refactor for refactoring sake (which can be harmful if not done
>>> correctly). So to get a kind of forward and unified momentum behind
>>> further adjustments I'd just like to make sure we are all aligned and
>>> understood on the benefits and yes even the drawbacks that these
>>> refactorings bring.
>>> 
>>> So here is my little list of benefits:
>>> 
>>> - Objects that do just one thing (a common pattern I am seeing is
>>> determining what the one thing is, without making it so granular that it's
>>> hard to read).
>>> - Combining these objects together in a well-defined way (once again it
>>> has to be done carefully so as not to create too much granularity).
>>> - Ability to test these tasks and flows via mocking (something that is
>>> harder when its not split up like this).
>>> - Features that aren't currently used such as state-persistence (but
>>> will
>>> help cinder become more crash-resistant in the future).
>>>  - This one will itself need to be understood before doing [I started
>>> etherpad @ https://etherpad.openstack.org/p/cinder-taskflow-persistence
>>> for this].
>>> 
>>> List of drawbacks (or potential drawbacks):
>>> 
>>> - Having an understanding of what taskflow is doing adds a new layer of
>>> things to know (hopefully the docs help in this area; that was their goal).
>>> - Selecting too granular a task or flow makes it harder to
>>> follow/understand the task/flow logic.
>>> - Focuses on the long-term (not necessarily short-term) state-management
>>> concerns (can't refactor rome in a day).
>>> - Taskflow is being developed at the same time cinder is.
>>> 
>>> I'd be very interested in hearing about others' experiences and to make
>>> sure that we discuss the changes (in a well documented and agreed-on
>>> approach) before jumping too far into the 'deep end' with a large amount
>>> of refactoring (aka, refactoring with a purpose). Let's make this thread
>>> as useful as we can and try to see how we can unify all these
>>> refactorings
>>> behind a common (and documented & agreed-on) purpose.
>>> 
>>> A thought, for the reviews above, I think it would be very useful to
>>> etherpad/writeup more in the blueprint what the 'refactoring with a
>>> purpose' is so that its more known to future readers (and for active
>>> reviewers), hopefully this email can start to help clarify that purpose
>>> so
>>> that things proceed as smoothly as possible.
>>> 
>>> -Josh
>>> 
>>> 
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> Thanks for putting this together Josh, I just wanted to add a couple
>> of things from my own perspective.
>> 
>> The end-goals of taskflow (specifically persistence and better state
>> management) are the motivating factors for going this route.  We've
>> made a first step with create_volume however we haven't advanced it
>> enough to realize the benefits that we set out to gain by this in the
>> first place.  I still think it's the right direction and IMO we should
>> keep on the path, however there are a number of things that I've
>> noticed that make me lean towards refraining from moving other API
>> calls to taskflow right now.
>> 
>> 1. Currently taskflow is pretty much a functional equivalent
>> replacement of what was in the volume manager.  We're not really
>> gaining that much from it (yet).
>> 
>> 2. taskflow adds quite a bit of code and indirection that currently
>> IMHO adds a bit of complexity and difficulty in trouble-shooting (I
>> think we're fixing this up and it will continue to get better, I also
>> think this is normal for introduction of new implementations, no
>> criticism intended).
>> 
>> 3. Our unit testing / mock infrastructure is broken right now for
>> items that use taskflow.  Particularly cinder.test.test_volume can not
>> be run independently until we fix the taskflow fakes and mock objects.
>> I def don't want anything else taskflow related merged until this
>> problem is addressed.
>> 
>> 4. We really haven't come up with solutions to the problems we set out
>> to solve in the first place with our first implementation of taskflow
>> (state management and persistence).  Until we have a pattern for
>> solving this I think we should refrain from implementing it in other
>> calls.  A number of people volunteered to work on this at the summit
>> in Hong Kong and have stated that they "have code" however that code
>> or those patches haven't materialized so I think we need to regroup
>> and get this work moving again.
>> 
>> Anyway, I'd like to stabilize things for the create_volume
>> implementation that we have and have a clear well defined pattern that
>> solves problems before we go crazy refactoring every API call to use
>> taskflow and assume all of the potential risk that goes along with it.
>> 
>> Thanks,
>> John
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ------------------------------
> 
> Message: 33
> Date: Tue, 4 Feb 2014 11:29:11 +1030
> From: Christopher Yeoh <cbkyeoh at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova] Putting nova-network support into
>        the V3  API
> Message-ID:
>        <CANCY3eeuJ75jEtA7V8R8NKoi+3RMK_A9RmO2JLSTkLH34sW5RQ at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> On Tue, Feb 4, 2014 at 4:32 AM, Joe Gordon <joe.gordon0 at gmail.com> wrote:
> 
>> On Thu, Jan 30, 2014 at 10:45 PM, Christopher Yeoh <cbkyeoh at gmail.com>
>> wrote:
>>> So with it now looking like nova-network won't go away for the forseable
>>> future, it looks like we'll want nova-network support in the Nova V3 API
>>> after all. I've created a blueprint for this work here:
>>> 
>>> https://blueprints.launchpad.net/nova/+spec/v3-api-restore-nova-network
>>> 
>>> And there is a first pass of what needs to be done here:
>>> 
>>> https://etherpad.openstack.org/p/NovaV3APINovaNetworkExtensions
>> 
>> From the etherpad:
>> 
>> "Some of the API only every supported nova-network and not neutron,
>> others supported both.
>> I think as a first pass because of limited time we just port them from
>> V2 as-is. Longer term I think
>> we should probably remove neutron back-end functionality as we
>> shouldn't be proxying, but can
>> decide that later."
>> 
>> While I like the idea of not proxying neutron, since we are taking the
>> time to create a new API we should make it clear that this API won't
>> work when neutron is being used. There have been some nova network
>> commands that pretend to work even when running neutron (quotas etc).
>> Perhaps this should be treated as a V3 extension since we don't expect
>> all deployments to run this API.
>> 
>> The user benefit of proxying neutron is an API that works for both
>> nova-network and neutron. So a cloud can disable the nova-network API
>> after the neutron migration instead of being forced to do so in lockstep
>> with the migration. To continue supporting this perhaps we should see
>> if we can get neutron to implement its own copy of nova-network v3
>> API.
>> 
>> 
> So I suspect that asking neutron to support the nova-network API is a bit
> of a big ask, although I guess it could be done fairly independently from
> the rest of the neutron code (it could, I would guess, sit on top of their
> API as a translation layer).
> 
> But the much simpler solution would be just to proxy for the neutron
> service only, which as you say gives a better transition for users. Fully
> implementing either of these would be a Juno-timeframe sort of thing though.
> 
> I did read a bit of the IRC log history of the discussion on #openstack-nova
> related to this. If I understand what was being said correctly, I do want
> to push back as hard as I can against further delaying the release of the
> V3 API in order to design a new nova-network API for the V3 API. I think
> there's always going to be something extra we could wait just one more
> cycle for, and at some point (which I think is now) we have to go with what
> we have.
> 
> For big API rewrites I think we can wait for V4 :-)
> 
> For the moment I'm just going ahead with doing the V2 nova-network port to
> V3 because if I wait any longer for further discussion there simply won't
> be enough time to get the patches submitted before the feature proposal
> deadline.
> 
> Chris
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140204/907181d7/attachment-0001.html>
> 
> ------------------------------
> 
> Message: 34
> Date: Mon, 3 Feb 2014 17:33:20 -0800
> From: Joe Gordon <joe.gordon0 at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Nova] Putting nova-network support into
>        the V3  API
> Message-ID:
>        <CAHXdxOf4LCH8wtmqbv5CSneFWtsk-MPy83j+B93B8odRJjCLgA at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> On Mon, Feb 3, 2014 at 4:59 PM, Christopher Yeoh <cbkyeoh at gmail.com> wrote:
>> On Tue, Feb 4, 2014 at 4:32 AM, Joe Gordon <joe.gordon0 at gmail.com> wrote:
>>> 
>>> On Thu, Jan 30, 2014 at 10:45 PM, Christopher Yeoh <cbkyeoh at gmail.com>
>>> wrote:
>>>> So with it now looking like nova-network won't go away for the foreseeable
>>>> future, it looks like we'll want nova-network support in the Nova V3 API
>>>> after all. I've created a blueprint for this work here:
>>>> 
>>>> https://blueprints.launchpad.net/nova/+spec/v3-api-restore-nova-network
>>>> 
>>>> And there is a first pass of what needs to be done here:
>>>> 
>>>> https://etherpad.openstack.org/p/NovaV3APINovaNetworkExtensions
>>> 
>>> From the etherpad:
>>> 
>>> "Some of the API only every supported nova-network and not neutron,
>>> others supported both.
>>> I think as a first pass because of limited time we just port them from
>>> V2 as-is. Longer term I think
>>> we should probably remove neutron back-end functionality as we
>>> shouldn't be proxying, but can
>>> decide that later."
>>> 
>>> While I like the idea of not proxying neutron, since we are taking the
>>> time to create a new API we should make it clear that this API won't
>>> work when neutron is being used. There have been some nova network
>>> commands that pretend to work even when running neutron (quotas etc).
>>> Perhaps this should be treated as a V3 extension since we don't expect
>>> all deployments to run this API.
>>> 
>>> The user benefit of proxying neutron is an API that works for both
>>> nova-network and neutron. So a cloud can disable the nova-network API
>>> after the neutron migration instead of being forced to do so in lockstep
>>> with the migration. To continue supporting this perhaps we should see
>>> if we can get neutron to implement its own copy of nova-network v3
>>> API.
>>> 
>> 
>> So I suspect that asking neutron to support the nova-network API is a bit of
>> a big ask, although I guess it could be done fairly independently from the
>> rest of the neutron code (it could, I would guess, sit on top of their API as
>> a translation layer).
> 
> It's unclear to me exactly how hard this would be; we may be able to
> use much of the nova code to do it. But yes, I am concerned about
> asking neutron to support another API.
> 
>> 
>> But the much simpler solution would be just to proxy for the neutron service
>> only, which as you say gives a better transition for users. Fully implementing
>> either of these would be a Juno-timeframe sort of thing though.
> 
> I'm not too keen on being a proxy for neutron, but this is definitely
> the easiest option.
> 
>> 
>> I did read a bit of the IRC log history of the discussion on #openstack-nova
>> related to this. If I understand what was being said correctly, I do want to
>> push back as hard as I can against further delaying the release of the V3
>> API in order to design a new nova-network API for the V3 API. I think
>> there's always going to be something extra we could wait just one more cycle
>> for, and at some point (which I think is now) we have to go with what we have.
> 
> John and I discussed a third possibility:
> 
> nova-network v3 should be an extension, so the idea was to make the
> nova-network API a subset of neutron's (instead of them adopting our API,
> we adopt theirs). We could then release v3 without nova-network in
> Icehouse and add the nova-network extension in Juno.
> 
>> 
>> For big API rewrites I think we can wait for V4 :-)
> 
> Don't even joke about it. I can't imagine supporting a 3rd version now.
> 
>> 
>> For the moment I'm just going ahead with doing the V2 nova-network port to
>> V3 because if I wait any longer for further discussion there simply won't be
>> enough time to get the patches submitted before the feature proposal
>> deadline.
> 
> While I agree with this sentiment, we need to make sure we get this
> right, as we will have to live with the consequences for a while.
> 
>> 
>> Chris
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> ------------------------------
> 
> Message: 35
> Date: Mon, 3 Feb 2014 18:01:24 -0800
> From: Joe Gordon <joe.gordon0 at gmail.com>
> To: "Daniel P. Berrange" <berrange at redhat.com>
> Cc: "OpenStack Development Mailing List \(not for usage questions\)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] Nova style cleanups with associated
>        hacking check addition
> Message-ID:
>        <CAHXdxOecPAypWT6Se0fBrXR=HxaqzEAZbfvFsuzZO=11_fYZeQ at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> On Thu, Jan 30, 2014 at 2:06 AM, Daniel P. Berrange <berrange at redhat.com> wrote:
>> On Wed, Jan 29, 2014 at 01:22:59PM -0500, Joe Gordon wrote:
>>> On Tue, Jan 28, 2014 at 4:45 AM, John Garbutt <john at johngarbutt.com> wrote:
>>>> On 27 January 2014 10:10, Daniel P. Berrange <berrange at redhat.com> wrote:
>>>>> On Fri, Jan 24, 2014 at 11:42:54AM -0500, Joe Gordon wrote:
>>>>>> On Fri, Jan 24, 2014 at 7:24 AM, Daniel P. Berrange <berrange at redhat.com>wrote:
>>>>>> 
>>>>>>> Periodically I've seen people submit big coding style cleanups to Nova
>>>>>>> code. These are typically all good ideas / beneficial, however, I have
>>>>>>> rarely (perhaps even never?) seen the changes accompanied by new hacking
>>>>>>> check rules.
>>>>>>> 
>>>>>>> The problem with not having a hacking check added *in the same commit*
>>>>>>> as the cleanup is two-fold
>>>>>>> 
>>>>>>> - No guarantee that the cleanup has actually fixed all violations
>>>>>>>   in the codebase. Have to trust the thoroughness of the submitter
>>>>>>>   or do a manual code analysis yourself as reviewer. Both suffer
>>>>>>>   from human error.
>>>>>>> 
>>>>>>> - Future patches will almost certainly re-introduce the same style
>>>>>>>   problems again and again and again and again and again and again
>>>>>>>   and again and again and again.... I could go on :-)
>>>>>>> 
>>>>>>> I don't mean to pick on one particular person, since it isn't their
>>>>>>> fault that reviewers have rarely/never encouraged people to write
>>>>>>> hacking rules, but to show one example.... The following recent change
>>>>>>> updates all the nova config parameter declarations cfg.XXXOpt(...) to
>>>>>>> ensure that the help text was consistently styled:
>>>>>>> 
>>>>>>>  https://review.openstack.org/#/c/67647/
>>>>>>> 
>>>>>>> One of the things it did was to ensure that the help text always started
>>>>>>> with a capital letter. Some of the other things it did were more subtle
>>>>>>> and hard to automate a check for, but an 'initial capital letter' rule
>>>>>>> is really straightforward.
>>>>>>> 
>>>>>>> By updating nova/hacking/checks.py to add a new rule for this, it was
>>>>>>> found that there were another 9 files which had incorrect capitalization
>>>>>>> of their config parameter help. So the hacking rule addition clearly
>>>>>>> demonstrates its value here.
>>>>>> 
>>>>>> This sounds like a rule that we should add to
>>>>>> https://github.com/openstack-dev/hacking.git.
>>>>> 
>>>>> Yep, it could well be added there. I figure rules added to Nova can
>>>>> be "upstreamed" to the shared module periodically.
>>>> 
>>>> +1
>>>> 
>>>> I worry about diverging, but I guess that's always going to happen here.
>>>> 
>>>>>>> I will concede that documentation about /how/ to write hacking checks
>>>>>>> is not entirely awesome. My current best advice is to look at how some
>>>>>>> of the existing hacking checks are done - find one that is checking
>>>>>>> something that is similar to what you need and adapt it. There are a
>>>>>>> handful of Nova specific rules in nova/hacking/checks.py, and quite a
>>>>>>> few examples in the shared repo
>>>>>>> https://github.com/openstack-dev/hacking.git
>>>>>>> see the file hacking/core.py. There's some very minimal documentation
>>>>>>> about variables your hacking check method can receive as input
>>>>>>> parameters
>>>>>>> https://github.com/jcrocholl/pep8/blob/master/docs/developer.rst
>>>>>>> 
>>>>>>> 
>>>>>>> In summary, if you are doing a global coding style cleanup in Nova for
>>>>>>> something which isn't already validated by pep8 checks, then I strongly
>>>>>>> encourage additions to nova/hacking/checks.py to validate the cleanup
>>>>>>> correctness. Obviously with some style cleanups, it will be too complex
>>>>>>> to write logic rules to reliably validate code, so this isn't a code
>>>>>>> review point that must be applied 100% of the time. Reasonable personal
>>>>>>> judgement should apply. I will try to comment on any style cleanups I see
>>>>>>> where I think it is practical to write a hacking check.
>>>>>>> 
>>>>>> 
>>>>>> I would take this even further, I don't think we should accept any style
>>>>>> cleanup patches that can be enforced with a hacking rule and aren't.
>>>>> 
>>>>> IMHO that would mostly just serve to discourage people from submitting
>>>>> style cleanup patches because it is too much stick, not enough carrot.
>>>>> Realistically for some types of style cleanup, the effort involved in
>>>>> writing a style checker that does not have unacceptable false positives
>>>>> will be too high to justify. So I think a pragmatic approach to enforcement
>>>>> is more suitable.
>>>> 
>>>> +1
>>>> 
>>>> I would love to enforce it 100% of the time, but sometimes it's hard to
>>>> write the rules even when the cleanup is still useful. Let's see how it
>>>> goes I guess.
>>> 
>>> I am wary of adding any new style rules that have to be manually
>>> enforced by human reviewers; we already have a lot of other items to
>>> cover in a review.
>> 
>> A recent style cleanup was against config variable help strings.
>> One of the rules used was "Write complete sentences". This is a
>> perfectly reasonable style cleanup, but I challenge anyone to write
>> a hacking check that validates "Write complete sentences" in an
>> acceptable amount of code. Being pragmatic on when hacking checks
>> are needed is the only practical approach.
> 
> Although it would be hard to write a rule to enforce complete
> sentences, looking for proper punctuation at the end of the sentence
> and a capital letter at the beginning gets us very far.
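> 
> As a rough, untested sketch (the check name and error code are made up,
> modelled on the style of the existing nova/hacking/checks.py rules),
> something like this could catch help text that doesn't start with a
> capital letter; end-of-sentence punctuation is harder because help
> strings are often split across lines:
> 
>     import re
> 
>     # Matches the first character of the help string in a cfg option
>     # declaration, e.g. cfg.StrOpt('foo', help="some text").
>     cfg_help_re = re.compile(r'help=(?:_\()?["\'](\w)')
> 
>     def check_cfg_help_capitalized(logical_line):
>         """N3xx - config option help text should start with a capital."""
>         match = cfg_help_re.search(logical_line)
>         if match and match.group(1).islower():
>             yield (match.start(1),
>                    "N3xx: config option help text should start with a "
>                    "capital letter")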
> 
>> 
>> Regards,
>> Daniel
>> --
>> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
>> |: http://libvirt.org              -o-             http://virt-manager.org :|
>> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
>> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
> 
> 
> 
> ------------------------------
> 
> Message: 36
> Date: Tue, 4 Feb 2014 02:08:07 +0000
> From: Mark McClain <mmcclain at yahoo-inc.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron] Adding package to
>        requirements.txt
> Message-ID: <BAD11C72-BB0A-44C5-A412-E210181382F2 at yahoo-inc.com>
> Content-Type: text/plain; charset="Windows-1252"
> 
> I'm interested to know why you are using urllib3 directly.  Have you considered using the requests module?  requests is built upon urllib3 and already a dependency of Neutron.
> 
> mark
> 
> On Feb 3, 2014, at 6:45 PM, Hemanth Ravi <hemanthraviml at gmail.com> wrote:
> 
>> Hi,
>> 
>> We are in the process of submitting a third-party Neutron plugin that uses urllib3 for its connection pooling feature; httplib2 doesn't provide this capability.
>> 
>> Is it possible to add urllib3 to requirements.txt? If this is OK, please advise on the process to add this.
>> 
>> Thanks,
>> -hemanth
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ------------------------------
> 
> Message: 37
> Date: Tue, 4 Feb 2014 02:33:40 +0000
> From: "Collins, Sean" <Sean_Collins2 at cable.comcast.com>
> To: OpenStack Development Mailing List (not for usage questions)
>        "[openstack-dev at lists.openstack.org]"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Neutron] Developer documentation - linking
>        to      slideshares?
> Message-ID:
>        <7EB180D009B1A6428D376906754127CB2E0E8C55 at PACDCEXMB22.cable.comcast.com>
> 
> Content-Type: text/plain; charset="cp1256"
> 
> Hi,
> 
> Some Neutron developers have some really great slides from some of the summits,
> and I'd like to link them in the documentation I am building as part of a developer doc blueprint,
> with proper attribution.
> 
> https://blueprints.launchpad.net/neutron/+spec/developer-documentation
> 
> I'm hoping to add Salvatore Orlando's slides on building a plugin from scratch, as well as
> Yong Sheng Gong's deep dive slides as references in the documentation.
> 
> First - do I have permission from those mentioned above? Second - is there any licensing that would make things complicated?
> 
> As I add more links, I will make sure to ask for permission on the mailing list. Also, if you have done a presentation and have slides that
> help explain the internals of Neutron, I would love to add them as a reference.
> 
> ---
> Sean M. Collins
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140204/43dfe4cf/attachment-0001.html>
> 
> ------------------------------
> 
> Message: 38
> Date: Mon, 03 Feb 2014 21:39:59 -0500
> From: Russell Bryant <rbryant at redhat.com>
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [nova] bp proposal: configurable locked
>        vm api
> Message-ID: <52F052FF.5030001 at redhat.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> On 02/03/2014 07:25 PM, Jae Sang Lee wrote:
>> Hi, Stackers.
>> 
>> The deadline for icehouse is coming really quickly and I understand that
>> there is a lot of work to do, but I would like to get your attention on my
>> blueprint for a configurable locked VM API.
>> 
>> - https://blueprints.launchpad.net/nova/+spec/configurable-locked-vm-api
>> 
>> So far, the developer places the decorator (@check_instance_lock) on the
>> function's declaration, for example:
>>    @wrap_check_policy
>>    @check_instance_lock
>>    @check_instance_cell
>>    @check_instance_state(vm_state=None, task_state=None,
>>                          must_have_launched=False)
>>    def delete(self, context, instance):
>>        """Terminate an instance."""
>>        LOG.debug(_("Going to try to terminate instance"),
>>                  instance=instance)
>>        self._delete_instance(context, instance)
>> 
>> This works, but when an administrator wants to change the API policy for
>> locked VMs, they must modify the source code and restart the service.
>> 
>> I suggest the nova API check the list of APIs affected by a locked VM using
>> a config file like policy.json. Then the administrator just modifies a
>> config file, not the code, and no service restart is needed.
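>> 
>> As a rough sketch only (the option name and details are made up; this is
>> not working nova code), the decorator could read the list of locked APIs
>> from a config option instead of being hard-coded:
>> 
>>    import functools
>> 
>>    from oslo.config import cfg
>> 
>>    from nova import exception
>> 
>>    CONF = cfg.CONF
>>    CONF.register_opt(cfg.ListOpt(
>>        'locked_vm_apis',
>>        default=['delete', 'stop', 'reboot'],
>>        help='API calls that are blocked for locked instances.'))
>> 
>>    def check_instance_lock(function):
>>        @functools.wraps(function)
>>        def inner(self, context, instance, *args, **kwargs):
>>            # Only enforce the lock for operations the operator listed.
>>            if (function.__name__ in CONF.locked_vm_apis
>>                    and instance['locked'] and not context.is_admin):
>>                raise exception.InstanceIsLocked(
>>                    instance_uuid=instance['uuid'])
>>            return function(self, context, instance, *args, **kwargs)
>>        return inner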
>> 
>> Can you take a small amount of time to approve a blueprint for icehouse-3?
> 
> I'm concerned about this idea from an interop perspective.  It means
> that "lock" will not mean the same thing from one cloud to another.
> That seems like something we should avoid.
> 
> One thing that might work is to do this from the API side.  We could
> allow the caller of the API to list which operations are locked.  The
> default behavior would be the current behavior of locking all
> operations.  That gives some flexibility and keeps the API call working
> the same way across clouds.
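> 
> Purely as a hypothetical illustration (this is not an existing API), the
> lock action could take an optional list of operations, defaulting to
> locking everything as it does today:
> 
>     # POST /v3/servers/{server_id}/action
>     lock_body = {
>         "lock": {
>             # If omitted, all operations are locked (current behaviour).
>             "locked_operations": ["delete", "stop", "reboot"],
>         },
>     }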
> 
> --
> Russell Bryant
> 
> 
> 
> ------------------------------
> 
> Message: 39
> Date: Mon, 3 Feb 2014 18:43:23 -0800
> From: Hemanth Ravi <hemanthraviml at gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron] Adding package to
>        requirements.txt
> Message-ID:
>        <CAP3yDp3j-RiRjTBHFG94T68JLJSpuHRdFZ=5dG_JZxE_wqj27g at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Mark,
> 
> We had started the plugin development with Grizzly initially, and the Grizzly
> distribution included httplib2. We used urllib3 for the HTTPConnectionPool
> object and overlooked the requests module included in master when we
> migrated. I'll take a look at using requests for the same support.
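> 
> For reference, a quick sketch (the endpoint URL is just an example) of how
> the pooling we use could be expressed with requests, which exposes urllib3's
> connection pools through HTTPAdapter:
> 
>     import requests
> 
>     # A reused Session keeps a urllib3 connection pool per host; the
>     # pool sizes can be tuned through an HTTPAdapter.
>     session = requests.Session()
>     adapter = requests.adapters.HTTPAdapter(pool_connections=10,
>                                             pool_maxsize=20)
>     session.mount('http://', adapter)
>     session.mount('https://', adapter)
> 
>     response = session.get('http://controller:9696/v2.0/networks')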
> 
> Thanks,
> -hemanth
> 
> 
> On Mon, Feb 3, 2014 at 6:08 PM, Mark McClain <mmcclain at yahoo-inc.com> wrote:
> 
>> I'm interested to know why you are using urllib3 directly.  Have you
>> considered using the requests module?  requests is built upon urllib3 and
>> already a dependency of Neutron.
>> 
>> mark
>> 
>> On Feb 3, 2014, at 6:45 PM, Hemanth Ravi <hemanthraviml at gmail.com> wrote:
>> 
>>> Hi,
>>> 
>>> We are in the process of submitting a third party Neutron plugin that
>> uses urllib3 for the connection pooling feature available in urllib3.
>> httplib2 doesn't provide this capability.
>>> 
>>> Is it possible to add urllib3 to requirements.txt? If this is OK, please
>> advise on the process to add this.
>>> 
>>> Thanks,
>>> -hemanth
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140203/0408e07a/attachment-0001.html>
> 
> ------------------------------
> 
> Message: 40
> Date: Tue, 4 Feb 2014 02:45:56 +0000
> From: "Collins, Sean" <Sean_Collins2 at cable.comcast.com>
> To: OpenStack Development Mailing List (not for usage questions)
>        "[openstack-dev at lists.openstack.org]"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Neutron][IPv6] Agenda for Feb 4 - 1400 UTC -
>        in      #openstack-meeting
> Message-ID:
>        <7EB180D009B1A6428D376906754127CB2E0E8C7F at PACDCEXMB22.cable.comcast.com>
> 
> Content-Type: text/plain; charset="cp1256"
> 
> Hi,
> 
> I've posted a preliminary agenda for the upcoming IPv6 meeting. See everyone soon!
> 
> https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam#Agenda_for_Feb_4th
> 
> ---
> Sean M. Collins
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140204/0834eab8/attachment-0001.html>
> 
> ------------------------------
> 
> Message: 41
> Date: Mon, 3 Feb 2014 19:09:46 -0800
> From: Alexander Tivelkov <ativelkov at mirantis.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
>        <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Murano] Community meeting agenda -
>        02/04/2014
> Message-ID:
>        <CAM6FM9Sn_xzvSC-uO4FJEKKsG0FVSSXnE6USb3kODYi6ExFe-w at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi,
> 
> This is just a reminder that we are going to have a weekly meeting of
> Murano team in IRC (#openstack-meeting-alt) on Feb, 4 at 17:00 UTC (9am
> PST) .
> 
> The agenda can be found here:
> https://wiki.openstack.org/wiki/Meetings/MuranoAgenda#Agenda
> 
> Feel free to add anything you want to discuss.
> 
> --
> Regards,
> Alexander Tivelkov
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140203/86c33482/attachment.html>
> 
> ------------------------------
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> End of OpenStack-dev Digest, Vol 22, Issue 6
> ********************************************
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140204/867b33f3/attachment.pgp>


More information about the OpenStack-dev mailing list