[openstack-dev] [heat] heat PTG recap

Rico Lin rico.lin.guanyu at gmail.com
Tue Mar 7 02:41:00 UTC 2017


Hi everyone,

The Heat team gathered at the PTG from 2/22 to 2/24.

Here are the discussions we targeted during the PTG:
https://etherpad.openstack.org/p/heat-pike-ptg-sessions

Targeted tasks or discussions:
* We have reached an agreement with the release team that stable maint list
management for Heat will remain with the heat stable-maint cores. Members who
review enough stable release patches will have a better chance of being
promoted to stable-maint for heat. All of the above is now part of the
official policy from the release team.

* We need Python 3 support; this is a community-wide goal. We will have
to make sure all of heat's repos meet this requirement. Some patches for
Python 3.5 support have already landed (see
https://etherpad.openstack.org/p/pike-heat-ptg-python3 ).
* We have to collaborate with the Interop team to define tests (API and
scenario tests) for heat, so that everyone can agree on what defines heat.

* We agree that heat should have an interface for resources that any
resource plugin can code against, so that plugins do not call inner methods
directly (a rough sketch of the idea is included below).
* For Convergence 2.0+, we need a notification system for heat resources.
We also discussed the convergence documentation (which we decided to do in
the Ocata release); if no one volunteers for it, we will postpone the doc
plan for convergence.
For the above two tasks, you can find the reference here:
https://etherpad.openstack.org/p/pike-heat-ptg-convergence2
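
To make the resource-interface idea concrete, here is a minimal sketch of
what such a contract could look like. The class and method names below are
hypothetical illustrations only, not the design the team agreed on:

    import abc


    class ResourceInterface(abc.ABC):
        """Hypothetical public contract for out-of-tree resource plugins.

        Plugins would call only these methods instead of reaching into
        heat.engine internals directly.
        """

        @abc.abstractmethod
        def get_reference_id(self):
            """Return the value used when other resources reference this one."""

        @abc.abstractmethod
        def get_attribute(self, name, *path):
            """Resolve an attribute of the resource."""

        @abc.abstractmethod
        def handle_signal(self, details=None):
            """Handle a signal/notification delivered to the resource."""


    class ExampleVolumeResource(ResourceInterface):
        """Example plugin that depends only on the public contract."""

        def get_reference_id(self):
            return "example-volume-id"

        def get_attribute(self, name, *path):
            return {"status": "CREATE_COMPLETE"}.get(name)

        def handle_signal(self, details=None):
            pass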

* For convergence adoption, we may still need some memory improvements
before the TripleO project can adopt convergence mode.

* Feedback from the Sahara and Magnum teams: when a lot of resources are
deleted (e.g. by a stack-delete action), heat can issue a huge number of
API calls to another service (for example, Cinder) in a very short period
and cause a service overload. This is one issue where we can try to help by
scheduling API calls to other services in a friendlier way (a rough
throttling sketch is included below).
* Feedback from the user survey, on the question "What's the
current/expected load on your Heat deployment?":
Few big stacks (>100 resources each) = 6 (9%)
Few small stacks (<100 resources each) = 50 (78%)
Lots of big stacks (>100 resources each) = 1
Lots of small stacks (<100 resources each) = 7 (10%)
(You can find the reference here
https://etherpad.openstack.org/p/pike-cp-ptg-orchestration-feedback )
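
As a rough illustration of the "friendlier API calling schedule" mentioned
above, here is a minimal, self-contained throttling sketch. The limits, the
names, and the cinderclient-style call are assumptions for the example, not
anything the teams agreed on:

    import threading
    import time


    class RateLimiter(object):
        """Allow at most max_calls outgoing calls per period (seconds)."""

        def __init__(self, max_calls, period):
            self.max_calls = max_calls
            self.period = period
            self._lock = threading.Lock()
            self._stamps = []

        def wait(self):
            """Block until another call may be made within the limit."""
            with self._lock:
                now = time.time()
                # Forget calls that have fallen outside the time window.
                self._stamps = [t for t in self._stamps
                                if now - t < self.period]
                if len(self._stamps) >= self.max_calls:
                    sleep_for = self.period - (now - self._stamps[0])
                else:
                    sleep_for = 0.0
                self._stamps.append(now + sleep_for)
            if sleep_for > 0:
                time.sleep(sleep_for)


    # Shared limiter: at most 10 calls per second to the target service.
    volume_delete_limiter = RateLimiter(max_calls=10, period=1.0)


    def delete_volume(cinder_client, volume_id):
        """Delete one volume while respecting the shared rate limit."""
        volume_delete_limiter.wait()
        # cinder_client is assumed to be a python-cinderclient Client.
        cinder_client.volumes.delete(volume_id)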

* Feedback from the TripleO and Magnum teams, asking about the possibility
of Heat adopting Jinja2. As we do not recommend using heat for deeply
nested resource structures (a flat structure usually works better), we will
consider adding Jinja once we get more detail from other projects about how
they would use it, to make sure we meet the real requirements (a small
pre-rendering example is included below).
* We also heard some interesting use cases from the TripleO team that
combine Heat and Mistral (and/or Jinja). We would like to track them as
complete use cases and share them with any other projects that might
benefit from them.
(Some references you can see in
https://etherpad.openstack.org/p/pike-cp-ptg-orchestration-integrate )
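
For context on the Jinja2 ask, here is a minimal example of the kind of
pre-rendering that projects such as TripleO already do outside of Heat
today: render a HOT template with Jinja2, then pass the result to Heat as
an ordinary template. The template content is invented for illustration;
whether Heat itself grows Jinja support is still an open question:

    import jinja2
    import yaml

    TEMPLATE_SOURCE = """
    heat_template_version: 2016-10-14
    resources:
    {% for i in range(count) %}
      volume_{{ i }}:
        type: OS::Cinder::Volume
        properties:
          size: {{ size }}
    {% endfor %}
    """

    # Render the Jinja2 template first, then parse it as a normal HOT
    # template that can be passed to heatclient / "openstack stack create".
    rendered = jinja2.Template(TEMPLATE_SOURCE).render(count=3, size=10)
    hot_template = yaml.safe_load(rendered)
    print(sorted(hot_template["resources"]))  # volume_0, volume_1, volume_2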

* Identity trusts and federation still do not work for Heat (and some other
services). We have to make an announcement that heat users should not use
federation until keystone fixes it. (see
https://etherpad.openstack.org/p/keystone-pike-ptg)

For specs, we're considering the following actions:
We have obsoleted some very old BPs (feel free to raise a discussion if a
BP that is important to you has been obsoleted).
We have also lowered some blueprint priorities. For the v2 API, we still
mark v2 (or maybe we should call it v1.1) as the next thing we need to do,
but it does not look like it will land during the Pike cycle. For other
actions, please see
https://etherpad.openstack.org/p/pike-heat-ptg-track-and-design

We also spent some time reviewing patches; they are all listed in the
etherpads, so I won't list them here.
Feel free to raise further discussion on any topic; we can always discuss
things in detail throughout the cycle.


-- 
May The Force of OpenStack Be With You,

Rico Lin
irc: ricolin

