[openstack-dev] [Neutron][Nova] Neutron mid-cycle summary report

Miguel Angel Ajo Pelayo majopela at redhat.com
Sat Aug 27 09:11:07 UTC 2016


Hi Armando,

Thanks for the report; I'm adding some notes inline (OSC/SDK).

On Sat, Aug 27, 2016 at 2:13 AM, Armando M. <armamig at gmail.com> wrote:
> Hi Neutrinos,
>
> For those of you who couldn't join in person, please find a few notes below
> to capture some of the highlights of the event.
>
> I would like to thank everyone who helped me put this report together,
> and everyone who helped make this mid-cycle a fruitful one.
>
> I would also like to thank IBM, and the individual organizers who made
> everything go smoothly. In particular Martin, who put up with our moody
> requests: thanks Martin!!
>
> Feel free to reach out/add if something is unclear, incorrect or incomplete.
>
> Cheers,
> Armando
>
> ~~~~~~~
>
> We touched on these topics (as initially proposed on
> https://etherpad.openstack.org/p/newton-neutron-midcycle-workitems)
>
> Keystone v3 and project-id adoption:
>
> dasm and amotoki have been working on making the Neutron server process
> project-id correctly [1]. Looking at the spec [2], we are about halfway
> through the work: completing the DB migration, becoming Keystone v3
> compliant, and updating the client bindings [3].
>
> [1] https://review.openstack.org/#/c/357977/
> [2] https://review.openstack.org/#/c/257362/
> [3] https://review.openstack.org/#/q/topic:bp/keystone-v3
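
For anyone tracking the client-binding piece, roughly what this looks like
in practice is that request bodies and responses carry the Keystone v3
style 'project_id' alongside the legacy 'tenant_id'. A minimal, made-up
illustration (the names and ID below are invented, not from the patches):

    # Hypothetical python-neutronclient request body once project-id
    # adoption is complete; 'project_id' stands in for legacy 'tenant_id'.
    body = {'network': {'name': 'demo-net',
                        'project_id': '5a3d241d8dbf4c4f93b6a1a4e0f1f2a3'}}
    # An authenticated client would send it with: neutron.create_network(body)
    print(body)
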
>
> Neutron-lib:
>
> HenryG, dougwig and kevinbenton worked out a plan to get the common_db_mixin
> into neutron-lib. Because of the risk of regression, this is being deferred
> until Ocata opens up. However, simpler changes, like the model_base move
> to the lib, were agreed on and merged.
> A plan to provide test support was discussed. The current strategy involves
> providing test base classes in lib (this reverses the stance conveyed in
> Austin). The usual steps involve making public the currently private
> classes, ensuring the lib's copies are up to date with core Neutron, and
> deprecating the ones located in Neutron.
> rtheis and armax worked on having networking-ovn test periodically against
> neutron-lib [1,2,3].
>
> [1] https://review.openstack.org/#/c/357086/
> [2] https://review.openstack.org/#/c/359143/
> [3] https://review.openstack.org/#/c/357079/
>
> A tool (tools/migration_report.sh) helps project teams determine the level of
> dependency they have on Neutron. It should be improved to report the exact
> offending imports.
> Right now neutron-lib 0.4.0 is released and available in
> global-requirements/upper-constraints.
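
On the "exact offending imports" point: a rough sketch of what such a
report could do (purely illustrative, not how tools/migration_report.sh
works today) is to walk a project tree and flag imports that still come
from neutron rather than neutron_lib:

    import os
    import re
    import sys

    # Matches "import neutron..." / "from neutron... import ...", but not
    # neutron_lib (there is no word boundary between "neutron" and "_lib").
    PATTERN = re.compile(r'^\s*(?:from|import)\s+neutron\b')

    def report(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.endswith('.py'):
                    continue
                path = os.path.join(dirpath, name)
                with open(path) as source:
                    for lineno, line in enumerate(source, 1):
                        if PATTERN.match(line):
                            print('%s:%d: %s' % (path, lineno, line.strip()))

    if __name__ == '__main__':
        report(sys.argv[1] if len(sys.argv) > 1 else '.')
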
>
> Objects and hitless upgrades:
>
> Ihar gave the team an overview and status update [1]
> There was a fruitful discussion that hopefully set the way forward for
> Ocata. The plan discussed was to start Ocata with the expectation that no
> new contract scripts land during the cycle, and to revisit the requirement
> later if for some reason we see any issue with applying it in practice.
> Some work was done to deliver the objects needed for push notifications;
> patches are up for review. Some review cycles were spent landing patches
> that move model definitions under neutron/db/models.
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101838.html
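
For readers less familiar with the objects work, here is a tiny
oslo.versionedobjects sketch of the general shape of such an object (the
class and fields below are made up for illustration, not Neutron's actual
objects):

    import uuid

    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields

    @obj_base.VersionedObjectRegistry.register
    class DemoPort(obj_base.VersionedObject):
        # VERSION is bumped when fields change; during a rolling upgrade
        # the server can backport the object to what an older agent knows.
        VERSION = '1.0'
        fields = {
            'id': obj_fields.UUIDField(),
            'name': obj_fields.StringField(nullable=True),
            'admin_state_up': obj_fields.BooleanField(default=True),
        }

    port = DemoPort(id=str(uuid.uuid4()), name='demo', admin_state_up=True)
    primitive = port.obj_to_primitive()  # the dict that travels over RPC
    print(primitive['versioned_object.version'])
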
>
> OSC transition:
>
> rtheis gave an update to the team on the state of the transition. Core
> resource commands are all available through OSC; QoS, Metering and *-aaS
> are still not converted.

QoS support is being pushed by Rodolfo in a series of patches on SDK/OSC:
  https://review.openstack.org/#/q/owner:rodolfo.alonso.hernandez%2540intel.com+status:open

Those are almost there.

> There is some confusion about how to tackle openstacksdk support. We
> discussed the future goal of a Python binding for the Networking API. OSC
> uses the OpenStack SDK for network commands, while the Neutron OSC plugin
> uses the Python bindings from python-neutronclient. The open question is
> which of the two a developer adding new features should implement against:
> the OpenStack SDK, python-neutronclient, or both. There was no conclusion
> at the mid-cycle. This is not specific to Neutron; a similar situation can
> arise for Nova, Cinder and other projects, and we need to raise it to the
> community.
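
To make the "two bindings" situation concrete, here is a rough sketch of
listing networks with each library; the auth values are placeholders and
the exact constructor options vary between SDK releases:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from neutronclient.v2_0 import client as neutron_client
    from openstack import connection

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # python-neutronclient bindings (what the Neutron OSC plugin uses today)
    neutron = neutron_client.Client(session=sess)
    print([net['name'] for net in neutron.list_networks()['networks']])

    # OpenStack SDK (what OSC itself uses for the core network commands)
    conn = connection.Connection(session=sess)
    print([net.name for net in conn.network.networks()])
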
>
> Ocata is going to be the first release where the neutronclient CLI is
> officially deprecated. It may take us more than the usual two cycles to
> remove it altogether, but that's a signal to developers and users to
> seriously develop against OSC and report bugs against it.
> There are several pending contributions to osc-lib.
> An update is available on [1,2]
>
> [1] https://review.openstack.org/#/c/357844/
> [2] https://etherpad.openstack.org/p/osc-neutron-support
>
> Stability squash:
>
> armax was bug deputy for the week of the mid-cycle; nothing critical showed
> up in the gate; however, the pluggable IPAM switch [1] merged, which might
> have some unexpected repercussions down the road.
> A number of bugs older than a year were made expirable [2].
> kevinbenton and armax devised a strategy and started working on [3] to
> ensure DB retriable errors are no longer handled at the API layer.
> The gate witnessed phantom resets that turned out to be related to [4].
> kevinbenton, jlibosva, armax, jschwarz talked about the Neutron job queue
> configuration; it would be desirable to start gating on the rally and
> fullstack jobs, but this can be done once Ocata opens up. Testing L3 HA in the
> multinode configuration is something that needs tackling too. Ideally we'd
> move away from testing legacy routing in the gate to give resources to other
> priorities.
> jlibosva should follow up with patches to make fullstack/functional job
> "bullet proof";
> armax to make a case for check-queue-only voting status for the
> functional/rally/fullstack jobs.
>
> [1] https://review.openstack.org/#/c/181023/
> [2] https://bugs.launchpad.net/neutron/+expirable-bugs
> [3] https://review.openstack.org/#/c/356530/
> [4] https://bugs.launchpad.net/devstack/+bug/1616282
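
For those wondering what handling retriable errors closer to the unit of
work can look like, here is a tiny self-contained illustration built on
oslo.db's retry decorator (the function and failure below are made up;
this is not the actual patch):

    from oslo_db import api as db_api
    from oslo_db import exception as db_exc

    attempts = {'count': 0}

    # The whole unit of work is retried on a deadlock, instead of the API
    # controller catching DBDeadlock after the fact.
    @db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True)
    def flaky_unit_of_work():
        attempts['count'] += 1
        if attempts['count'] == 1:
            raise db_exc.DBDeadlock()  # simulate losing a lock race once
        return 'succeeded on attempt %d' % attempts['count']

    print(flaky_unit_of_work())
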
>
> Stadium wide efforts:
>
> There was a discussion around what our API definition is. Specifically,
> whether we consider supported queries part of it, and if so, if we plan to
> enforce this aspect in the future. Code inspection showed that it is hard to
> determine which query parameters are supported by plugins right now. We will
> look into non-invasive ways to collect this data in Ocata. armax to follow up
> on [1], which he is using to break ground.
> The API reference cleanup sprint has been announced and started. A guideline
> on the API reference was prepared [2]. Some lead-by-example patches have
> been posted and merged [3].
>
> [1] https://review.openstack.org/#/c/353131/
> [2] https://etherpad.openstack.org/p/neutron-api-ref-sprint
> [3] https://review.openstack.org/#/c/350857/
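
To make the "supported queries" question concrete: filters passed to a
list call simply become query-string parameters, and the hard part is
knowing which of them a given plugin actually honours. Reusing the
authenticated python-neutronclient 'neutron' object from the sketch above
(the filter names are just examples):

    # GET /v2.0/networks?name=demo-net&admin_state_up=True
    networks = neutron.list_networks(name='demo-net', admin_state_up=True)
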
>
> Nova/Neutron integration:
>
> armax, johnthetubaguy and carl_baldwin participated in a couple of
> discussions about live migration and multiple port binding. There is a
> common understanding of the problem and how to improve the resiliency of the
> boot/live migration operation. Some thoughts are being captured on [1,2].
> The plan is to ensure all the legwork is complete ahead of the summit so
> that the team can switch into execution mode during the Ocata timeframe.
> There are areas where Nova talks to Neutron that can be improved and the
> team identified/discussed a few, the periodic polls being an example.
> Providing the ability to aggregate more data in a single API call was also
> contemplated.
> armax, johnthetubaguy, ihrachys talked about next steps in order to allow
> os-vif to access a versioned object of vif details as stored by Neutron.
> carl_baldwin was able to make some progress on [3] to abort the build if
> Nova tries to schedule a VM to a segment where the IP is not usable.
>
> [1] https://review.openstack.org/#/c/309416/
> [2] https://review.openstack.org/#/c/353982/
> [3] https://review.openstack.org/#/c/346278/
>
> Newton blueprints (in no particular order):
>
> push notifications: discussed future steps on how to implement
> compare-and-swap updates to reduce some of the races experienced when
> dealing with the Neutron API.
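
As a bare-bones illustration of the compare-and-swap idea (SQLAlchemy with
a made-up table, not Neutron's schema): an update only applies if the row
still carries the revision the writer originally read, so a concurrent
writer cannot silently clobber it.

    import sqlalchemy as sa

    engine = sa.create_engine('sqlite://')
    meta = sa.MetaData()
    ports = sa.Table('ports', meta,
                     sa.Column('id', sa.String(36), primary_key=True),
                     sa.Column('status', sa.String(16)),
                     sa.Column('revision', sa.Integer))
    meta.create_all(engine)

    with engine.begin() as conn:
        conn.execute(ports.insert().values(id='p1', status='DOWN', revision=1))

    def cas_update(conn, port_id, seen_revision, new_status):
        # Only update if nobody bumped the revision since we read it.
        result = conn.execute(
            ports.update()
            .where(sa.and_(ports.c.id == port_id,
                           ports.c.revision == seen_revision))
            .values(status=new_status, revision=seen_revision + 1))
        return result.rowcount == 1  # False means we lost the race

    with engine.begin() as conn:
        print(cas_update(conn, 'p1', 1, 'ACTIVE'))  # True
        print(cas_update(conn, 'p1', 1, 'ERROR'))   # False: stale revision
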
> service subnets:
>
> Spent some time tying up some final loose ends on the service subnets
> capability. Patch [1] was the final one of the effort. We still need to get
> the release note written and merged.
>
> [1] https://review.openstack.org/#/c/350613
>
> routed networks:
>
> mlavalle implemented a Generic Resource Pools client [1] leveraged in [2].
> carl_baldwin and kevinbenton had a very productive discussion about how to
> add standard attributes to the networksegments table to complete its
> transition to being a first class API resource. kevinbenton was able to
> provide an example and we're working to wrap that up now [3].
> carl_baldwin discussed with kevinbenton the patch [4] to enable creating and
> deleting segments with the ML2 plugin. This may be the last hurdle
> for routed networks. Wrapping this up before feature freeze would be of
> great help.
>
> [1]
> https://github.com/miguellavalle/python-placementclient/tree/adding-grp-support
> [2] https://review.openstack.org/#/c/358658/
> [3] https://review.openstack.org/#/c/293305
> [4] https://review.openstack.org/#/c/317358
>
> vlan-aware-vms:
>
> jlibosva and armax talked about how to deal with failures during OVS trunk
> setup. That resulted in jlibosva working on an OVSDB transaction manager to
> handle nested transactions.
> armax and tidwellr are working on completing the trunk state machine;
> kevinbenton is rebasing/working on the linuxbridge driver. The OVN driver is
> coming along as well.
> https://review.openstack.org/#/q/status:open+topic:bp/vlan-aware-vms
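
Since "nested transactions" can be a bit abstract, here is a toy sketch of
the pattern (purely illustrative, not the actual OVSDB code): an inner
transaction() call piggy-backs on the transaction already open on the
thread, so the whole trunk setup commits or rolls back as one unit.

    import contextlib
    import threading

    _local = threading.local()

    class Transaction(object):
        def __init__(self):
            self.commands = []

        def add(self, cmd):
            self.commands.append(cmd)

        def commit(self):
            print('committing %d commands atomically' % len(self.commands))

    @contextlib.contextmanager
    def transaction():
        outer = getattr(_local, 'txn', None)
        if outer is not None:
            yield outer          # nested call: reuse the outer transaction
            return
        _local.txn = txn = Transaction()
        try:
            yield txn
            txn.commit()         # only the outermost caller commits
        finally:
            _local.txn = None

    with transaction() as txn:
        txn.add('create trunk bridge')
        with transaction() as nested:   # same underlying transaction
            nested.add('add patch ports')
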
>
> Resource status:
>
> Many Neutron resources have status/admin_state_up attributes; however, their
> meaning is not consistent across resources. This makes it difficult to
> address use cases like [1]. We should start painting a picture of what
> exactly each attribute means for each resource, in order to make sure we can
> approach cases like [1] better informed.
>
> [1] https://review.openstack.org/#/c/351675/
>
>
