<div dir="ltr">Very good summary, thanks for leading the PTG and Neutron so well. :)<br> <br><div><br><div class="gmail_quote"><div dir="ltr">On Mon, Mar 12, 2018 at 11:25 PM fumihiko kakuma <<a href="mailto:kakuma@valinux.co.jp">kakuma@valinux.co.jp</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Miguel,<br>
<br>
> * As part of the neutron-lib effort, we have found networking projects that<br>
> are very inactive. Examples are networking-brocade (no updates since May of<br>
> 2016) and networking-ofagent (no updates since March of 2017). Miguel<br>
> Lavalle will contact these projects' leads to ascertain their situation. If<br>
> they are indeed inactive, we will not support them as part of neutron-lib<br>
> updates and will also try to remove them from code search<br>
<br>
networking-ofagent has been removed in the Newton release.<br>
So it will not be necessary to support it as part of neutron-lib updates.<br>
<br>
Thanks<br>
kakuma.<br>
<br>
<br>
On Mon, 12 Mar 2018 13:45:27 -0500<br>
Miguel Lavalle <<a href="mailto:miguel@mlavalle.com" target="_blank">miguel@mlavalle.com</a>> wrote:<br>
<br>
> Hi All!<br>
><br>
> First of all, I want to thank the team for the productive week we had<br>
> in Dublin. Following below is a high level summary of the discussions we<br>
> had. If there is something I left out, please reply to this email thread to<br>
> add it. However, if you want to continue the discussion on any of the<br>
> individual points summarized below, please start a new thread, so we don't<br>
> have a lot of conversations going on attached to this update.<br>
><br>
> You can find the etherpad we used during the PTG meetings here:<br>
> <a href="https://etherpad.openstack.org/p/neutron-ptg-rocky" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/neutron-ptg-rocky</a><br>
><br>
><br>
> Retrospective<br>
> ==========<br>
><br>
> * The team missed one community goal in the Pike cycle (<br>
> <a href="https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html" rel="noreferrer" target="_blank">https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html</a>) and<br>
> one in the Queens cycle (<a href="https://governance.openstack.org/tc/goals/queens/policy-in-code.html" rel="noreferrer" target="_blank">https://governance.openstack.org/tc/goals/queens/policy-in-code.html</a>)<br>
><br>
> - Akihiro Motoki will work on <a href="https://governance.openstack.org/tc/goals/queens/policy-in-code.html" rel="noreferrer" target="_blank">https://governance.openstack.org/tc/goals/queens/policy-in-code.html</a> during Rocky<br>
><br>
> - We need volunteers to complete <a href="https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html" rel="noreferrer" target="_blank">https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html</a> and the two new goals<br>
> for the Rocky cycle: <a href="https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html" rel="noreferrer" target="_blank">https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html</a> and<br>
> <a href="https://governance.openstack.org/tc/goals/rocky/mox_removal.html" rel="noreferrer" target="_blank">https://governance.openstack.org/tc/goals/rocky/mox_removal.html</a>. Akihiro<br>
> Motoki will lead the effort for mox removal<br>
><br>
> - We decided to add a section to our weekly meeting agenda where we are<br>
> going to track the progress towards catching up with the community goals<br>
> during the Rocky cycle<br>
><br>
> * As part of the neutron-lib effort, we have found networking projects that<br>
> are very inactive. Examples are networking-brocade (no updates since May of<br>
> 2016) and networking-ofagent (no updates since March of 2017). Miguel<br>
> Lavalle will contact these projects' leads to ascertain their situation. If<br>
> they are indeed inactive, we will not support them as part of neutron-lib<br>
> updates and will also try to remove them from code search<br>
><br>
> * We will continue our efforts to recruit new contributors and develop core<br>
> reviewers. During the conversation on this topic, Nikolai de Figueiredo and<br>
> Pawel Suder announced that they will become active in Neutron. Both of<br>
> them, along with Hongbin Lu, indicated that they are interested in working<br>
> towards becoming core reviewers.<br>
><br>
> * The team went through the blueprints in the backlog. Here is the status<br>
> for those blueprints that are not discussed in other sections of this<br>
> summary:<br>
><br>
> - Adopt oslo.versionedobjects for database interactions. This is a<br>
> continuing effort. The contact is Ihar Hrachyshka (ihrachys). Contributors<br>
> are wanted. There is a weekly meeting led by Ihar where this topic is<br>
> covered: <a href="http://eavesdrop.openstack.org/#Neutron_Upgrades_Meeting" rel="noreferrer" target="_blank">http://eavesdrop.openstack.org/#Neutron_Upgrades_Meeting</a><br>
><br>
> - Enable adoption of an existing subnet into a subnetpool. The final<br>
> patch in the series to implement this feature is:<br>
> <a href="https://review.openstack.org/#/c/348080" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/348080</a>. Pawel Suder will drive this patch<br>
> to completion<br>
><br>
> - Neutron in-tree API reference (<a href="https://blueprints.launchpad.net/neutron/+spec/neutron-in-tree-api-ref" rel="noreferrer" target="_blank">https://blueprints.launchpad.net/neutron/+spec/neutron-in-tree-api-ref</a>). There are two remaining TODOs<br>
> to complete this blueprint: <a href="https://bugs.launchpad.net/neutron/+bug/1752274" rel="noreferrer" target="_blank">https://bugs.launchpad.net/neutron/+bug/1752274</a><br>
> and <a href="https://bugs.launchpad.net/neutron/+bug/1752275" rel="noreferrer" target="_blank">https://bugs.launchpad.net/neutron/+bug/1752275</a>. We need volunteers for<br>
> these two work items<br>
><br>
> - Add TCP/UDP port forwarding extension to L3. The spec was merged<br>
> recently: <a href="https://specs.openstack.org/openstack/neutron-specs/specs/queens/port-forwarding.html" rel="noreferrer" target="_blank">https://specs.openstack.org/openstack/neutron-specs/specs/queens/port-forwarding.html</a>. Implementation effort is in progress:<br>
> <a href="https://review.openstack.org/#/c/533850/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/533850/</a> and <a href="https://review.openstack.org/#/c/535647/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/535647/</a><br>
><br>
> - Pure Python driven Linux network configuration (<br>
> <a href="https://bugs.launchpad.net/neutron/+bug/1492714" rel="noreferrer" target="_blank">https://bugs.launchpad.net/neutron/+bug/1492714</a>). This effort has been<br>
> going on for several cycles gradually adopting pyroute2. Slawek Kaplonski<br>
> is continuing it with <a href="https://review.openstack.org/#/c/545355" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/545355</a> and<br>
> <a href="https://review.openstack.org/#/c/548267" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/548267</a><br>
><br>
><br>
> Port behind port API proposal<br>
> ======================<br>
><br>
> * Omer Anson proposed to extend the Trunk Port API to generalize the<br>
> support for port behind port use cases such as containers nested as<br>
> MACVLANs within a VM or an HA proxy port behind an amphora VM port:<br>
> <a href="https://bugs.launchpad.net/bugs/1730845" rel="noreferrer" target="_blank">https://bugs.launchpad.net/bugs/1730845</a><br>
><br>
> - After discussing the proposed use cases, the agreement was to develop<br>
> a specification making sure input is provided by the Kuryr and Octavia teams<br>
><br>
><br>
> ML2 and Mechanism drivers<br>
> =====================<br>
><br>
> * Hongbin Lu presented a proposal (<a href="https://bugs.launchpad.net/neutron/+bug/1722720" rel="noreferrer" target="_blank">https://bugs.launchpad.net/neutron/+bug/1722720</a>) to add a new value "auto" to the port attribute<br>
> admin_state_up.<br>
><br>
> - This is to support SR-IOV ports, where admin_state_up == "auto" would<br>
> mean that the VF link state follows that of the PF. This may be useful when<br>
> VMs use the link state as a trigger for their own HA mechanisms<br>
> - The agreement was not to overload the admin_state_up attribute with<br>
> more values, since it reflects the desired administrative state of the port,<br>
> and instead to add a new attribute for the intended purpose<br>
><br>
> * Zhang Yanxian presented a specification (<a href="https://review.openstack.org/506066" rel="noreferrer" target="_blank">https://review.openstack.org/506066</a>) to support SR-IOV bonds whereby a Neutron port is associated with<br>
> two VFs in separate PFs. This is useful in NFV scenarios, where link<br>
> redundancy is necessary.<br>
><br>
> - Nikolai de Figueiredo agreed to help to drive this effort forward,<br>
> starting with the specification both in the Neutron and the Nova sides<br>
> - Sam Betts indicated this type of bond is also of interest for Ironic.<br>
> He requested to be kept in the loop<br>
><br>
> * Ruijing Guo proposed to support VLAN transparency in Neutron OVS agent.<br>
><br>
> - There is a previous incomplete effort to provide this support:<br>
> <a href="https://bugs.launchpad.net/neutron/+bug/1705719" rel="noreferrer" target="_blank">https://bugs.launchpad.net/neutron/+bug/1705719</a>. Patches are here:<br>
> <a href="https://review.openstack.org/#/q/project:openstack/neutron+topic:bug/1705719" rel="noreferrer" target="_blank">https://review.openstack.org/#/q/project:openstack/neutron+topic:bug/1705719</a><br>
> - Agreement was for Ruijing to look at the existing patches to re-start<br>
> the effort. Thomas Morin may provide help for this<br>
> - While on this topic, the conversation temporarily forked to the use of<br>
> registers instead of ovsdb port tags in the L2 agent's br-int and possibly removing<br>
> br-tun. Thomas Morin committed to drafting an RFE for this.<br>
><br>
> * Mike Kolesnik, Omer Anson, Irena Berezovsky, Takashi Yamamoto, Lucas<br>
> Alvares, Ricardo Noriega, Miguel Ajo, Isaku Yamahata presented the proposal<br>
> to implement a common mechanism to achieve synchronization between<br>
> Neutron's DB and the DBs of sub-projects / SDN frameworks<br>
><br>
> - Currently each sub-project / SDN framework has its own solution for<br>
> this problem. The group thinks that a common solution can be achieved<br>
> - The agreement was to create a specification where the common solution<br>
> can be fleshed out<br>
> - The synchronization mechanism will exist in Neutron<br>
><br>
> * Mike Kolesnik (networking-odl) requested feedback from members of other<br>
> Neutron sub-projects about the value of inheriting ML2 Neutron's unit tests<br>
> to get "free testing" for mechanism drivers<br>
><br>
> - The conclusion was that there is no value in that practice for the<br>
> sub-projects<br>
> - Sam Betts and Miguel Lavalle will explore moving unit test utils to<br>
> neutron-lib to enable subprojects to create their own base classes<br>
> - Mike Kolesnik will document a guideline for sub-projects not to<br>
> inherit unit tests from Neutron<br>
><br>
><br>
> API topics<br>
> ========<br>
><br>
> * Isaku Yamahata presented a proposal of a new API for cloud admins to<br>
> retrieve the physical networks configured in compute hosts<br>
><br>
> - This information is currently stored in configuration files. In<br>
> agent-less environments it is difficult to retrieve<br>
> - The agreement was to extend the agent API to expose the physnet as a<br>
> standard attribute. This will be fed by a pseudo-agent<br>
><br>
> * Isaku Yamahata presented a proposal of a new API to report mechanism<br>
> drivers health<br>
><br>
> - The overall idea is to report mechanism driver status, similar to the<br>
> agents API, which reports agent health. In the case of the mechanism drivers<br>
> API, it would report connectivity to the backend SDN controller or MQ server<br>
> and report its health/config periodically<br>
> - Thomas Morin pointed out that this is relevant not only for ML2<br>
> mechanism drivers but also for all drivers of different services<br>
> - The agreement was to start with a specification where we scope the<br>
> proposal into something manageable for implementation<br>
><br>
> * Yushiro Furukawa proposed to add support of 'snat' as a loggable resource<br>
> type: <a href="https://bugs.launchpad.net/neutron/+bug/1752290" rel="noreferrer" target="_blank">https://bugs.launchpad.net/neutron/+bug/1752290</a><br>
><br>
> - The agreement was to implement it in Rocky<br>
> - Brian Haley agreed to be the approver<br>
><br>
> * Hongbin Lu indicated that if users provide different kinds of invalid<br>
> query parameters, the behavior of the Neutron API looks unpredictable (<br>
> <a href="https://bugs.launchpad.net/neutron/+bug/1749820" rel="noreferrer" target="_blank">https://bugs.launchpad.net/neutron/+bug/1749820</a>)<br>
><br>
> - The proposal is to improve the predictability of the Neutron API by<br>
> handling invalid query parameters consistently<br>
> - The proposal was accepted. It will need to provide API discoverability<br>
> when behavior changes on filter parameter validation<br>
> - It was also recommended to discuss this with the API SIG to get their<br>
> guidance. The discussion already started in the mailing list:<br>
> <a href="http://lists.openstack.org/pipermail/openstack-dev/2018-March/128021.html" rel="noreferrer" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/2018-March/128021.html</a><br>
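To make the intended behavior concrete, here is a minimal sketch of consistent filter-parameter handling; the helper names are hypothetical illustrations, not Neutron's actual code:<br>

```python
# Hypothetical sketch: unknown filter keys are rejected up front with an
# explicit error instead of being silently ignored, which is one source of
# the unpredictability described in bug 1749820.

class InvalidFilter(Exception):
    """Raised for a query parameter the resource does not support."""

def validate_filters(params, allowed):
    """Return only known filters; reject anything else consistently."""
    unknown = set(params) - set(allowed)
    if unknown:
        raise InvalidFilter(
            "Unsupported query parameters: %s" % ", ".join(sorted(unknown)))
    return {k: v for k, v in params.items() if k in allowed}
```

For example, a request filtering on an unsupported key such as "bogus" would get an explicit error rather than an unfiltered result.<br>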
><br>
><br>
> Openflow Manager and Common Classification Framework<br>
> ==========================================<br>
><br>
> * The Openflow manager implementation needs reviews to continue making<br>
> progress<br>
><br>
> - The approved spec is here: <a href="https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/l2-extension-ovs-flow-management.html" rel="noreferrer" target="_blank">https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/l2-extension-ovs-flow-management.html</a><br>
> - The code is here: <a href="https://review.openstack.org/323963" rel="noreferrer" target="_blank">https://review.openstack.org/323963</a><br>
> - Thomas Morin, David Shaughnessy and Miguel Lavalle discussed and<br>
> reviewed the implementation during the last day of the PTG. The result of<br>
> that conversation was reflected in the patch. Thomas and Miguel committed<br>
> to continue reviewing the patch<br>
><br>
> * The Common Classification Framework (<a href="https://specs.openstack.org/openstack/neutron-specs/specs/pike/common-classification-framework.html" rel="noreferrer" target="_blank">https://specs.openstack.org/openstack/neutron-specs/specs/pike/common-classification-framework.html</a>)<br>
> needs to be adopted by its potential consumers: QoS, SFC, FWaaS<br>
><br>
> - David Shaughnessy and Miguel Lavalle met with Slawek Kaplonski over<br>
> IRC on the last day of the PTG (<a href="http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-03-02.log.html#t2018-03-02T12:00:34" rel="noreferrer" target="_blank">http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-03-02.log.html#t2018-03-02T12:00:34</a>) to discuss the adoption of the framework<br>
> in QoS code. The agreement was to have a PoC for the DSCP marking rule,<br>
> since it uses OpenFlow and wouldn't involve big backend changes<br>
><br>
> - David Shaughnessy and Yushiro Furukawa are going to meet to discuss<br>
> adoption of the framework in FWaaS<br>
><br>
><br>
> Neutron to Neutron interconnection<br>
> =========================<br>
><br>
> * Thomas Morin walked the team through an overview of his proposal (<br>
> <a href="https://review.openstack.org/#/c/545826" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/545826</a>) for Neutron to Neutron<br>
> interconnection, whereby the following requirements are satisfied:<br>
><br>
> - Interconnection is consumable on-demand, without admin intervention<br>
> - Network isolation is maintained and private IP addressing can be used end<br>
> to end<br>
> - The overhead of packet encryption is avoided<br>
><br>
> * Feedback was positive and the agreement is to continue developing and<br>
> reviewing the specification<br>
><br>
><br>
> L3 and L3 flavors<br>
> ============<br>
><br>
> * Isaku Yamahata shared with the team that the implementation of routers<br>
> using the L3 flavors framework gives rise to the need to specify the<br>
> order in which callbacks are executed in response to events<br>
><br>
> - Over the past couple of months several alternatives have been<br>
> considered: callback cascading among resources, SQLAlchemy events,<br>
> assigning priorities to callbacks responding to the same event<br>
> - The agreement was an approach based on assigning a priority structure<br>
> to callbacks in neutron-lib: <a href="https://review.openstack.org/#/c/541766" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/541766</a><br>
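The idea can be illustrated with a generic sketch of priority-ordered callback dispatch; the names below are illustrative only, not the actual neutron-lib registry API:<br>

```python
import collections

# Illustrative sketch: callbacks subscribed to the same event carry a
# priority, and notification runs them in priority order (lower runs first)
# regardless of registration order.
_callbacks = collections.defaultdict(list)

def subscribe(callback, event, priority=1000):
    _callbacks[event].append((priority, callback))
    _callbacks[event].sort(key=lambda item: item[0])

def notify(event, **kwargs):
    return [cb(**kwargs) for _, cb in _callbacks[event]]
```

For instance, a callback subscribed with priority 5 is guaranteed to run before one subscribed with priority 10, even if it was registered later.<br>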
><br>
> * Isaku Yamahata shared with the team the progress made with the PoC for an<br>
> Openflow based DVR: <a href="https://review.openstack.org/#/c/472289/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/472289/</a> and<br>
> <a href="https://review.openstack.org/#/c/528336/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/528336/</a><br>
><br>
> - There was a discussion on whether we need to ask the OVS community to<br>
> make IPv6 modifications to support this PoC. The conclusion was that the<br>
> feature already exists<br>
> - There was also an agreement for David Chou to add Tempest testing for the<br>
> scenario of mixed agents<br>
><br>
><br>
> neutron-lib<br>
> ========<br>
><br>
> * The team reviewed two neutron-lib specs, providing feedback through<br>
> Gerrit:<br>
><br>
> - A spec to rehome db api and utils into neutron-lib:<br>
> <a href="https://review.openstack.org/#/c/473531" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/473531</a>.<br>
> - A spec to decouple neutron db models and ovo for neutron-lib:<br>
> <a href="https://review.openstack.org/#/c/509564/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/509564/</a>. There is agreement from Ihar<br>
> Hrachyshka that OVO base classes should go into neutron-lib, but he asked not<br>
> to move neutron.objects.db.api yet since it's still in flux<br>
><br>
> * Manjeet Singh Bhatia proposed making payload consistent for all the<br>
> callbacks so all the operations of an object get the same type of payload (<br>
> <a href="https://bugs.launchpad.net/neutron/+bug/1747747" rel="noreferrer" target="_blank">https://bugs.launchpad.net/neutron/+bug/1747747</a>)<br>
><br>
> - The agreement was for Manjeet to document all the instances in the<br>
> code where this is happening so he and others can work on making the<br>
> payloads consistent<br>
><br>
><br>
> Proposal to migrate neutronclient python bindings to OpenStack SDK<br>
> ==================================================<br>
><br>
> * Akihiro Motoki proposed to change the first priority of neutron-related<br>
> python binding to OpenStack SDK rather than neutronclient python bindings,<br>
> given that OpenStack SDK became official in Queens (<br>
> <a href="http://lists.openstack.org/pipermail/openstack-dev/2018-February/127726.html" rel="noreferrer" target="_blank">http://lists.openstack.org/pipermail/openstack-dev/2018-February/127726.html</a><br>
> )<br>
><br>
> - The proposal is to implement all Neutron features in OpenStack SDK as<br>
> first-class citizens, with the neutronclient OSC plugin consuming the<br>
> corresponding OpenStack SDK APIs<br>
> - New features should be supported in OpenStack SDK and<br>
> OSC/neutronclient OSC plugin as the first priority<br>
> - If a new feature depends on neutronclient python bindings, it can be<br>
> implemented in neutronclient python bindings first and then ported as<br>
> part of the existing feature transition<br>
> - Existing features only supported in neutronclient python bindings are<br>
> ported into OpenStack SDK, and neutronclient OSC plugin will consume them<br>
> once they are implemented in OpenStack SDK<br>
> - There is no plan to drop the neutronclient python bindings since many<br>
> projects consume them. They will be maintained as-is<br>
> - Projects like Nova that consume a small set of neutron features can<br>
> continue using neutronclient python bindings. Projects like Horizon or Heat<br>
> that would like to support a wide range of features might be better off<br>
> switching to OpenStack SDK<br>
> - Proposal was accepted<br>
><br>
><br>
> Cross project planning with Nova<br>
> ========================<br>
><br>
> * Minimum bandwidth support in the Nova scheduler. The summary of the<br>
> outcome of the discussion and further work done after the PTG is the<br>
> following:<br>
><br>
> - Minimum bandwidth support guarantees a port minimum bandwidth. Strict<br>
> minimum bandwidth support requires cooperation with the Nova scheduler, to<br>
> avoid bandwidth overcommitment of physical interfaces<br>
> - Neutron will create networking RPs (Resource Providers) in each host<br>
> under the compute RP with proper traits and then report resource<br>
> inventories based on the discovered and / or configured resource inventory<br>
> in the host<br>
> - The hostname will be used by Neutron to find the compute RP created by<br>
> Nova for the compute host. This convention can create ambiguity in<br>
> deployments with multiple cells, where hostnames may not be unique. However,<br>
> this problem is not exclusive to this effort, so its solution will be<br>
> considered out of scope<br>
> - Two new standard Resource Classes will be defined to represent the<br>
> bandwidth in each direction, named `NET_BANDWIDTH_INGRESS_BITS_SEC` and<br>
> `NET_BANDWIDTH_EGRESS_BITS_SEC`<br>
> - New traits will be defined to distinguish a network back-end agent:<br>
> `NET_AGENT_SRIOV`, `NET_AGENT_OVS`. Also new traits will be used to<br>
> indicate which physical network a given Network RP is connected to<br>
> - Neutron will express a port's bandwidth needs through the port API in<br>
> a new attribute named "resource_request" that will include ingress<br>
> bandwidth, egress bandwidth, the physical net and the agent type<br>
> - The first implementation of this feature will support server create<br>
> with pre-created Neutron ports having QoS policy with minimum bandwidth<br>
> rules. Server create with networks having QoS policy minimum bandwidth rule<br>
> will be out of scope of the first implementation, because currently, in<br>
> this case, the corresponding port creations happen after the scheduling<br>
> decision has been made<br>
> - For the first implementation, Neutron should reject a QoS minimum<br>
> bandwidth policy rule created on a bound port<br>
> - The following cases don't involve any interaction in Nova and as a<br>
> consequence, Neutron will have to adjust the resource allocations: QoS<br>
> policy rule bandwidth amount change on a bound port and QoS aware sub port<br>
> create under a bound parent port<br>
> - For more detailed discussion, please go to the following specs:<br>
> <a href="https://review.openstack.org/#/c/502306" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/502306</a> and <a href="https://review.openstack.org/#/c/508149" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/508149</a><br>
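As a rough illustration of the pieces above, a minimum-bandwidth port's "resource_request" might carry data shaped like this; the bandwidth amounts and the physnet trait name (`CUSTOM_PHYSNET_PHYSNET0`) are hypothetical, and the exact schema is being worked out in the specs referenced above:<br>

```python
# Hypothetical sketch of what a port's "resource_request" attribute might
# contain: resource amounts keyed by the new standard Resource Classes,
# plus required traits for the back-end agent and the physical network.
# The physnet trait name and the amounts are made up for illustration.
resource_request = {
    "resources": {
        "NET_BANDWIDTH_INGRESS_BITS_SEC": 1000000,
        "NET_BANDWIDTH_EGRESS_BITS_SEC": 1000000,
    },
    "required": [
        "NET_AGENT_OVS",            # back-end agent trait
        "CUSTOM_PHYSNET_PHYSNET0",  # hypothetical physnet trait name
    ],
}
```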
><br>
> * Provide Port Binding Information for Nova Live Migration (<br>
> <a href="https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/portbinding_information_for_nova.html" rel="noreferrer" target="_blank">https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/portbinding_information_for_nova.html</a> and<br>
> <a href="https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/neutron-new-port-binding-api.html" rel="noreferrer" target="_blank">https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/neutron-new-port-binding-api.html</a>).<br>
><br>
> - There was no discussion around this topic<br>
> - There was only an update to both teams about the solid progress that<br>
> has been made on both sides: <a href="https://review.openstack.org/#/c/414251/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/414251/</a> and<br>
> <a href="https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/neutron-new-port-binding-api" rel="noreferrer" target="_blank">https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/neutron-new-port-binding-api</a><br>
> - The plan is to finish this in Rocky<br>
><br>
> * NUMA aware switches <a href="https://review.openstack.org/#/c/541290/" rel="noreferrer" target="_blank">https://review.openstack.org/#/c/541290/</a><br>
><br>
> - The agreement on this topic was to do this during Rocky entirely in<br>
> Nova using a config option which is a list of JSON blobs<br>
><br>
> * Miguel Lavalle and Hongbin Lu proposed to add device_id of the associated<br>
> port to the floating IP resource<br>
><br>
> - The use case is to allow Nova to filter instances by floating IPs<br>
> - The agreement was that this would be adding an entirely new contract<br>
> to Nova with new query parameters. This will not be implemented in Nova,<br>
> especially since the use case can already be fulfilled by making 3 API<br>
> calls in a client: find floating IP via filter (Neutron), use that to<br>
> filter port to get the device_id (Neutron), use that to get the server<br>
> (Nova)<br>
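The three-call lookup can be sketched roughly as follows; the method names follow the openstacksdk style on a connection object but are assumptions for illustration, not a definitive implementation:<br>

```python
def server_for_floating_ip(conn, address):
    """Resolve a server from a floating IP address in three API calls:
    floating IP -> port -> server (Neutron, Neutron, Nova)."""
    # 1) Find the floating IP via a filter (Neutron).
    fip = next(iter(conn.network.ips(floating_ip_address=address)))
    # 2) Use it to fetch the port and read its device_id (Neutron).
    port = conn.network.get_port(fip.port_id)
    # 3) Use the device_id to fetch the server (Nova).
    return conn.compute.get_server(port.device_id)
```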
><br>
><br>
> Team photos<br>
> =========<br>
><br>
> * Thanks to Kendall Nelson, the official PTG team photos can be found here:<br>
> <a href="https://www.dropbox.com/sh/dtei3ovfi7z74vo/AABT7UR5el6iXRx5WihkbOB3a/Neutron?dl=0" rel="noreferrer" target="_blank">https://www.dropbox.com/sh/dtei3ovfi7z74vo/AABT7UR5el6iXRx5WihkbOB3a/Neutron?dl=0</a><br>
><br>
> * Thanks to Nikolai de Figueiredo for sharing with us pictures of our team<br>
> dinner. Please find a couple of them attached to this message<br>
<br>
--<br>
fumihiko kakuma <<a href="mailto:kakuma@valinux.co.jp" target="_blank">kakuma@valinux.co.jp</a>><br>
<br>
<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="mailto:OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</blockquote></div></div></div>