[openstack-dev] [nova][newton] Austin summit nova/newton cross-project session recap

Jay Pipes jaypipes at gmail.com
Mon May 2 01:13:01 UTC 2016


Matt, just a quick top-post to say thank you very much for this status 
report as well as the scheduler session status report. Really appreciate 
the help.

Best,
-jay

On 05/01/2016 09:01 PM, Matt Riedemann wrote:
> On Wednesday morning the Nova and Neutron teams got together for a
> design summit session. The full etherpad is here [1].
>
> We talked through three major items.
>
> 1. Neutron routed networks.
>
> Carl Baldwin gave a quick recap that we're on track with the Nova spec
> [2] and had pushed a new revision which addressed Dan Smith's latest
> comments. The spec is highly dependent on Jay Pipes'
> generic-resource-pools spec, which needs to be rebased, and then
> hopefully we can approve that this week and the routed networks one
> shortly thereafter.
>
> We spent some time with Dan Smith sketching out his idea for moving the
> neutron network allocation code from the nova compute node to conductor.
> This would help with a few things:
>
> a) Doing the allocation earlier in the process so it's less expensive if
> we fail on the compute and get into a retry loop.
>
> b) It should clean up a bunch of the allocation code that's in the
> network API today, so we can separate the allocation logic from the
> check/update logic. This would mean that by the time we get to the
> compute the ports are already allocated and we just have to check back
> with Neutron that they are still correct and update their details. And
> that would also mean by the time we get to the compute it looks the same
> whether the user provided the port at boot time or Nova allocated it.
>
> c) Nova can update its allocation tables before scheduling to make a
> more informed decision about where to place the instance based on what
> Neutron has already told us is available.
>
> John Garbutt is planning to work on this cleanup/refactor to move parts
> of the network allocation code from the compute to the conductor (a
> rough sketch of the intended split is below). We'll most likely need a
> spec for this work.
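>
> To make that split concrete, here's a rough, purely hypothetical Python
> sketch; none of these function names are real Nova code, they just
> illustrate "allocate in conductor, check/update on compute":
>
>     # Hypothetical sketch only -- not actual Nova code.
>
>     def allocate_or_validate_ports(requested_networks):
>         # conductor side: create ports in Neutron (or validate ports the
>         # user passed at boot) before scheduling, so a failure here is
>         # cheap and never lands in a compute retry loop
>         return ["port-for-%s" % net for net in requested_networks]
>
>     def check_and_update_ports(host, ports):
>         # compute side: the ports already exist either way, so user-
>         # supplied and Nova-allocated ports look the same; just confirm
>         # they are still valid and update their binding details
>         return [(port, host) for port in ports]
>
>     def build_instance(requested_networks, host="compute-1"):
>         ports = allocate_or_validate_ports(requested_networks)  # conductor
>         # ... record allocations, then pick a host via the scheduler ...
>         bindings = check_and_update_ports(host, ports)          # compute
>         return bindings
>
>     print(build_instance(["net-a"]))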
>
> 2. Get Me a Network
>
> We really just talked about two items here:
>
> a) With the microversion, if the user requests 'auto' network allocation
> and there are no available networks for the project and dry-run
> validation for auto-allocated-topology fails on the Neutron side (the
> default public network and subnet pool aren't set up), we'll fail the API
> request with a 409. I had asked if we should fall back to the existing
> behavior of just not allocating networking, but we decided that it will
> be better to be explicit about a failure if you're requesting 'auto'. In
> most cases projects already have a network available to them when their
> cloud provider sets up their project, so they won't even get to the
> auto-allocated network topology code being written for the spec. But if
> not, it's a failure and not allocating networking is just...weird. Plus
> you can opt into the 'none' behavior with the microversion if that's
> what you really want.
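>
> For illustration only (the exact request syntax is still up for review
> in the microversion), the idea is that the boot request can say "auto"
> or "none" instead of passing a list of networks/ports, roughly:
>
>     # Hypothetical request bodies for POST /servers under the proposed
>     # microversion; field values are just examples.
>
>     auto_request = {
>         "server": {
>             "name": "test-server",
>             "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
>             "flavorRef": "1",
>             # find (or auto-allocate) a usable network; if the project
>             # has none and Neutron's auto-allocated-topology dry-run
>             # fails, the API returns a 409 instead of silently booting
>             # with no networking
>             "networks": "auto",
>         }
>     }
>
>     # "none" explicitly opts into booting with no networking at all
>     none_request = dict(auto_request)
>     none_request["server"] = dict(auto_request["server"], networks="none")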
>
> b) There were some questions about making get-me-a-network more advanced
> than the networking that is set up today (a tenant network behind a
> router). The agreement was that get-me-a-network is for the case that
> the user doesn't care, they just want networking for their instance in
> Nova. Anything that's more advanced should be pre-allocated in Neutron
> and the instance in Nova should be booted with the network/port that was
> pre-allocated in Neutron. There might be future changes/customization to
> the type of network created from the auto-allocated-topology API in
> Neutron, but that should be dealt with only in Neutron and not a concern
> of Nova.
>
> 3. Deprecating nova-network.
>
> The rest of the session was spent discussing the (re)deprecation of
> nova-network. Given the recent couple of user surveys, it's clear that
> deployments have shifted to using Neutron.
>
> We have some gaps in the Nova REST API, but we can work through each of
> those on a case-by-case basis. For example, we won't implement the
> os-virtual-interfaces API for Neutron. Today it returns a 400; that
> could maybe use a more appropriate error code, but it won't be changed
> to return a 200. And for the os-limits API, which returns some compute and
> network resource quota limits info, we can microversion it to simply not
> return the network resources if you're using Neutron. Once we drop
> nova-network we'll update the API again to not return those network
> resources at all; you'll get them from Neutron (if you aren't already).
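>
> As a rough illustration of what that microversion would hide (field
> names from today's os-limits "absolute" section; the exact set to drop
> wasn't pinned down in the session):
>
>     # Illustrative subset of a GET /limits response today. With the
>     # proposed microversion the network-related entries would no longer
>     # be returned when Neutron is the backend; you'd query Neutron's
>     # quota APIs for those instead.
>     limits_absolute = {
>         "maxTotalInstances": 10,
>         "maxTotalCores": 20,
>         "maxTotalRAMSize": 51200,
>         # network-related entries that the microversion would omit:
>         "maxTotalFloatingIps": 10,
>         "maxSecurityGroups": 10,
>         "maxSecurityGroupRules": 20,
>     }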
>
> We also decided it's not worth deprecating nova-network in pieces since
> that gets messy, and something like cells v2 might force us to add
> feature parity if it's not deprecated outright.
>
> And we said it's not worth splitting it out into its own repo since
> that has costs of its own to maintain. If people want to fork the repo
> to keep using it, that's on them, but it won't be supported by the Nova
> team once it's removed.
>
> So given the above, Sean proposed the deprecation patch [3], which by
> now is already (eagerly) approved. Note that there isn't a timetable
> for the actual removal; it could be as early as Ocata, but we need to
> address the REST API gaps and the virt driver CI testing that's using
> nova-network today. So we'll assess where we're at during the midcycle
> and again once we get to Ocata to see if it's possible to remove it.
>
> I have to say, given where we are now with the second attempt at
> deprecating nova-network, it was much more obvious this time around.
> This is a testament to the hard work that the Neutron team has been
> doing for the last few releases to stabilize, test, document and
> generally improve the project so that we are able to get here.
>
> [1] https://etherpad.openstack.org/p/newton-nova-neutron
> [2] https://review.openstack.org/#/c/263898/
> [3] https://review.openstack.org/#/c/310539/
>


