On 04/20/2016 06:40 PM, Matt Riedemann wrote:
> Note that I think the only time Nova gets details about ports in the API
> during a server create request is when doing the network request
> validation, and that's only if there is a fixed IP address or specific
> port(s) in the request, otherwise Nova just gets the networks. [1]
>
> [1]
> https://github.com/openstack/nova/blob/ee7a01982611cdf8012a308fa49722146c51497f/nova/network/neutronv2/api.py#L1123

Actually, nova.network.neutronv2.api.API.allocate_for_instance() is
*never* called by the Compute API service (though, strangely,
deallocate_for_instance() *is* called by the Compute API service).
allocate_for_instance() is *only* ever called in the nova-compute
service:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/compute/manager.py#L1388

I was actually on a hangout today with Carl, Miguel and Dan Smith,
talking about just this particular section of code with regard to IPAM
handling for routed networks.

What I believe we'd like to do is move to a model where we call out to
Neutron here in the conductor:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L397

and ask Neutron to give us as much information about available subnet
allocation pools and segment IDs as it can *before* we end up calling
the scheduler here:

https://github.com/openstack/nova/blob/7be945b53944a44b26e49892e8a685815bf0cacb/nova/conductor/manager.py#L415

Not only will the segment IDs allow us to make better use of network
affinity in placement decisions, but doing this kind of "probing" for
network information in the conductor is inherently more scalable than
doing it all in allocate_for_instance() on the compute node while
holding the giant COMPUTE_NODE_SEMAPHORE lock.

Best,
-jay
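
For concreteness, here is a rough, untested sketch of what such a
conductor-side probe of Neutron might look like before the scheduler
call. This is purely illustrative, not an actual Nova patch:
probe_network_candidates() is a made-up helper name, obtaining the
keystoneauth session is elided, and only neutronclient's list_subnets()
is a real API call here. The segment_id field is only populated on
subnets that belong to routed (segmented) networks.

    # Hypothetical sketch: gather subnet allocation pools and segment IDs
    # for the requested networks so the conductor can feed them to the
    # scheduler before select_destinations() is called.
    from neutronclient.v2_0 import client as neutron_client


    def probe_network_candidates(session, requested_network_ids):
        """Return {network_id: [subnet info dicts]} for scheduling hints."""
        neutron = neutron_client.Client(session=session)
        candidates = {}
        for net_id in requested_network_ids:
            subnets = neutron.list_subnets(network_id=net_id)['subnets']
            candidates[net_id] = [
                {
                    'subnet_id': subnet['id'],
                    # Only set when the subnet is associated with a
                    # routed-network segment.
                    'segment_id': subnet.get('segment_id'),
                    'allocation_pools': subnet.get('allocation_pools', []),
                }
                for subnet in subnets
            ]
        return candidates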