[openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

Carl Baldwin carl at ecbaldwin.net
Thu Nov 19 17:47:26 UTC 2015


On Mon, Nov 9, 2015 at 1:39 PM, Shraddha Pandhe
<spandhe.openstack at gmail.com> wrote:
> That's great. The L3 network model is definitely one of our most
> important requirements. All our go-forward deployments are going to be
> L3, so this is a big deal for us.

I think we're on a good path to getting this figured out over the next
two releases.  I plan to work out booting a VM on an IpNetwork in
Mitaka, and a few more heavy-lifting migrations and API additions
will follow in the N release.

>> Solving this problem at the IPAM level has come up in discussion but I
>> don't have any references for that.  It is something that I'm still
>> considering, but I haven't worked out all of the details for how this
>> can work in a portable way.  Could you describe how you imagine this
>> flow would work from a user's perspective?  Specifically, when a user
>> wants to boot a VM, what precise API calls would be made to achieve
>> this on your network, and where would the IPAM data come into play?
>
>
> Here's what the flow looks like to me.
>
> 1. The user sends a boot request as usual. They need not know all the
> network and subnet information beforehand; all they do is send a boot
> request.
>
> 2. The scheduler will pick a node in an L3 rack. The way we map nodes <->
> racks is as follows:
>     a. For VMs, we store rack_id in nova.conf on compute nodes.
>     b. For Ironic nodes, right now we have static IP allocation, so in
> practice we already know which IP we want to assign. But when we move to
> dynamic allocation, we would probably use the 'chassis' or 'driver_info'
> fields to store the rack_id.
>
> 3. Nova compute will try to pick a network ID for this instance.  At this
> point, it needs to know what networks (or subnets) are available in this
> rack. Based on that, it will pick a network ID and send a port creation
> request to Neutron. At Yahoo, to avoid some back-and-forth, we send a fake
> network_id and let the plugin do all the work.
>
> 4. We need some information associated with the network/subnet that tells
> us what rack it belongs to. Right now, for VMs, we have that information
> embedded in the physnet name, but we would like to move away from that. If
> subnets had a column for this - e.g. a tag - it would solve our problem.
> Ideally, we would like a 'rack_id' column or a new 'racks' table that maps
> racks to subnets, or something similar. We are open to different ideas
> that work for everyone. This is where IPAM can help.
>
> 5. We have another requirement where we want to store multiple gateway
> addresses for a subnet, just like name servers.

Do you have a more detailed description of this use case for
multiple gateways?
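
For illustration only, here is a rough sketch of how steps 2-4 above
could hang together.  Everything in it is hypothetical - the rack_id
option in nova.conf, the 'racks' and availability tables, and the helper
name are assumptions for the sake of discussion, not existing Nova or
Neutron code:

    # Compute-node side (step 2a): a hypothetical [DEFAULT] rack_id
    # option in nova.conf, read with oslo.config.
    from oslo_config import cfg

    cfg.CONF.register_opts([
        cfg.StrOpt('rack_id', help='L3 rack this compute node lives in'),
    ])

    # IPAM side (steps 3-4): a hypothetical 'racks' table mapping
    # rack_id -> subnet_id, modelled with sqlite3 only to keep the
    # sketch self-contained.
    import sqlite3

    db = sqlite3.connect(':memory:')
    db.executescript("""
        CREATE TABLE racks (rack_id TEXT, subnet_id TEXT);
        CREATE TABLE subnet_availability (subnet_id TEXT PRIMARY KEY,
                                          free_ips INTEGER);
    """)

    def pick_subnet_for_rack(rack_id, num_ips=1):
        """Return a subnet in the given rack with at least num_ips free.

        The driver, not the user, decides which subnet to allocate
        from, based on the rack the scheduler picked.
        """
        row = db.execute(
            "SELECT r.subnet_id FROM racks r"
            " JOIN subnet_availability a ON a.subnet_id = r.subnet_id"
            " WHERE r.rack_id = ? AND a.free_ips >= ?"
            " ORDER BY a.free_ips DESC LIMIT 1",
            (rack_id, num_ips)).fetchone()
        if row is None:
            raise LookupError('no subnet in rack %s with %d free IPs'
                              % (rack_id, num_ips))
        return row[0]

Whether a per-subnet rack tag or a separate table is the right shape is
exactly the open question above; the lookup itself is the same either
way.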

> We also have a requirement to make scheduling decisions based on IP
> availability. We want to allocate multiple IPs to a host, e.g. X IPs per
> host. The flow in that case would be:

I will soon (today hopefully) post an update to my spec [1] outlining
how I envision the flow working.  I don't plan to store any
information in IPAM for my use cases, but the networking-calico project
may still want to pursue some enhancements to the interface to allow
this down the road.

I need to think about your request for more than one IP on the port.
This is a use case that I had not previously considered.  Thanks for
bringing it up.
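
For what it's worth, one way the "more than one IP on a port" piece
could be expressed today is via multiple fixed_ips entries on a single
port; whether that is the right vehicle for a --num-ips style request
is the open question.  A minimal sketch, assuming the rack-to-subnet
mapping has already produced a network_id/subnet_id (the credentials
and the helper name are placeholders):

    from neutronclient.v2_0 import client

    # Placeholder credentials; a real caller would use its service
    # credentials or keystone session.
    neutron = client.Client(username='...', password='...',
                            tenant_name='...', auth_url='...')

    def create_port_with_n_ips(network_id, subnet_id, num_ips):
        """Create one port and ask IPAM for num_ips addresses on it.

        Each fixed_ips entry without an explicit ip_address asks
        Neutron/IPAM to allocate one address from the given subnet.
        """
        body = {'port': {
            'network_id': network_id,
            'fixed_ips': [{'subnet_id': subnet_id}
                          for _ in range(num_ips)],
        }}
        return neutron.create_port(body)['port']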

Carl

[1] https://review.openstack.org/#/c/225384/

> 1. The user sends a boot request with --num-ips X.
>     The network/subnet-level complexities need not be exposed to the user.
> For a better experience, all we want our users to tell us is the number of
> IPs they want.
>
> 2. When the scheduler tries to find an appropriate host in the L3 racks,
> we want it to find a rack that can satisfy this IP requirement. So the
> scheduler will basically say, "give me all racks that have >X IPs
> available". If we have a 'racks' table in IPAM, that would help.
>     Once the scheduler gets a rack, it will apply the remaining filters to
> narrow down to one host and call nova-compute. The IP count will be
> propagated from the scheduler to nova-compute.
>
>
> 3. Nova compute will call Neutron and send the node details and IP count
> along. The Neutron IPAM driver will then look at the node details, query
> the database to find a network in that rack, and allocate X IPs from the
> subnet.
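
And a very small sketch of the scheduler-side check in step 2 above -
finding the racks with enough free IPs for a --num-ips request -
assuming the per-rack free-IP counts can be pulled from something like
the hypothetical 'racks'/availability tables sketched earlier (the
helper name is made up):

    def racks_with_capacity(rack_free_ips, num_ips):
        """Filter racks that can satisfy a --num-ips request.

        rack_free_ips: dict mapping rack_id -> number of free IPs,
        e.g. as reported by the IPAM backend.
        """
        return [rack for rack, free in rack_free_ips.items()
                if free >= num_ips]

    # Example: the user asked for --num-ips 4; only rack-b and rack-c
    # qualify, and the remaining scheduler filters then narrow those
    # racks down to a single host.
    racks_with_capacity({'rack-a': 2, 'rack-b': 10, 'rack-c': 6}, 4)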


