Okay.. creating multiple fixed IPs on the port-create call works just fine. That definitely makes device-level addressing feasible, because each device will have its own port (and MAC address). Easy enough.... <div><br><div>Need some validation from the quantum L2 folk: Quantum IPAM isn't really IPAM, it's 'IPAM bound to L2 networking', which may come with some assumptions we aren't thinking about..</div>
<div><br></div><div>So if we are to define a VIP which needs an L3 address within a subnet created on a tenant quantum network (making allocation_pools not something you can decide on before you need them), we will need to attach it to a port (hence a MAC address) in order for quantum to see the L3 address as 'allocated'. If it is not 'allocated', other tenant-generated devices (nova VM or otherwise) could have the same VIP L3 address 'allocated' to them by quantum. As long as a fixed_ip is managed by quantum, it must be 'tied' to a port (thus a MAC address) to avoid it being assigned elsewhere. I had not expected that.</div>
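<div>For what it's worth, here is a minimal Python sketch (stdlib only; the helper name and ids are mine, not quantum's) of the POST /v2.0/ports body that reserves a VIP address by tying it to a port:</div>

```python
import json

def vip_port_body(network_id, subnet_id, vip_address=None):
    """Build a POST /v2.0/ports body that reserves a VIP fixed IP.

    Omitting ip_address lets quantum's IPAM pick the address; supplying
    one asks for that specific fixed IP. Either way the address ends up
    bound to the port's MAC, which is the surprise discussed above.
    """
    fixed_ip = {"subnet_id": subnet_id}
    if vip_address is not None:
        fixed_ip["ip_address"] = vip_address
    return {"port": {"network_id": network_id, "fixed_ips": [fixed_ip]}}

# Hypothetical ids -- substitute values from your own deployment.
body = vip_port_body("a1d4ed77-122a-42d6-97eb-0e8394005374",
                     "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25",
                     vip_address="10.1.0.100")
print(json.dumps(body, indent=2))
```

<div>The point being: there is no way to say "just allocate me an address" without also creating a port and therefore a MAC.</div>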
<div><br></div><div>Just to cover our bases, my question is: does assigning a fixed IP to a port (MAC address) necessarily dictate anything for quantum L2 networking? If so, what does that mean for fixed_ip mobility and HA? Did we just bind L2 to L3 in quantum and make it a necessary update for all our HA mechanisms? I hope not.. </div>
<div><br></div><div>What happens when the LBaaS device's HA mechanism decides that a VIP L3 address is no longer 'bound' to the MAC address of the quantum port and needs to move it to a different MAC address? Will that screw up anything at L2 for quantum, now that they are tied together in its data model? </div>
<div><br></div><div>If it helps, we can mandate the use of MAC masquerading (with either a quantum-generated or LBaaS-generated MAC address.. doesn't matter) for the quantum ports we create for LBaaS devices. That works. In an HA event, frames from the port's MAC address can start coming from any device in the LBaaS cluster. The assumption is that when whatever LBaaS device takes over the port's MAC address sends out GARPs for the fixed_ips, the associated quantum L2 plugin devices will clean themselves up and forward that port's traffic to its new home. This is how physical L2 switching without L3 bindings works today, but in my testing I am only talking to one OVS.. so of course that works. I didn't see anything in quantum 'dictating' that L2 HA behavior has to work this way, but I am counting on it. It's an assumption. If I had not bound my L3 address to L2 MACs in quantum just to get some addressing allocated, I would be feeling better about that assumption.</div>
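<div>For anyone who wants the GARP mechanics spelled out, here's a sketch that builds the gratuitous ARP frame a takeover device would broadcast (pure-Python illustration; the helper and the MAC/IP values are placeholders, not anything from quantum):</div>

```python
import struct

def garp_frame(mac, ip):
    """Build a gratuitous ARP request: sender and target IP are both the
    moved VIP, target MAC is zeroed, and the frame is broadcast so every
    L2 device on the segment relearns which port owns this MAC."""
    mac_bytes = bytes(int(b, 16) for b in mac.split(":"))
    ip_bytes = bytes(int(o) for o in ip.split("."))
    # Ethernet header: broadcast dst, takeover MAC as src, ARP ethertype
    eth = b"\xff" * 6 + mac_bytes + struct.pack("!H", 0x0806)
    # ARP header: htype=1 (ethernet), ptype=IPv4, hlen=6, plen=4, op=1 (request)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
    arp += mac_bytes + ip_bytes      # sender hardware / protocol address
    arp += b"\x00" * 6 + ip_bytes    # target hardware (zero) / protocol address
    return eth + arp

frame = garp_frame("fa:16:3e:c2:82:74", "10.1.0.7")
```

<div>Whether every quantum plugin's dataplane honors that relearn-on-GARP behavior is exactly the open question above.</div>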
<div><br></div><div>NOTE: Quantum ports have a device_id, which for nova VMs gets set to the generated vNIC id (sorry, I used the term vif earlier..), but which is neither required nor validated by quantum. I assume that for a quantum port mobility solution, the device_id should be populated with a reference to the LBaaS HA cluster in some way, to aid in troubleshooting. I did not see anything making the device_id significant.. it's not required on the port.</div>
<div><br></div><div>OR.. is L2 HA now screwed and.. </div><div><br></div><div>When an LBaaS device HA event is triggered, if ethernet-like L2 behavior cannot be assumed for every quantum plugin, we will need to have mobile fixed_ips moving between ports (hence MAC addresses). This is really making the LBaaS devices do L3 HA. The LBaaS device (or preferably the LBaaS service) would have to move the fixed_ips to a different quantum port (hence MAC address). You would have to update the first port removing the fixed_ips, then update the second port adding the fixed_ips, and pray that your fixed_ips don't get allocated to someone else in however much time passes between the two atomic quantum API actions. ('two atomic quantum actions'.. that's funny... somewhere a physicist just rolled over in his grave..) If this is true, we just lost switch-cleanup speeds and now have API update speeds dictating the time it takes to fail over an HA networking device. That might be okay.. it might not.</div>
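<div>To make the race concrete, here's a sketch of the two PUT /v2.0/ports/{id} bodies that "move" would require (the helper and the shapes are mine; the key point is that nothing is atomic across the pair):</div>

```python
def failover_bodies(failed_port, standby_port, vip_fixed_ips):
    """Return the two PUT /v2.0/ports/{id} bodies needed to move a set of
    VIP fixed_ips between ports. Between the first PUT (release) and the
    second (claim), the addresses are unallocated and quantum IPAM could
    hand them to someone else -- that is the race described above."""
    release = {"port": {"fixed_ips": [fip for fip in failed_port["fixed_ips"]
                                      if fip not in vip_fixed_ips]}}
    claim = {"port": {"fixed_ips": standby_port["fixed_ips"] + vip_fixed_ips}}
    return release, claim

# Hypothetical port state; in practice you'd GET both ports first.
failed = {"fixed_ips": [{"subnet_id": "s1", "ip_address": "10.1.0.5"},
                        {"subnet_id": "s1", "ip_address": "10.1.0.7"}]}
standby = {"fixed_ips": [{"subnet_id": "s1", "ip_address": "10.1.0.6"}]}
vips = [{"subnet_id": "s1", "ip_address": "10.1.0.7"}]
release_body, claim_body = failover_bodies(failed, standby, vips)
```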
<div><br></div><div>Right now I am depending on L2 HA (port mobility) working. It's a big 'if', without dynamic MAC learning and expiration of the L2 forwarding table on GARP being spelled out in the quantum L2 specification. I had not expected the requirement for L3 addresses to be bound to L2 ports just to use quantum IPAM. Again.. I'm just worried about what the 'binding' of a fixed_ip to a port 'means' to quantum L2 devices now....</div>
<div><br></div><div><div>Thoughts?</div><div><br></div><div>John </div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Nov 29, 2012 at 3:29 PM, John Gruber <span dir="ltr"><<a href="mailto:john.t.gruber@gmail.com" target="_blank">john.t.gruber@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">So I will have to go look at the code in quantumclient... I just created a json structure in a POST on /ports with fixed_ip set to an array of 4 json objects, each with only a subnet_id attribute set to a valid subnet id. My Quantum service created only a single fixed IP address. So I assume I can get it working like the quantum client code you showed does; I'll get that working so we'll have multiple fixed_ips allocated at port creation. <br>
<div><br></div><div>Any suggestions on how we add a fixed IP, with Quantum allocating the address, after port creation? IOW.. how do we get additional fixed IP allocation by Quantum on a PUT request to /ports? That's what will be needed if we are to add VIPs dynamically on a Quantum-managed subnet. I assume we will not want to 'pre-allocate' an allocation_pool managed by the LBaaS from a tenant's subnet just to attach a device and define a VIP which is local to a tenant network. </div>
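<div>If the PUT handler processes fixed_ips the same way the POST handler does (an assumption worth verifying against the plugin code), the update body would keep the port's existing entries and append a subnet_id-only object so Quantum picks the new address. A sketch (helper name is mine):</div>

```python
def add_fixed_ip_body(current_fixed_ips, subnet_id):
    """Build a PUT /v2.0/ports/{id} body that keeps the port's existing
    fixed_ips and asks quantum to allocate one more on the given subnet.
    Omitting ip_address should delegate the choice to quantum IPAM."""
    return {"port": {"fixed_ips": list(current_fixed_ips) + [{"subnet_id": subnet_id}]}}

# Hypothetical current state, fetched via GET /v2.0/ports/{id}.
current = [{"subnet_id": "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25",
            "ip_address": "10.1.0.7"}]
body = add_fixed_ip_body(current, "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25")
```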
<div><br></div><div>As always.. I'm perfectly capable of missing something obvious &lt;g&gt;... </div><span class="HOEnZb"><font color="#888888"><div><br></div><div>John</div></font></span><div class="HOEnZb"><div class="h5">
<div><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Nov 29, 2012 at 2:44 PM, Dan Wendlandt <span dir="ltr"><<a href="mailto:dan@nicira.com" target="_blank">dan@nicira.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br><br><div class="gmail_quote"><div>On Thu, Nov 29, 2012 at 12:11 PM, John Gruber <span dir="ltr"><<a href="mailto:john.t.gruber@gmail.com" target="_blank">john.t.gruber@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Looking for advice... <div><br></div><div>I am working with Quantum as an IPAM solution and preparing for the basic methods needed to get L3 addressing on the appropriate subnets for an LBaaS device cluster. </div><div><br>
</div><div>Because every L3 address object inside an L2 failover domain will require the same MAC address, I can't just create separate ports for each L3 object, as a MAC address can only be 'in-use' on one port at a time in Quantum. This makes sense, but leads me to a problem. There is neither 1) a way to specify the number of fixed IP addresses you want allocated when you create a port, nor 2) a way to update a port telling Quantum to allocate additional fixed IPs to that port. </div>
<div><br></div><div>So I am left with tracking all the ports for a network, mapping them to subnets, doing IP address math to find the start and end of the allocation pool, then trying to 'guess', with possibly repeated update calls to the port, what new set of fixed IPs I can put together, letting exceptions tell me if I need to try again. This is a bad plan. I have not done this yet, but will if I have to. It basically reproduces a lot of Quantum IPAM business logic outside of Quantum. </div>
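<div>For the record, the IP-math part of that bad plan would look something like this (stdlib sketch; racy by construction, since quantum can allocate any of these addresses between our guess and our port update):</div>

```python
import ipaddress

def guess_free_ips(pool_start, pool_end, allocated, count):
    """Walk an allocation pool and return up to `count` addresses that no
    known port currently holds. This duplicates quantum IPAM logic outside
    quantum, and the answer can be stale the moment it is computed."""
    addr = ipaddress.IPv4Address(pool_start)
    end = ipaddress.IPv4Address(pool_end)
    taken = {ipaddress.IPv4Address(a) for a in allocated}
    free = []
    while addr <= end and len(free) < count:
        if addr not in taken:
            free.append(str(addr))
        addr += 1
    return free

# Hypothetical pool and allocations gathered from GET /ports.
print(guess_free_ips("10.1.0.2", "10.1.0.254", ["10.1.0.2", "10.1.0.3"], 2))
```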
<div><br></div><div>Is this why it was suggested just to access the DB directly?</div><div><br></div><div>I'm really hoping that I'm wrong and there is a way in the Quantum API that I missed to simply get additional fixed IPs allocated for a port on the same subnet. It seems like an obvious requirement, even for VM hosts with aliased IP addresses on the same quantum port, so I had assumed it would be in the API syntax.</div>
<div><br></div><div>I see bugs for multiple floating IPs:</div><div><br></div><div><a href="https://bugs.launchpad.net/quantum/+bug/1057844" target="_blank">https://bugs.launchpad.net/quantum/+bug/1057844</a><br></div><div>
<br></div><div>
and advice on adding multiple fixed IPs on different subnets:</div><div><br></div><div><a href="https://lists.launchpad.net/openstack/msg17634.html" target="_blank">https://lists.launchpad.net/openstack/msg17634.html</a><br>
</div><div><br>
</div><div>But I am missing how to create multiple fixed_ips on the same subnet for the same port, preferably via calls that update a port and have Quantum allocate additional fixed_ips without the client having to already know which IPs it can have.<br>
</div></blockquote><div><br></div></div><div>From what I'm reading, it would be exactly like the example you linked to above, but specify the same subnet-id twice if you want both of the fixed IPs to be from the same subnet. </div>
<div><br></div><div><div>nicira@com-dev:~/devstack$ quantum subnet-list -c id -c network_id</div><div>+--------------------------------------+--------------------------------------+</div><div>| id | network_id |</div>
<div>+--------------------------------------+--------------------------------------+</div><div>| 2ff33f9d-0468-4d60-97c3-3a41e1ae1d25 | a1d4ed77-122a-42d6-97eb-0e8394005374 |</div><div>+--------------------------------------+--------------------------------------+</div>
</div><div><br></div><div><br></div><div><div>nicira@com-dev:~/devstack$ quantum port-create --fixed-ip subnet_id=2ff33f9d-0468-4d60-97c3-3a41e1ae1d25 --fixed-ip subnet_id=2ff33f9d-0468-4d60-97c3-3a41e1ae1d25 a1d4ed77-122a-42d6-97eb-0e8394005374</div>
<div>Created a new port:</div><div>+----------------+---------------------------------------------------------------------------------+</div><div>| Field | Value |</div>
<div>+----------------+---------------------------------------------------------------------------------+</div><div>| admin_state_up | True |</div>
<div>| device_id | |</div><div>| device_owner | |</div>
<div>| fixed_ips | {"subnet_id": "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25", "ip_address": "10.1.0.7"} |</div><div>| | {"subnet_id": "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25", "ip_address": "10.1.0.8"} |</div>
<div>| id | 20ae47bc-8708-4c65-b761-67d4c8672964 |</div><div>| mac_address | fa:16:3e:c2:82:74 |</div>
<div>| name | |</div><div>| network_id | a1d4ed77-122a-42d6-97eb-0e8394005374 |</div>
<div>| status | ACTIVE |</div><div>| tenant_id | 80a03bf4d7a04839a2ff149357733260 |</div>
<div>+----------------+---------------------------------------------------------------------------------+</div></div><div><br></div><div>Is this sufficient, or am I misunderstanding what you're asking?</div><div><br>
</div>
<div>Dan</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>
</div><div><br></div><div>I even see recent messages where the libvirt driver only supports 1 IP per vif right now:</div><div><br></div><div><a href="http://www.gossamer-threads.com/lists/openstack/dev/20264" target="_blank">http://www.gossamer-threads.com/lists/openstack/dev/20264</a><br>
</div><div><br></div><div>That doesn't help the LBaaS device.</div><div><br></div><div>I didn't find anything obvious showing how to allocate multiple fixed_ips on the same port on the same subnet. Forgetting HA even for a minute, won't we need to allocate a fixed_ip for each VIP on a Quantum-managed subnet? We might be doing this a lot, no?</div>
<div><br></div><div>Am I missing something obvious? </div><span><font color="#888888"><div><br></div><div>John Gruber</div><div><br></div>
</font></span><br></div>_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br></blockquote></div><span><font color="#888888"><br><br clear="all"><div><br></div>-- <br>~~~~~~~~~~~~~~~~~~~~~~~~~~~<br>Dan Wendlandt <div>Nicira, Inc: <a href="http://www.nicira.com" target="_blank">www.nicira.com</a><br>
<div>twitter: danwendlandt<br>
~~~~~~~~~~~~~~~~~~~~~~~~~~~<br></div></div><br>
</font></span><br>
<br></blockquote></div><br></div>
</div></div></blockquote></div><br></div>