[openstack-dev] [Quantum] [LBaaS] Quantum create port
John Gruber
john.t.gruber at gmail.com
Fri Nov 30 00:27:29 UTC 2012
Okay.. creating multiple fixed IPs on the port create call works just fine.
That definitely makes device-level addressing feasible, because each device
will have its own port (MAC address). Easy enough....

Need some validation from the quantum L2 folks... Quantum IPAM isn't really
IPAM; it's 'IPAM bound to L2 networking', which may come with some
assumptions we aren't thinking about..
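To make the working call concrete, here is a sketch of the body I POST to /ports, with illustrative network and subnet IDs (the IDs and variable names are mine, not anything mandated by the API): you repeat a subnet_id-only entry once per address you want Quantum to allocate on that subnet.

```python
import json

# Hypothetical IDs, for illustration only.
NET_ID = "a1d4ed77-122a-42d6-97eb-0e8394005374"
SUBNET_ID = "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25"

# Body for POST /v2.0/ports: repeat the subnet_id once per address
# you want Quantum to allocate on that subnet; Quantum picks each IP.
port_body = {
    "port": {
        "network_id": NET_ID,
        "fixed_ips": [
            {"subnet_id": SUBNET_ID},  # first allocation
            {"subnet_id": SUBNET_ID},  # second allocation, same subnet
        ],
    }
}
print(json.dumps(port_body, indent=2))
```

The resulting port carries one MAC address and two fixed_ips, which is what makes per-device addressing workable.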
So if we are to define a VIP which needs an L3 address within a subnet
created on a tenant quantum network (making allocation_pools not something
you can decide on before you need them), we will need to attach it to a port
(hence a MAC address) in order for quantum to see the L3 address as
'allocated'. If it is not 'allocated', other tenant-generated devices (a
nova VM or otherwise) could have the same VIP L3 address 'allocated' to
them by quantum. As long as a fixed_ip is managed by quantum, it must be
'tied' to a port (thus a MAC address) to avoid it being assigned elsewhere.
I had not expected that.
Just to cover our bases... My question is: does assigning a fixed IP to a
port (MAC address) necessarily dictate anything for quantum L2 networking?
If so, what does that mean for fixed_ip mobility and HA? Did we just bind
L2 to L3 in quantum and make it a necessary update for all our HA
mechanisms? I hope not..

What happens when the LBaaS device's HA mechanism decides that a VIP L3
address is no longer 'bound' to the MAC address of the quantum port and
needs to move it to a different MAC address? Will that screw up anything at
L2 for quantum now that they are tied together in its data model?
If it helps, we can mandate the use of MAC masquerading (with either a
quantum-generated or LBaaS-generated MAC address.. doesn't matter) for the
quantum ports we create for LBaaS devices. That works. In an HA event,
frames from the port's MAC address can start coming from any device in the
LBaaS cluster. The assumption is that whichever LBaaS device takes over the
port MAC address and sends out GARPs for the fixed_ips, the associated
quantum L2 plugin devices will clean themselves up and will forward that
port's traffic to its new home. This is how physical L2 switching without
L3 bindings works today, but in my testing I am only talking to one OVS.. so
of course that works. I didn't see anything in quantum 'dictating' that L2 HA
behavior has to work, but I am counting on it. It's an assumption. If I
had not bound my L3 address to L2 MACs in quantum just to get some
addressing allocated, I would be feeling better about the assumption.
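For concreteness, the GARP announcement I'm counting on is just a 42-byte broadcast frame. A minimal sketch of building one with only the stdlib (the MAC and IP are illustrative; a real failover agent would push this out the takeover interface via a raw socket, which needs root):

```python
import socket
import struct

def garp_frame(mac: str, ip: str) -> bytes:
    """Build a gratuitous ARP request announcing `ip` at `mac`.

    Illustration only. Target-hardware-address conventions vary
    (zeros vs. broadcast); broadcast is used here. The point is that
    the sender and target protocol addresses are both `ip`, which is
    what makes the ARP gratuitous and flushes stale L2 forwarding.
    """
    hw = bytes.fromhex(mac.replace(":", ""))
    addr = socket.inet_aton(ip)
    bcast = b"\xff" * 6
    # Ethernet header: dst=broadcast, src=our MAC, ethertype=ARP (0x0806)
    eth = bcast + hw + struct.pack("!H", 0x0806)
    # ARP header: htype=1 (ethernet), ptype=0x0800 (IPv4), hlen=6, plen=4, op=1 (request)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
    # sender hw/proto, target hw/proto -- target IP == sender IP for GARP
    arp += hw + addr + bcast + addr
    return eth + arp

frame = garp_frame("fa:16:3e:c2:82:74", "10.1.0.7")
```

Whether every quantum plugin's switching layer actually relearns the port on seeing this frame is exactly the open question above.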
NOTE: Quantum ports have a device_id which for nova VMs gets set to the
generated vNIC id (sorry, I used the term vif earlier..), but which is
neither required nor validated by quantum. I assume that for a quantum port
mobility solution, the device_id should be populated with a reference to the
LBaaS HA cluster in some way to aid in troubleshooting. I did not see
anything making the device_id significant.. it's not required on the port.
OR.. is L2 HA now screwed and..

When an LBaaS device HA event is triggered, if ethernet-like L2 behavior
cannot be assumed for every quantum plugin, we will need to have mobile
fixed_ips moving between ports (hence MAC addresses). This is really
making the LBaaS devices do L3 HA. The LBaaS device (or preferably the
LBaaS service) would have to move the fixed_ips to a different quantum port
(hence MAC address). You would have to update the first port removing the
fixed_ips, then update the second port adding the fixed_ips, and pray that
your fixed_ips don't get allocated in however much time it takes
between the two atomic quantum API actions. ('two atomic quantum actions'
.. that's funny... somewhere a physicist just rolled over in his grave..)

If this is true, we just lost switch-cleanup speeds and now have API
update speeds dictating the time it takes to fail over an HA networking
device. That might be okay.. it might not.
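As a sketch of that two-step dance (the IDs are illustrative and this is just the request bodies, not a real client), the release and claim would be two PUTs with a race window between them:

```python
# Hypothetical failover: hand the VIP fixed_ips from a failed port to
# the survivor via two port updates. IDs below are illustrative only.
vip_fixed_ips = [
    {"subnet_id": "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25",
     "ip_address": "10.1.0.7"},
]

# Step 1: PUT /v2.0/ports/<old_port_id> -- strip the VIP addresses ...
release_body = {"port": {"fixed_ips": []}}

# Step 2: PUT /v2.0/ports/<new_port_id> -- claim them on the survivor.
# The gap between the two calls is where another allocation could
# grab 10.1.0.7 out from under us.
claim_body = {"port": {"fixed_ips": vip_fixed_ips}}
```

The explicit ip_address in the claim body is the point: the service has to re-request the exact addresses it just released, and nothing holds them for it in between.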
Right now I am depending on L2 HA (port mobility) working. It's a big 'if'
without dynamic MAC learning and expiration of the L2 forwarding table on
GARP being spelled out in the quantum L2 specification. I had not expected the
requirement for L3 addresses to be bound to L2 ports just to use quantum
IPAM... Again.. I'm just worried about what the 'binding' of a fixed_ip to a
port 'means' to quantum L2 devices now....
Thoughts?
John
On Thu, Nov 29, 2012 at 3:29 PM, John Gruber <john.t.gruber at gmail.com> wrote:
> So I will have to go look at the code in the quantumclient... I just
> created a json structure in a POST on /ports with fixed_ips set to an array
> of 4 json objects, each with only a subnet_id attribute set to a valid
> subnet id. My Quantum service created only a single fixed IP address. So I
> assume I can get it working like the quantum client code you showed does..
> I'll get what you showed working so we'll have multiple fixed_ips allocated
> at port creation.
>
> Any suggestions on how we add a fixed IP with Quantum allocating the
> address after the port creation? IOW.. how do we get additional fixed IP
> allocation by Quantum on a PUT request to /ports? That's what will be
> needed if we are to add VIPs dynamically on a Quantum-managed subnet. I
> assume we will not want to 'pre-allocate' an allocation_pool managed by the
> LBaaS from a tenant's subnet just to attach a device and define a VIP which
> is local to a tenant network.
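A sketch of the update body being asked about (IDs illustrative; whether the server honors a bare subnet_id entry on update, rather than only on create, is exactly the open question): the PUT would re-send the current fixed_ips plus one entry with no ip_address, asking Quantum to allocate one more.

```python
# Hypothetical PUT /v2.0/ports/<port_id> body: keep the existing
# allocations and append a subnet_id-only entry so Quantum picks the
# new address itself.
SUBNET_ID = "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25"

current_fixed_ips = [
    {"subnet_id": SUBNET_ID, "ip_address": "10.1.0.7"},
]

update_body = {
    "port": {
        "fixed_ips": current_fixed_ips + [
            {"subnet_id": SUBNET_ID},  # no ip_address: let Quantum allocate
        ]
    }
}
```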
>
> As always.. I'm perfectly capable of missing something obvious <g>...
>
> John
>
>
>
> On Thu, Nov 29, 2012 at 2:44 PM, Dan Wendlandt <dan at nicira.com> wrote:
>
>>
>>
>> On Thu, Nov 29, 2012 at 12:11 PM, John Gruber <john.t.gruber at gmail.com> wrote:
>>
>>> Looking for advice...
>>>
>>> I am working with Quantum as an IPAM solution and preparing for the
>>> basic methods needed to get L3 addressing on the appropriate subnets for an
>>> LBaaS device cluster.
>>>
>>> Because every L3 address object inside an L2 failover domain will require
>>> the same MAC address, I can't just create separate ports for each L3
>>> object, as a MAC address can only be 'in-use' on one port at a time for
>>> Quantum. This makes sense, but leads me to a problem. There is neither 1)
>>> a way to specify the number of fixed IP addresses you want allocated when
>>> you create a port, nor 2) a way to update a port telling Quantum to
>>> allocate additional fixed IPs to that port.
>>>
>>> So I am left with tracking all the ports for a network, mapping them to
>>> subnets, doing IP address math to understand the start and end of the
>>> allocation pool, then trying to 'guess', with possibly repeated update
>>> calls to the port, what new set of fixed IPs I can put together, letting
>>> exceptions occur to tell me if I need to try again. This is a bad plan. I
>>> have not done this yet, but will if I have to. Basically reproducing a lot
>>> of Quantum IPAM business logic outside of Quantum.
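The guessing scheme described above can be sketched as follows (function name and sample values are mine, for illustration): walk the allocation pool, skip addresses already on ports, and hope the result is still free by the time the port update lands.

```python
import ipaddress

def guess_free_ips(pool_start, pool_end, allocated, count):
    """The 'bad plan': reproduce Quantum's IPAM math client-side.

    Walks the allocation pool and returns up to `count` addresses not
    currently on any port. Inherently racy -- another tenant can grab
    an address between this guess and our port update, so the caller
    still has to catch the conflict error and retry.
    """
    start = int(ipaddress.IPv4Address(pool_start))
    end = int(ipaddress.IPv4Address(pool_end))
    free = []
    for n in range(start, end + 1):
        ip = str(ipaddress.IPv4Address(n))
        if ip not in allocated:
            free.append(ip)
            if len(free) == count:
                break
    return free

# Allocated set would come from listing every port on the network and
# flattening its fixed_ips; two addresses taken here as an example.
print(guess_free_ips("10.1.0.2", "10.1.0.254", {"10.1.0.2", "10.1.0.7"}, 2))
# -> ['10.1.0.3', '10.1.0.4']
```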
>>>
>>> Is this why it was suggested just to access the DB directly?
>>>
>>> I'm really hoping that I'm wrong and there is a way in the Quantum API
>>> that I missed to simply get additional fixed IPs allocated for a port on
>>> the same subnet. It seems like an obvious requirement, even for VM hosts
>>> with aliased IP addresses on the same quantum port, so I had assumed it
>>> would be in the API syntax.
>>>
>>> I see bugs for multiple floating IPs:
>>>
>>> https://bugs.launchpad.net/quantum/+bug/1057844
>>>
>>> and advice on adding multiple fixed IPs on different subnets:
>>>
>>> https://lists.launchpad.net/openstack/msg17634.html
>>>
>>> But I am missing how to create multiple fixed_ips on the same subnet for
>>> the same port. Preferably via calls that update a port and have Quantum
>>> allocate additional fixed_ips without the client having to already know
>>> which IPs it can have.
>>>
>>
>> From what I'm reading, it would be exactly like the example you linked to
>> above, but specifying the same subnet-id twice if you want both of the
>> fixed IPs to be from the same subnet.
>>
>> nicira@com-dev:~/devstack$ quantum subnet-list -c id -c network_id
>> +--------------------------------------+--------------------------------------+
>> | id                                   | network_id                           |
>> +--------------------------------------+--------------------------------------+
>> | 2ff33f9d-0468-4d60-97c3-3a41e1ae1d25 | a1d4ed77-122a-42d6-97eb-0e8394005374 |
>> +--------------------------------------+--------------------------------------+
>>
>> nicira@com-dev:~/devstack$ quantum port-create \
>>     --fixed-ip subnet_id=2ff33f9d-0468-4d60-97c3-3a41e1ae1d25 \
>>     --fixed-ip subnet_id=2ff33f9d-0468-4d60-97c3-3a41e1ae1d25 \
>>     a1d4ed77-122a-42d6-97eb-0e8394005374
>> Created a new port:
>> +----------------+---------------------------------------------------------------------------------+
>> | Field          | Value                                                                           |
>> +----------------+---------------------------------------------------------------------------------+
>> | admin_state_up | True                                                                            |
>> | device_id      |                                                                                 |
>> | device_owner   |                                                                                 |
>> | fixed_ips      | {"subnet_id": "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25", "ip_address": "10.1.0.7"} |
>> |                | {"subnet_id": "2ff33f9d-0468-4d60-97c3-3a41e1ae1d25", "ip_address": "10.1.0.8"} |
>> | id             | 20ae47bc-8708-4c65-b761-67d4c8672964                                            |
>> | mac_address    | fa:16:3e:c2:82:74                                                               |
>> | name           |                                                                                 |
>> | network_id     | a1d4ed77-122a-42d6-97eb-0e8394005374                                            |
>> | status         | ACTIVE                                                                          |
>> | tenant_id      | 80a03bf4d7a04839a2ff149357733260                                                |
>> +----------------+---------------------------------------------------------------------------------+
>>
>> Is this sufficient, or am I misunderstanding what you're asking?
>>
>> Dan
>>
>>
>>
>>>
>>> I even see recent messages where the libvirt driver only supports 1 IP
>>> per vif right now:
>>>
>>> http://www.gossamer-threads.com/lists/openstack/dev/20264
>>>
>>> That doesn't help the LBaaS device.
>>>
>>> I didn't find anything obvious showing how to allocate multiple
>>> fixed_ips on the same port on the same subnet. Forgetting HA even for a
>>> minute, won't we need to allocate a fixed_ip for each VIP on a
>>> Quantum-managed subnet? We might be doing this a lot, no?
>>>
>>> Am I missing something obvious?
>>>
>>> John Gruber
>>>
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> Dan Wendlandt
>> Nicira, Inc: www.nicira.com
>> twitter: danwendlandt
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>>
>>
>>
>