Floating IPs for routed networks

Rodolfo Alonso Hernandez ralonsoh at redhat.com
Wed Jul 15 14:09:53 UTC 2020


Hi Thomas:

If I'm not wrong, the goal of this filtering is to exclude all subnets
with service_type='network:routed'. Maybe you can try implementing a
simpler query:
SELECT subnets.segment_id AS subnets_segment_id
FROM subnets
WHERE subnets.network_id = %(network_id_1)s AND NOT (EXISTS (SELECT *
FROM subnet_service_types
WHERE subnets.id = subnet_service_types.subnet_id AND
subnet_service_types.service_type = %(service_type_1)s))

That would translate to Python (SQLAlchemy) as:

from sqlalchemy import and_, exists

# service_type_model is the subnet service type db model
query = test_db.context.session.query(subnet_obj.Subnet.db_model.segment_id)
query = query.filter(subnet_obj.Subnet.db_model.network_id == network_id)
if filtered_service_type:
    # Exclude any subnet that has a service_type row matching the filter.
    query = query.filter(~exists().where(and_(
        subnet_obj.Subnet.db_model.id == service_type_model.subnet_id,
        service_type_model.service_type == filtered_service_type)))
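
As a quick sanity check (just a sketch, assuming the query object built
above), you can ask SQLAlchemy to render the SQL it will emit, with the
bind parameters inlined, and compare it with the statement above:

from sqlalchemy.dialects import mysql

# Compile the ORM query to a literal SQL string, for inspection only.
print(query.statement.compile(dialect=mysql.dialect(),
                              compile_kwargs={"literal_binds": True}))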

Can you provide UTs or a way to reproduce the problem you are experiencing?

Regards.


On Wed, Jul 15, 2020 at 1:27 PM Thomas Goirand <zigo at debian.org> wrote:

> Sending the message again with the correct From, as I'm not subscribed
> to the list with the other mailbox.
>
> On 7/15/20 2:13 PM, Thomas Goirand wrote:
> > Hi Ryan,
> >
> > If you don't mind, I'm adding the openstack-discuss list to the loop, as
> > this topic may be of interest to others.
> >
> > For mailing list readers, I'm trying to implement this:
> > https://review.opendev.org/#/c/669395/
> > but I'm having some difficulties.
> >
> > I did a bit of investigation with some added LOG.info() in the code.
> >
> > When doing:
> >
> >> openstack subnet create vm-fip \
> >>         --subnet-range 10.66.20.0/24 \
> >>         --service-type 'network:routed' \
> >>         --service-type 'network:floatingip' \
> >>         --network multisegment1
> >
> > Here's where neutron-api crashes, in db/ipam_backend_mixin.py:
> >
> >     def _validate_segment(self, context, network_id, segment_id,
> >                           action=None, old_segment_id=None):
> >         # TODO(tidwellr) Create and use a constant for the service type
> >         segments = subnet_obj.Subnet.get_subnet_segment_ids(
> >             context, network_id, filtered_service_type='network:routed')
> >
> >         associated_segments = set(segments)
> >         if None in associated_segments and len(associated_segments) > 1:
> >             raise segment_exc.SubnetsNotAllAssociatedWithSegments(
> >                 network_id=network_id)
> >
> > SubnetsNotAllAssociatedWithSegments() is raised, as you may have already
> > guessed. Here are the values...
> >
> > associated_segments is a set containing 3 values: 2 of them are the IDs of
> > the segments I added previously, the 3rd one is None. The test above then
> > matches. Where is that None value coming from? Is it the new subnet I'm
> > trying to add? Maybe the filtered_service_type='network:routed' argument
> > in the subnet_obj.Subnet.get_subnet_segment_ids() call isn't working as
> > expected?
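> >
> > To illustrate with the values I'm seeing (segment IDs shortened, names
> > hypothetical), the check in _validate_segment boils down to:
> >
> > segments = ['segment-1-uuid', 'segment-2-uuid', None]
> > associated_segments = set(segments)
> > # None is present and the set has more than one member, so
> > # SubnetsNotAllAssociatedWithSegments is raised:
> > None in associated_segments and len(associated_segments) > 1  # -> True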
> >
> > Printing the SQL query that is checked shows:
> >
> > SELECT subnets.segment_id AS subnets_segment_id FROM subnets
> > WHERE subnets.network_id = %(network_id_1)s AND subnets.id NOT IN
> > (SELECT subnet_service_types.subnet_id AS subnet_service_types_subnet_id
> > FROM subnet_service_types
> > WHERE subnets.network_id = %(network_id_2)s AND
> > subnet_service_types.subnet_id = subnets.id AND
> > subnet_service_types.service_type = %(service_type_1)s)
> >
> > though when running this by hand:
> >
> > SELECT subnets.segment_id AS subnets_segment_id FROM subnets
> >
> > the db has only 2 subnets, so it looks like the floating-ip subnet got
> > added before the check, and is then removed when the above test fails.
> >
> > So I just removed the raise, and could add the subnet I wanted, but
> > that's obviously not a long term solution.
> >
> > Your thoughts?
> >
> > Another problem I'm having is that neutron-bgp-dragent is not
> > receiving (or processing) the messages from neutron-rpc-server. I've
> > enabled DEBUG mode for oslo_messaging, and found out that when dr-agent
> > starts and prints "Agent has just been revived. Scheduling full sync",
> > it does send a message to neutron-rpc-server, which gets a reply, but it
> > doesn't look like dr-agent processes the return message in its reply
> > queue, and it then prints in the logs: "Timeout in RPC method
> > get_bgp_speakers. Waiting for 17 seconds before next attempt. If the
> > server is not down, consider increasing the rpc_response_timeout option
> > as Neutron server(s) may be overloaded and unable to respond quickly
> > enough.: oslo_messaging.exceptions.MessagingTimeout: Timed out waiting
> > for a reply to message ID c1b401c9e10d481bb5e071f2c048e480". What is
> > weird is that a few times (rarely), it worked and the agent got the
> > reply.
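> >
> > For reference, the option mentioned in that log line lives in
> > neutron.conf (the value below is just an example):
> >
> > [DEFAULT]
> > rpc_response_timeout = 120
> >
> > but since the call occasionally does succeed, this looks less like a
> > plain timeout and more like the reply not being consumed from the queue.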
> >
> > What should I do to investigate further?
> >
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
>
>
>