[queens][neutron] migrating from iptables_hybrid to openvswitch

James Denton james.denton at rackspace.com
Thu Mar 12 16:18:21 UTC 2020


Hi Ignazio,

> but the node I migrated to continues to create the qbr bridge even though it is configured with the openvswitch firewall.

I assume the neutron-openvswitch-agent has been restarted since making the firewall_driver change? What happens if you create a new VM on that compute node?
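
For reference, the change I'm referring to is something like this on the compute node (file path assumed for a typical CentOS/RDO layout; adjust for your deployment):

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini
    [securitygroup]
    firewall_driver = openvswitch

    # restart the agent so it picks up the change
    systemctl restart neutron-openvswitch-agent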

> The difference is that if I do not update the database I can ping the migrated VM, whereas now I cannot

Can you look at br-int and see if your tap interface is connected without a vlan tag? Or is the tap still connected to the qbr bridge? If the latter, were any iptables rules created?
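
A few quick checks, assuming the device names follow the usual convention of a 3-letter prefix plus the first 11 characters of the port UUID (using the same example port as in my earlier message):

    # is the tap plugged directly into br-int, and does it carry a local VLAN tag?
    ovs-vsctl list-ports br-int | grep tap3d88982a-6b
    ovs-vsctl get Port tap3d88982a-6b tag

    # or is it still attached to the qbr linux bridge?
    brctl show qbr3d88982a-6b

    # any leftover hybrid-plug iptables rules?
    iptables-save | grep 3d88982a-6b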

Unfortunately, I don’t have the ability to test this w/ live migration.

James

From: Ignazio Cassano <ignaziocassano at gmail.com>
Date: Thursday, March 12, 2020 at 11:50 AM
To: James Denton <james.denton at rackspace.com>
Cc: openstack-discuss <openstack-discuss at lists.openstack.org>
Subject: Re: [queens][neutron] migrating from iptables_hybrid to openvswitch


Hello James, I made the DB update before live migrating the virtual machine, but the node I migrated to continues to create the qbr bridge even though it is configured with the openvswitch firewall.
The difference is that if I do not update the database I can ping the migrated VM, whereas now I cannot
Ignazio

On Thu, 12 Mar 2020 at 13:30, James Denton <james.denton at rackspace.com> wrote:
Hi Ignazio,

I tested a process that converted iptables_hybrid to openvswitch in-place, but not without a hard reboot of the VM and some massaging of the existing bridges/veths. Since you are live-migrating, though, you might be able to get around that.

Regardless, to make this work, I had to update the port’s vif_details in the Neutron DB and set ‘ovs_hybrid_plug’ to false. Something like this:

> use neutron;
> update ml2_port_bindings set vif_details='{"port_filter": true, "bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": false}' where port_id='3d88982a-6b39-4f7e-8772-69367c442939' limit 1;

So, perhaps making that change prior to moving the VM back to the other compute node will do the trick.
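
If it helps, you can also verify what the binding looks like before and after the update (column names as of Queens; double-check against your schema):

    select port_id, host, vif_type, vif_details
    from ml2_port_bindings
    where port_id='3d88982a-6b39-4f7e-8772-69367c442939';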

Good luck!

James

From: Ignazio Cassano <ignaziocassano at gmail.com>
Date: Thursday, March 12, 2020 at 6:41 AM
To: openstack-discuss <openstack-discuss at lists.openstack.org>
Subject: [queens][neutron] migrating from iptables_hybrid to openvswitch


Hello All, I am facing some problems migrating from the iptables_hybrid firewall to the openvswitch firewall on CentOS 7 Queens.
I am doing this because I want to enable security group logging, which requires the openvswitch firewall.
I would like to migrate without restarting my instances.
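
For reference, the logging setup I am aiming for is roughly the following (section and option names per the Queens logging documentation; exact values are just examples from my environment):

    # neutron.conf on the controller
    [DEFAULT]
    service_plugins = router,log

    # openvswitch_agent.ini on the compute nodes
    [securitygroup]
    firewall_driver = openvswitch
    [agent]
    extensions = log
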
I started by moving all instances off compute node 1.
Then I configured the openvswitch firewall on compute node 1.
Instances migrated from compute node 2 to compute node 1 without problems.
Once compute node 2 was empty, I switched it to openvswitch as well.
But now instances do not migrate from node 1 to node 2, because the migration expects the qbr bridge to be present on node 2.

This happened because migrating an instance from node 2 (iptables_hybrid) to compute node 1 (openvswitch) does not put the tap directly under br-int as required by the openvswitch firewall; the qbr bridge is still present on compute node 1.
Once I enabled openvswitch on compute node 2, migration from compute node 1 fails because it expects the qbr bridge on compute node 2.
So I think I should move the tap interfaces from qbr to br-int on compute node 1 on the fly before migrating to compute node 2, but that is a huge amount of work across many instances.
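
What I have in mind per port is something like the following, as an untested sketch only (device names assumed to follow the usual tap/qbr/qvo + first-11-characters-of-the-port-UUID convention; the agent would still have to reprogram the firewall flows afterwards, so this is just a starting point):

    PORT_ID=3d88982a-6b39-4f7e-8772-69367c442939   # example port UUID
    DEV=${PORT_ID:0:11}

    # remember the local VLAN tag currently assigned to the qvo side on br-int
    TAG=$(ovs-vsctl get Port qvo$DEV tag)

    # detach the tap from the qbr bridge and drop the veth pair from br-int
    brctl delif qbr$DEV tap$DEV
    ovs-vsctl del-port br-int qvo$DEV

    # plug the tap straight into br-int with the same tag and the iface-id
    # the agent uses to recognise the port (other external-ids, such as
    # attached-mac, may also be needed)
    ovs-vsctl add-port br-int tap$DEV tag=$TAG \
      -- set Interface tap$DEV external-ids:iface-id=$PORT_ID \
         external-ids:iface-status=active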

Is there any workaround, please?

Ignazio

