<div dir="auto">Thanks Slawek, I am going to check the Nova tables as well.<div dir="auto">Ignazio</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, 12 Mar 2020 at 22:22, Slawek Kaplonski <<a href="mailto:skaplons@redhat.com">skaplons@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
IIRC, if You want to manually change Your database to force Nova to stop using the hybrid connection and to no longer require the qbr bridge, You may also need to update one of the tables in Nova’s DB. It’s called “instance_info_network_cache” or something similar.<br>
But TBH I’m not sure whether live migration will work after that, as I’m not sure whether the instance’s libvirt.xml file is copied from the src to the dest node during live migration.<br>
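Slawek hedges on the exact table name; in Queens the Nova table that caches this data is likely `instance_info_caches`, with the VIF JSON held in its `network_info` column — verify against your own schema before editing anything. A sketch of inspecting it, with a placeholder instance UUID:

```sql
-- Assumption: table and column names as in a typical Queens nova DB; verify first.
USE nova;
-- Inspect the cached VIF details for one instance (placeholder UUID):
SELECT network_info FROM instance_info_caches
 WHERE instance_uuid = '<instance-uuid>'\G
-- The cached JSON should show "ovs_hybrid_plug": true for hybrid-plugged
-- ports; it would need to read false to match the change made on the
-- Neutron side.
```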
<br>
If You don’t need live migration, You can switch firewall_driver in the L2 agent’s config file and restart the agent. Even instances which have hybrid connectivity (so are plugged through the qbr bridge) will have SGs working the new way. It shouldn’t be a problem that those instances are plugged through the qbr bridge, as traffic finally ends up in br-int, where the SG rules will be applied. You will need to manually clean the iptables rules for such ports, as they will not be cleaned up automatically.<br>
New instances on such a host should work fine and will be plugged the “new way”, directly into br-int.<br>
The only problem with this approach is that You will not be able to do live migration for those old VMs.<br>
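For the manual iptables cleanup, the leftover per-port chains created by the hybrid driver can be located by port id. A sketch, assuming the usual `neutron-openvswi-` chain prefix and 10-character port-id suffix used by the OVS agent's iptables driver — check the `iptables-save` output on your own node before deleting anything:

```shell
# Hypothetical helper: print the per-port chain names the iptables_hybrid
# driver would have created for a given Neutron port UUID. The prefix and
# the 10-character suffix are assumptions; confirm with `iptables-save`.
port_chains() {
    local short_id="${1:0:10}"
    echo "neutron-openvswi-i${short_id}"   # ingress rules chain
    echo "neutron-openvswi-o${short_id}"   # egress rules chain
    echo "neutron-openvswi-s${short_id}"   # anti-spoofing chain
}

# Example with the port id from James's SQL snippet below:
port_chains 3d88982a-6b39-4f7e-8772-69367c442939
```

Each printed chain can then be checked with `iptables-save | grep <chain>` and, once confirmed stale, flushed (`iptables -F <chain>`) and deleted (`iptables -X <chain>`) by hand.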
<br>
If You want to do it properly, You should do “nova interface-detach” and then “nova interface-attach” for each of such instances. The new ports plugged into the instances will then be bound the new way and plugged directly into br-int. <br>
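A sketch of that per-instance cycle with the standard nova/openstack CLIs; `<instance-uuid>` is a placeholder, the instance loses connectivity between the detach and the attach, and whether re-attaching the same port preserves its fixed IP should be verified on a test instance first:

```shell
SERVER_ID='<instance-uuid>'   # placeholder
# Find the instance's port (assumes a single NIC):
PORT_ID=$(openstack port list --server "$SERVER_ID" -f value -c ID | head -1)
nova interface-detach "$SERVER_ID" "$PORT_ID"
# Wait for the detach to complete, then re-attach the same port:
nova interface-attach --port-id "$PORT_ID" "$SERVER_ID"
```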
<br>
> On 12 Mar 2020, at 19:09, Ignazio Cassano <<a href="mailto:ignaziocassano@gmail.com" target="_blank" rel="noreferrer">ignaziocassano@gmail.com</a>> wrote:<br>
> <br>
> James, I checked again with your method. During the live migration phase, the information in the Neutron DB is changed back automatically and returns to "system", "ovs_hybrid_plug": True} ......<br>
> This is because the instance migrated has got interface under qbr.<br>
> Ignazio<br>
> <br>
> On Thu, 12 Mar 2020 at 13:30, James Denton <<a href="mailto:james.denton@rackspace.com" target="_blank" rel="noreferrer">james.denton@rackspace.com</a>> wrote:<br>
> Hi Ignazio,<br>
> <br>
> <br>
> <br>
> I tested a process that converted iptables_hybrid to openvswitch in-place, but not without a hard reboot of the VM and some massaging of the existing bridges/veths. Since you are live-migrating, though, you might be able to get around that.<br>
> <br>
> <br>
> <br>
> Regardless, to make this work, I had to update the port’s vif_details in the Neutron DB and set ‘ovs_hybrid_plug’ to false. Something like this:<br>
> <br>
> <br>
> <br>
> > use neutron;<br>
> <br>
> > update ml2_port_bindings set vif_details='{"port_filter": true, "bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": false}' where port_id='3d88982a-6b39-4f7e-8772-69367c442939' limit 1;<br>
> <br>
> <br>
> <br>
> So, perhaps making that change prior to moving the VM back to the other compute node will do the trick.<br>
> <br>
> <br>
> <br>
> Good luck!<br>
> <br>
> <br>
> <br>
> James<br>
> <br>
> <br>
> <br>
> From: Ignazio Cassano <<a href="mailto:ignaziocassano@gmail.com" target="_blank" rel="noreferrer">ignaziocassano@gmail.com</a>><br>
> Date: Thursday, March 12, 2020 at 6:41 AM<br>
> To: openstack-discuss <<a href="mailto:openstack-discuss@lists.openstack.org" target="_blank" rel="noreferrer">openstack-discuss@lists.openstack.org</a>><br>
> Subject: [qeeens][neutron] migrating from iptables_hybrid to openvswitch<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
> Hello All, I am facing some problems migrating from the iptables_hybrid firewall to the openvswitch firewall on CentOS 7 Queens.<br>
> <br>
> I am doing this because I want to enable security group logging, which requires the openvswitch firewall.<br>
> <br>
> I would like to migrate without restarting my instances.<br>
> <br>
> I started by moving all instances off compute node 1.<br>
> <br>
> Then I configured the openvswitch firewall on compute node 1.<br>
> <br>
> Instances migrated from compute node 2 to compute node 1 without problems.<br>
> <br>
> Once compute node 2 was empty, I switched it to openvswitch as well.<br>
> <br>
> But now instances do not migrate from node 1 to node 2, because migration requires the presence of the qbr bridge on node 2.<br>
> <br>
> <br>
> <br>
> This happened because migrating instances from node 2 (iptables_hybrid) to compute node 1 (openvswitch) does not put the tap under br-int as required by the openvswitch firewall; the qbr bridge is still present on compute node 1.<br>
> <br>
> Once I enabled openvswitch on compute node 2, migration from compute node 1 fails because it expects qbr on compute node 2.<br>
> <br>
> So I think I should move the tap interfaces from qbr to br-int on the fly on compute node 1 before migrating to compute node 2, but that is a huge amount of work across a lot of instances.<br>
> <br>
> <br>
> <br>
> Any workaround, please ?<br>
> <br>
> <br>
> <br>
> Ignazio<br>
> <br>
<br>
— <br>
Slawek Kaplonski<br>
Senior software engineer<br>
Red Hat<br>
<br>
</blockquote></div>