[queens][neutron] migrating from iptables_hybrid to openvswitch
Hello All,
I am facing some problems migrating from the iptables_hybrid firewall to the openvswitch firewall on CentOS 7 Queens. I am doing this because I want to enable security group logging, which requires the openvswitch firewall. I would like to migrate without restarting my instances.

I started by moving all instances off compute node 1. Then I configured the openvswitch firewall on compute node 1, and instances migrated from compute node 2 to compute node 1 without problems. Once compute node 2 was empty, I migrated it to openvswitch as well. But now instances do not migrate from node 1 to node 2, because migration requires the presence of the qbr bridge on node 2.

This happened because migrating instances from node 2 (iptables_hybrid) to compute node 1 (openvswitch) does not put the tap under br-int as required by the openvswitch firewall; the qbr bridge is still present on compute node 1. Once I enabled openvswitch on compute node 2, migration from compute node 1 fails because it expects qbr on compute node 2.

So I think I should move the tap interfaces on the fly from qbr to br-int on compute node 1 before migrating to compute node 2, but that is a huge amount of work across a lot of instances. Any workaround, please?
Ignazio
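(For reference, switching a node to the native firewall is a small config change in the OVS agent. A minimal sketch, assuming a typical CentOS 7 Queens layout and the crudini helper; the config path is an assumption, verify it against your install:)

    # Assumed path for a CentOS 7 / Queens deployment -- verify before use.
    crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini \
        securitygroup firewall_driver openvswitch
    # restart the agent so the new driver takes effect
    systemctl restart neutron-openvswitch-agent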
Hi Ignazio,

I tested a process that converted iptables_hybrid to openvswitch in-place, but not without a hard reboot of the VM and some massaging of the existing bridges/veths. Since you are live-migrating, though, you might be able to get around that. Regardless, to make this work, I had to update the port’s vif_details in the Neutron DB and set ‘ovs_hybrid_plug’ to false. Something like this:
use neutron;
update ml2_port_bindings set vif_details='{"port_filter": true, "bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": false}' where port_id='3d88982a-6b39-4f7e-8772-69367c442939' limit 1;
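(To sanity-check before and after the update, a minimal query sketch, reusing the example port UUID from above:)

    # Inspect the current binding for the port before and after the change
    mysql neutron -e "SELECT port_id, vif_type, vif_details
        FROM ml2_port_bindings
        WHERE port_id='3d88982a-6b39-4f7e-8772-69367c442939';"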
So, perhaps making that change prior to moving the VM back to the other compute node will do the trick.

Good luck!
James
Hello James, I will try that. Many thanks.
Ignazio
Hello James, I made the DB update before live-migrating the virtual machine, but the node I migrated to still creates the qbr bridge, even though it is configured with the openvswitch firewall. The difference is that if I do not update the database I can ping the migrated VM, while now I cannot.
Ignazio
Hi Ignazio,
> but the node I migrated to still creates the qbr bridge, even though it is configured with the openvswitch firewall.
I assume the neutron-openvswitch-agent has been restarted since making the firewall_driver change? What happens if you create a new VM on that compute?
> The difference is that if I do not update the database I can ping the migrated VM, while now I cannot.
Can you look at br-int and see if your tap interface is connected without a VLAN tag? Or is the tap still connected to the qbr bridge? If the latter, were any iptables rules created?

Unfortunately, I don’t have the ability to test this w/ live migration.

James
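(A quick way to run those checks on the destination node; a sketch, with the tap name derived from the example port UUID earlier in the thread and therefore hypothetical:)

    TAP=tap3d88982a-6b                      # tap + first 11 chars of the port UUID
    ovs-vsctl port-to-br "$TAP"             # prints br-int if the tap is on the integration bridge
    ovs-vsctl list port "$TAP" | grep tag   # shows the local VLAN tag, if any
    brctl show | grep qbr                   # any surviving hybrid (qbr) bridges
    iptables-save | grep "$TAP"             # leftover hybrid-plug iptables rules for this tap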
Oh yes, I restarted the openvswitch agent to make the driver change take effect. I did not look into Open vSwitch directly; I ran virsh domiflist instancename and saw that the instance interface was under qbr (and this is the problem). On the destination node I also tried to detach the interface from qbr and attach it under br-int, with the method suggested at this link: https://docs.openstack.org/neutron/pike/contributor/internals/openvswitch_fi... But virsh does not allow changing the bridge on the fly.
Ignazio
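(For reference, the manual re-plug that document describes looks roughly like the sketch below. This is an untested assumption on my part: the interface names are derived from the example port UUID, the MAC and VM UUID are placeholders, and the VM loses connectivity until the agent installs flows for the port:)

    PORT_ID=3d88982a-6b39-4f7e-8772-69367c442939   # example UUID from earlier in the thread
    TAP=tap${PORT_ID:0:11}                         # tap3d88982a-6b
    MAC=fa:16:3e:00:00:01                          # placeholder -- use the port's real MAC
    VM_UUID=00000000-0000-0000-0000-000000000000   # placeholder -- the instance UUID

    brctl delif "qbr${PORT_ID:0:11}" "$TAP"        # detach the tap from the hybrid bridge
    ovs-vsctl add-port br-int "$TAP" -- \
        set Interface "$TAP" \
        external-ids:iface-id="$PORT_ID" \
        external-ids:iface-status=active \
        external-ids:attached-mac="$MAC" \
        external-ids:vm-uuid="$VM_UUID"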
James, I checked again with your method. During the live-migration phase, the information in the Neutron DB is changed back automatically and ends up with "system", "ovs_hybrid_plug": True} ...... This is because the migrated instance has its interface under qbr.
Ignazio
Hi,

IIRC, if you want to manually change your database to force Nova not to use the hybrid connection anymore and not to require the qbr bridge, you may also need to update one of the tables in Nova’s DB. It’s called “instance_info_network_cache” or something similar. But TBH I’m not sure whether live migration will work then, as I’m not sure whether the instance’s libvirt.xml file is copied from the source to the destination node during live migration.

If you don’t need live migration, you can switch firewall_driver in the L2 agent’s config file and restart it. Even instances which have hybrid connectivity (so are plugged through the qbr bridge) will have security groups working the new way. It shouldn’t be a problem that those instances are plugged through the qbr bridge, as the traffic finally ends up in br-int, where the SG rules will be applied. You will need to manually clean the iptables rules for such ports, as they will not be cleaned automatically. New instances on such a host should work fine and will be plugged the “new way”, directly into br-int. The only problem with this approach is that you will not be able to live-migrate those old VMs.

If you want to do it properly, you should do “nova interface-detach” and then “nova interface-attach” for each such instance, as sketched below. The new ports plugged into the instances will then be bound the new way and plugged directly into br-int.
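(A minimal sketch of that last step, assuming the legacy novaclient CLI; the server name is a placeholder and the port UUID is reused from the example earlier in the thread:)

    SERVER=myvm                                    # placeholder instance name or UUID
    PORT_ID=3d88982a-6b39-4f7e-8772-69367c442939   # placeholder port UUID

    nova interface-detach "$SERVER" "$PORT_ID"
    # If Nova created the port at boot, the detach may delete it; in that case
    # attach a fresh port with --net-id instead of re-attaching the old one.
    nova interface-attach --port-id "$PORT_ID" "$SERVER"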
—
Slawek Kaplonski
Senior software engineer
Red Hat
Thanks Slawek, I am going to check the Nova tables as well.
Ignazio
Hello Slawek, I tried nova interface-detach and interface-attach on a running VM after switching to the openvswitch driver, but the instance stops receiving packets, even though the interface is put under br-int as you said. The result is that virsh domiflist domainname lists two interfaces: one under qbr and one under br-int.
Ignazio
Hi All,
first of all I want to thank everyone for helping me. I tried all your suggestions, but without rebooting the instances it does not work. The following steps do the job well:

- evacuate node A and switch it to openvswitch
- migrate (not live) the instances from node B
- switch node B to openvswitch, but before activating the agent, clean its Open vSwitch entries, otherwise live migration from node A does not work
- and so on for all remaining nodes

Ignazio
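(A hedged sketch of what one iteration of that loop might look like; the commands, service names, and especially the exact "clean" step are my assumptions, not Ignazio's verified procedure, so adapt before use:)

    NODE=compute2                                   # placeholder hostname
    openstack compute service set --disable "$NODE" nova-compute   # stop scheduling to it
    # ... cold-migrate the instances off the node as described above, then:
    systemctl stop neutron-openvswitch-agent
    crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini \
        securitygroup firewall_driver openvswitch
    ovs-ofctl del-flows br-int    # assumed "clean" step: drop stale flows; the agent reinstalls them
    systemctl start neutron-openvswitch-agent
    openstack compute service set --enable "$NODE" nova-compute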
Thanks for the update! If you can work out the exact process for someone else to follow, I’m sure the docs team would appreciate your input!

James
How can I do that? Where should I write up the exact process?
Ignazio
I may be a little outdated here, but to the best of my knowledge there are two ways to migrate from iptables to openvswitch.

1) If you don't mind the intermediate Linux bridge and you care about logs, you can just change the config file on the compute node to start using the openvswitch firewall and restart the OVS agent. That should trigger a mechanism that deletes the iptables rules and starts using OpenFlow rules. It will leave the intermediate bridge there, but apart from the extra hop in the networking stack, it doesn't matter.

2) With the multiple port binding feature, what you described above should be working. I know Miguel spent some time working on that, so perhaps he has more information about which release it should be functional in; I think it was Queens. Not sure if any Nova work was required to make it work.

Hope that helps.
Kuba
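(If you take option 1, one way to confirm the switchover; a sketch, where the OpenFlow table numbers follow the native OVS firewall design and are my assumption:)

    ovs-ofctl dump-flows br-int table=72 | head    # egress SG rules table (assumed numbering)
    ovs-ofctl dump-flows br-int table=82 | head    # ingress SG rules table (assumed numbering)
    iptables-save | grep neutron-openvswi          # hybrid-driver chains should be gone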
Hi Jakub, migrating a VM from a node with iptables_hybrid to a node switched to openvswitch works fine. The problem is that this migration creates the qbr on the node switched to openvswitch. But when I switch another compute node to openvswitch and I try to live-migrate the same VM (openvswitch to openvswitch), it does not work because of the qbr presence. I verified this in the Nova logs.
Ignazio
Hi Ignazio, I think the first step - migrating from iptables_hybrid to OVS - should not create the qbr on the target node. It sounds like a bug; IIRC the libvirt domain XML should not contain the qbr when migrating.
Hello Jakub, I will try again, but if there is a bug in Queens I do not think it will be fixed, because it is going out of support. Thanks
Ignazio
Hello Ignazio, Is your openstack environment using self-service networks? I have tried the native openvswitch firewall on the Queens version with provider networks, but it is not working well.
--
Sa Pham Dang
Skype: great_bn
Phone/Telegram: 0986.849.582
Hello Sa, I am using both self-service and provider networks. It works fine in both cases. The problem is the migration from iptables_hybrid to openvswitch without rebooting instances. Do you mean security groups do not work on provider networks? Ignazio
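For context, the moves being tested in this thread are ordinary live migrations; a sketch using the Queens-era clients, with hypothetical server and host names:

# with the nova client
nova live-migration <server-uuid> compute-node-2
# or with the unified openstack client of that era
openstack server migrate --live compute-node-2 <server-uuid>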
With a VM that uses a provider network directly, when I hard reboot that VM, I cannot reach it again. Can you test that in your environment?
Sure, Sa. I tested it 2 minutes ago and it works. I also changed security group rules to allow/deny ssh access, and it still works after a hard reboot. Ignazio
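The allow/deny test described above amounts to adding and then removing an ssh rule; a sketch with a hypothetical security group name (security groups have no explicit deny, so removing the allow rule falls back to the default drop):

# allow ssh, confirm the VM is reachable
openstack security group rule create --protocol tcp --dst-port 22 --ingress my-secgroup
# remove the rule again and confirm ssh is blocked
openstack security group rule delete <rule-uuid>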
One problem I hit a few days ago: I have an existing openstack with iptables_hybrid. I changed the firewall driver to openvswitch and restarted neutron-openvswitch-agent, and I couldn't reach the VM any more. I tried to reboot and hard reboot the VM, but it didn't work.
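When a VM goes unreachable after the driver switch, two quick checks are whether the agent is still reporting in and whether flows for the port were actually installed on br-int; a sketch with a hypothetical tap name, and the --host filter may depend on your client version:

# is the ovs agent on the compute host alive?
openstack network agent list --host compute-node-1
# is the tap attached to br-int, and do flows reference its ofport?
ovs-vsctl port-to-br tap1a2b3c4d-5e
ovs-ofctl dump-flows br-int | grep "in_port=$(ovs-vsctl get Interface tap1a2b3c4d-5e ofport)"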
Sa, have you modified only the compute node side? I also modified the controller node (neutron node) side, as reported in the documentation for enabling security group logs: https://docs.openstack.org/neutron/queens/admin/config-logging.html Ignazio
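For reference, the pieces that guide asks for look roughly like this; sketched from the Queens logging doc, with paths per the stock CentOS 7 layout and a hypothetical security group and log name:

# on the controller, in /etc/neutron/neutron.conf (append log to your existing plugins)
[DEFAULT]
service_plugins = router,log
# on every node running the ovs agent, in /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
extensions = log
[securitygroup]
firewall_driver = openvswitch
# then create a log resource for a security group
openstack network log create --resource-type security_group --resource my-secgroup --event ALL my-sg-log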
I just use openvswitch as the firewall driver; I did not use the log plugin. You said you configured security group rules to allow and deny. As far as I know, security groups cannot add a deny rule.
Sorry, I mean I added ssh access and then removed it. Openvswitch is a requirement for security group logs, so if you read the documentation, it suggests changing iptables_hybrid on the neutron node as well. A month ago I added a compute node with openvswitch to an openstack with iptables_hybrid on the neutron node: it did not work until I modified the neutron node. I do not know why.
Which configuration did you use? Or did you configure the log plugin on the neutron node?
I followed exactly the link I sent you.
I am sorry Sa, that link does not contain all the configuration you need. On the neutron node you must also enable, in /etc/neutron/plugins/ml2/openvswitch_agent.ini under [securitygroup]:

firewall_driver = openvswitch

Then restart the openvswitch agent on the neutron node. Ignazio
Hello Sa, have you solved it? Ignazio. On Sat, Mar 21, 2020 at 16:35, Sa Pham <saphi070@gmail.com> wrote:
Which configuration did you use? Or did you configure the log plugin on the neutron node?
On Sat, Mar 21, 2020 at 10:02 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Sorry, I mean I added SSH access and then removed it. Openvswitch is a requirement for security group logs. So, if you read the documentation, it suggests replacing iptables_hybrid on the neutron node as well.
A month ago I added a compute node with openvswitch to an OpenStack deployment with iptables_hybrid on the neutron node: it did not work until I modified the neutron node. I do not know why.
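For reference, the neutron-node change Ignazio describes is presumably the firewall driver switch in the OVS agent configuration. A minimal sketch, assuming a standard RDO/CentOS 7 layout (file path and service name may differ in other deployments):

  # /etc/neutron/plugins/ml2/openvswitch_agent.ini
  [securitygroup]
  firewall_driver = openvswitch

  # then restart the agent so it rewires the firewall rules:
  systemctl restart neutron-openvswitch-agent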
On Sat, Mar 21, 2020 at 15:57, Sa Pham <saphi070@gmail.com> wrote:
I just use openvswitch as the firewall driver. I did not use the log plugin.
You said you configured security group rules to allow and deny. As far as I know, security groups cannot have deny rules.
On Sat, Mar 21, 2020 at 9:53 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Sa, have you modified only the compute node side? I've also modified the controller node (neutron node) side, as reported in the documentation for enabling security group logs.
https://docs.openstack.org/neutron/queens/admin/config-logging.html
Ignazio
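For completeness, the Queens logging guide linked above boils down to roughly the following settings; the values here are illustrative, not prescriptive:

  # /etc/neutron/neutron.conf (controller): append 'log' to the existing plugin list
  [DEFAULT]
  service_plugins = router,log

  # /etc/neutron/plugins/ml2/openvswitch_agent.ini (every node running the OVS agent)
  [agent]
  extensions = log

  [network_log]
  rate_limit = 100
  burst_limit = 25
  # local_output_log_base = /var/log/neutron/securitygroups.log  # optional; defaults to syslog

A log object is then created with something like 'openstack network log create --resource-type security_group --event ALL sg-log' (sg-log being an arbitrary name).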
On Sat, Mar 21, 2020 at 15:49, Sa Pham <saphi070@gmail.com> wrote:
One problem I hit a few days ago:
I have an existing OpenStack deployment with iptables_hybrid. I changed the firewall driver to openvswitch and then restarted neutron-openvswitch-agent. I couldn't reach the VM any more. I tried to reboot or hard reboot the VM, but it didn't work.
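A quick way to tell what state such a VM's port ended up in (the tap name and the dead-VLAN convention are assumptions based on standard ML2/OVS agent behaviour):

  # is the tap plugged directly into br-int (openvswitch firewall)...
  ovs-vsctl list-ports br-int | grep tap
  # ...or still attached to a qbrXXX Linux bridge (iptables_hybrid leftover)?
  brctl show | grep qbr
  # a VLAN tag of 4095 means the agent considers the OVS port dead/unbound:
  ovs-vsctl get Port tapXXXXXXXX-XX tag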
On Sat, Mar 21, 2020 at 9:41 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
[snip]
Hello Ignazio, I haven't tried it yet. I will test this case this week. On Sun, Mar 22, 2020 at 11:46 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
[snip]
Hello Jakub, to be honest I did not understand what the multiple-port-binding feature does. I guess it can help me specify how ports are created on the destination host, for example during live migration, and I wonder if this feature can help me migrate from iptables_hybrid to openvswitch without restarting instances. Could you provide an example of setting multiple port bindings, please? Ignazio. On Thu, Mar 12, 2020 at 23:15, Jakub Libosvar <jlibosva@redhat.com> wrote:
On 12/03/2020 11:38, Ignazio Cassano wrote: [snip]
I may be a little outdated here, but to the best of my knowledge there are two ways to migrate from iptables to openvswitch.
1) If you don't mind the intermediate Linux bridge and you care about logs, you can just change the config file on the compute node to start using the openvswitch firewall and restart the OVS agent. That should trigger a mechanism that deletes the iptables rules and starts using OpenFlow rules. It will leave the intermediate bridge in place, but apart from the extra hop in the networking stack, it doesn't matter.
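Two hedged sanity checks for the conversion described in 1); the tap name is a placeholder, and the flow table number follows the openvswitch firewall driver's layout around Queens, which may differ in other releases:

  # iptables rules for the instance's tap should be gone after the agent restart:
  iptables-save | grep tapXXXXXXXX-XX
  # ...and security-group logic should now show up as OpenFlow rules on br-int,
  # e.g. the ingress rules table used by the OVS firewall driver:
  ovs-ofctl dump-flows br-int table=82 | head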
2) With the multiple-port-binding feature, what you described above should work. I know Miguel spent some time working on that, so perhaps he has more information about which release it became functional in; I think it was Queens. Not sure if any Nova work was required to make it work.
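To check whether a given port's binding has actually switched to pure OVS plugging, something like the following should work (column names per python-openstackclient; after a successful rebind, ovs_hybrid_plug should read false):

  openstack port show <port-id> -c binding_vif_type -c binding_vif_details
  # expected for the pure OVS firewall (illustrative output):
  #   binding_vif_type    | ovs
  #   binding_vif_details | {"bridge_name": "br-int", "ovs_hybrid_plug": false, ...}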
Hope that helps. Kuba
Hello Jakub, in my previous email I asked about multiple port binding, but I forgot to discuss point 1 of your email (qbr). I do not mind the intermediate bridge, but the following is what happens: A) After evacuating node 1, changing its openvswitch agent configuration, and restarting it, I can migrate vm1 from node 2; the intermediate qbr bridge is created for vm1 on node 1. B) After evacuating node 2, changing its openvswitch agent configuration, and restarting it, live migrating vm1 from node 1 to node 2 does not work because no qbr is created on node 2 (this is reported in the nova logs). Probably live migration without restarting instances works from iptables_hybrid to openvswitch but does not work from openvswitch to openvswitch. If the first migration is not live, the qbr is not created and everything works fine. Regards, Ignazio. On Thu, Mar 12, 2020 at 23:15, Jakub Libosvar <jlibosva@redhat.com> wrote:
[snip]
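If one did want to move a tap from its qbr bridge to br-int by hand, as considered earlier in the thread, a rough sketch follows. This is disruptive (a brief dataplane outage per instance) and assumption-laden: device names are placeholders, and the external-ids keys mirror what os-vif normally sets:

  TAP=tapXXXXXXXX-XX            # placeholder tap device name
  PORT_ID=<neutron-port-uuid>   # placeholder
  MAC=<instance-mac>            # placeholder
  # the hybrid plug's qvoXXX veth on br-int carries the same iface-id, so drop it first:
  ovs-vsctl del-port br-int qvoXXXXXXXX-XX
  # detach the tap from the qbr Linux bridge...
  ip link set "$TAP" nomaster
  # ...and plug it straight into br-int with the metadata the agent expects:
  ovs-vsctl -- add-port br-int "$TAP" \
    -- set Interface "$TAP" external-ids:iface-id="$PORT_ID" \
       external-ids:iface-status=active external-ids:attached-mac="$MAC"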
participants (5)
- Ignazio Cassano
- Jakub Libosvar
- James Denton
- Sa Pham
- Slawek Kaplonski