Which configuration did you use? And did you configure the log plugin on the neutron node?

On Sat, Mar 21, 2020 at 10:02 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Sorry, I meant I added SSH access and then removed it.
The openvswitch firewall driver is a requirement for security group logs.
So, if you read the documentation, it suggests modifying iptables_hybrid on the neutron node as well.

A month ago I added a compute node with openvswitch to an OpenStack deployment that had iptables_hybrid on the neutron node: it did not work until I modified the neutron node as well. I do not know why.
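
For reference, a minimal sketch of the settings the security group logging documentation describes (file paths and the existing service_plugins list are assumptions and vary per deployment):

    # /etc/neutron/neutron.conf on the neutron/controller node
    [DEFAULT]
    # keep your existing plugin list and append "log"
    service_plugins = router,log

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini on each compute node
    [securitygroup]
    firewall_driver = openvswitch

    [agent]
    extensions = log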



On Sat, Mar 21, 2020 at 3:57 PM Sa Pham <saphi070@gmail.com> wrote:
I just use openvswitch as the firewall driver. I did not use the log plugin.

You said you configured security group rules to allow and deny. As far as I know, security groups cannot contain deny rules.
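
For what it's worth, ingress traffic is default-deny and rules only ever add "allow" entries, so "denying" SSH again just means deleting the allow rule. A minimal sketch with the openstack CLI (the group name and rule ID are placeholders):

    # allow SSH into the group (rules can only allow, never deny)
    openstack security group rule create --ingress --protocol tcp --dst-port 22 mygroup

    # "deny" SSH again by removing that rule
    openstack security group rule delete <rule-id>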

On Sat, Mar 21, 2020 at 9:53 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Sa, have you modified only the compute node side?
I've also modified the controller node (neutron node) side, as reported in the documentation for enabling security group logs.


Ignazio



On Sat, Mar 21, 2020 at 3:49 PM Sa Pham <saphi070@gmail.com> wrote:
One problem I ran into a few days ago:

I have an existing OpenStack deployment with iptables_hybrid. I changed the firewall driver to openvswitch and then restarted neutron-openvswitch-agent.
I couldn't reach the VM any more. I tried to reboot and hard reboot the VM, but it didn't work.
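
If it helps to debug, a few commands (assuming brctl and ovs-vsctl are available on the compute node) can show whether the port actually moved off the hybrid plumbing:

    # restart the agent after changing firewall_driver
    systemctl restart neutron-openvswitch-agent

    # with iptables_hybrid the tap sits on a qbrXXXX linux bridge
    brctl show | grep qbr

    # with the native openvswitch firewall the tap should be directly on br-int
    ovs-vsctl list-ports br-int | grep tap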



On Sat, Mar 21, 2020 at 9:41 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Sure, Sa.
I tested it two minutes ago.
It works.
I also changed security group rules to allow/deny SSH access. It still works after a hard reboot.
Ignazio

On Sat, Mar 21, 2020 at 2:22 PM Sa Pham <saphi070@gmail.com> wrote:
With a VM that uses a provider network directly, when I hard reboot the VM I cannot reach it again. Can you test this in your environment?

On Sat, Mar 21, 2020 at 7:33 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Hello Sa, I am using both self-service and provider networks. It works fine in both cases. The problem is migrating from iptables_hybrid to openvswitch without rebooting instances.
Do you mean security groups do not work on provider networks?
Ignazio


On Sat, Mar 21, 2020 at 12:38 PM Sa Pham <saphi070@gmail.com> wrote:
Hello Ignazio,

Does your OpenStack environment use self-service networks?

I have tried the native openvswitch firewall on the OpenStack Queens version with provider networks, but it did not work well.



On Thu, Mar 19, 2020 at 11:12 PM Ignazio Cassano <ignaziocassano@gmail.com> wrote:
Hello Jakub,
I will try again, but if there is a bug in Queens I do not think it will be fixed, because Queens is going out of support.
Thanks
Ignazio

On Thu, Mar 19, 2020 at 1:54 PM Jakub Libosvar <jlibosva@redhat.com> wrote:
On 13/03/2020 08:24, Ignazio Cassano wrote:
> Hi Jakub, migrating a VM from a node with iptables_hybrid to a node
> switched to openvswitch works fine. The problem is that this migration
> creates the qbr on the node switched to openvswitch.
> But when I switch another compute node to openvswitch and try to live
> migrate the same VM (openvswitch to openvswitch), it does not work
> because of the qbr's presence.
> I verified this in the nova logs.
> Ignazio

Hi Ignazio,

I think the first step - migrating from iptables_hybrid to ovs - should
not create the qbr on the target node. It sounds like a bug; IIRC the
libvirt domxml should not have the qbr when migrating.
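
One way to check (the instance name below is just an example) is to look at how libvirt plugs the interface on the source node:

    # hybrid plugging shows <source bridge='qbrXXXXXXXX'/>;
    # native ovs plugging shows <source bridge='br-int'/> plus
    # <virtualport type='openvswitch'/>
    virsh dumpxml instance-0000000a | grep -A5 '<interface'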


>
> On Thu, Mar 12, 2020 at 11:15 PM Jakub Libosvar <jlibosva@redhat.com> wrote:
>
>> On 12/03/2020 11:38, Ignazio Cassano wrote:
>>> Hello all, I am facing some problems migrating from the iptables_hybrid
>>> firewall to the openvswitch firewall on CentOS 7 Queens.
>>> I am doing this because I want to enable security group logs, which
>>> require the openvswitch firewall.
>>> I would like to migrate without restarting my instances.
>>> I started by moving all instances off compute node 1.
>>> Then I configured the openvswitch firewall on compute node 1.
>>> Instances migrated from compute node 2 to compute node 1 without
>>> problems.
>>> Once compute node 2 was empty, I migrated it to openvswitch.
>>> But now instances do not migrate from node 1 to node 2, because the
>>> migration requires the presence of the qbr bridge on node 2.
>>>
>>> This happened because migrating instances from node 2 (iptables_hybrid)
>>> to compute node 1 (openvswitch) does not put the tap under br-int, as
>>> required by the openvswitch firewall; the qbr is still present on
>>> compute node 1.
>>> Once I enabled openvswitch on compute node 2, migration from compute
>>> node 1 fails because it expects the qbr on compute node 2.
>>> So I think I should move, on the fly, the tap interfaces from qbr to
>>> br-int on compute node 1 before migrating to compute node 2, but that
>>> is a huge amount of work on a lot of instances.
>>>
>>> Any workaround, please?
>>>
>>> Ignazio
>>
>> I may be a little outdated here, but to the best of my knowledge there
>> are two ways to migrate from iptables to openvswitch.
>>
>> 1) If you don't mind the intermediate linux bridge and you care about
>> logs, you can just change the config file on the compute node to start
>> using the openvswitch firewall and restart the ovs agent. That should
>> trigger a mechanism that deletes the iptables rules and starts using
>> openflow rules. It will leave the intermediate bridge there, but apart
>> from the extra hop in the networking stack, that doesn't matter.
>>
>> 2) With the multiple-port-binding feature, what you described above
>> should work. I know Miguel spent some time working on that, so perhaps
>> he has more information about which release it is functional in; I
>> think it was Queens. I am not sure whether any Nova work was required
>> to make it work.
>>
>> Hope that helps.
>> Kuba



--
Sa Pham Dang
Skype: great_bn
Phone/Telegram: 0986.849.582