[neutron] default(ish) firewall rules

Slawek Kaplonski skaplons at redhat.com
Thu Dec 3 15:33:50 UTC 2020


Hi,

On Thu, Dec 03, 2020 at 05:12:31PM +0300, Vladimir Prokofev wrote:
> > If You add a rule to a SG as an admin user, then regular users (owners
> > of the SG) will not be able to remove it.
> > But they will still be able to stop using this SG completely.
> 
> That's a neat trick, didn't know about it, thank you.
> 
> > What if You plugged those VMs only into private networks and used
> > Floating IPs for public connectivity? Would that work for You?
> 
> That is an excellent solution, seeing as almost every big public cloud
> provider does it, and it did come to my mind. It was also our initial
> cloud design a few years back.
> Unfortunately, we had issues with DDoS attacks back then that flooded a
> single IP address and completely overwhelmed the network node terminating
> that floating IP. This, in turn, led to multiple other projects losing
> connectivity for the duration of the attack.
> At the time we looked into other solutions, particularly one where the
> floating IP terminates on the compute node instead of the network node,
> but we were unable to implement it, and switched to a more direct approach
> with public IPs assigned directly to guests via a provider network.

There is the DVR (Distributed Virtual Router) solution in Neutron, which
distributes Floating IP traffic to the compute nodes.
And now we also have the OVN driver, which provides distributed routers by default :)
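
For reference, enabling DVR on an ML2/OVS deployment looks roughly like this
(a sketch, not a full deployment guide; file locations and the other agent
options depend on Your setup):

```ini
# neutron.conf on the server: new routers are distributed by default
[DEFAULT]
router_distributed = true

# l3_agent.ini on compute nodes
[DEFAULT]
agent_mode = dvr

# l3_agent.ini on network nodes (still needed for centralized SNAT of
# instances without Floating IPs)
[DEFAULT]
agent_mode = dvr_snat
```

An admin can also create a single distributed router explicitly with
`openstack router create --distributed router1`.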

> So this is the best practice, yes, but it would require rethinking and
> redesigning the whole cloud, which is not possible at the moment. So I'm
> looking for some simpler, quick-fix style solution.

I don't know what else I could propose to You. Sorry :/
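
Just to illustrate the admin-created SG rule trick mentioned earlier (the SG
name and CIDR below are only examples): the rule ends up owned by the admin's
project, so the SG owner cannot delete it:

```shell
# As admin, add an allow rule to the tenant's SG; the rule belongs to the
# admin project, so the regular user cannot remove it (example names/CIDR)
openstack security group rule create demo-sg \
    --protocol udp --dst-port 137:139 \
    --remote-ip 10.0.0.0/24 --ingress
```

But, as noted, the user can still stop using the SG entirely, e.g.
`openstack server remove security group vm1 demo-sg`.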

> 
> Thu, 3 Dec 2020 at 16:10, Slawek Kaplonski <skaplons at redhat.com>:
> 
> > Hi,
> >
> > On Thu, Dec 03, 2020 at 03:47:48PM +0300, Vladimir Prokofev wrote:
> > > Hello.
> > >
> > > I'm running a Queens private cloud with a few separate projects inside.
> > > Guests in those projects have 2 networks - public, which is a provider
> > > network with public IP addresses, and private, which is a VXLAN overlay
> > > network specific to the project.
> > >
> > > That's the setup; now here's the issue.
> > > They're mostly Windows guests, and they tend to have the Computer
> > > Browser service enabled on both public and private networks. This leads
> > > to situations where guests from one project can see guests in other
> > > projects over the public network via NetBIOS/SMB protocols, which is
> > > undesirable.
> > >
> > > I have two partial solutions in mind.
> > > First, create some default firewall rule, similar to the one that
> > > exists by default for the DHCP protocol and prohibits guests from
> > > acting as a DHCP server, but for the UDP 137-139 port range.
> > > But not only have I completely forgotten how to do this (I think I saw
> > > some documentation about it ~2 years ago), this would also block said
> > > protocol over private networks, which is not an ideal solution. I would
> > > still love it if someone could point me to proper documentation here.
> > > The second option is to add similar entries to security group rules.
> > > This would allow public/private interface differentiation by applying
> > > different security groups to different interfaces, but it introduces
> > > the possibility for the cloud operator to delete those entries (either
> > > by mistake or explicitly), which would lead to the protocol being
> > > allowed once again.
> >
> > If You add a rule to a SG as an admin user, then regular users (owners
> > of the SG) will not be able to remove it.
> > But they will still be able to stop using this SG completely.
> >
> > >
> > > Anyone has any idea of a better solution here?
> >
> > What if You plugged those VMs only into private networks and used
> > Floating IPs for public connectivity? Would that work for You?
> >
> >
> > --
> > Slawek Kaplonski
> > Principal Software Engineer
> > Red Hat
> >
> >

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat




More information about the openstack-discuss mailing list