[dev][neutron] Interest for a network-level SecurityGroup binding ?
Hello Neutron Team,

I'm part of a company that is currently migrating from homemade software managing the infrastructure of a virtualization-based product to an OpenStack-based infrastructure. As such, this platform includes both private and public cloud usages, which brings me to our current network problem.

Essentially, as administrators of the system, we want to expose a pre-configured network which provides access to some non-OpenStack-managed resources via some routing mechanisms. The various VMs on this network must not be able to see each other, as this network may be exposed to multiple customers/projects. Note that in this context, we're using the OVN driver for our network stack.

We were initially looking at external networks for this, but understood through our attempts and experimentations that:
- External networks are shared to all projects by default
- Default sharing of external networks can be disabled by removing the automatically injected RBAC entry for '*', allowing us to expose the network to only the projects we specify
- VMs on this network can communicate with each other
- Applying FlowControl does not solve this for VMs on the same hypervisor (as they communicate directly, and traffic does not seem to go through the host)
- Only security groups can prevent such communication, but since a security group is associated with a port, we cannot enforce it
- All flow control for the various networks connected to a VM is currently set up on a single bridge interface on the hypervisor

First question is simple: did we miss anything? Or is our understanding of the mechanisms on point?

In the second case, this leads me to the proposal mentioned in the subject: offer a Network/SecurityGroup binding mechanism that would automatically/implicitly include its rules in the port's rules. The idea is that this would allow an administrator (and project administrator?) to enforce specific rules via security groups attached to the network itself, effectively providing a category of network aimed at providing connectivity to a specific external service. Additionally, this creates a behavior where, unless the administrator allows it, no two VMs on this network are able to communicate with each other by default.

What do you think about this feature? Is there any major risk/flaw that I might be missing? Would you, as a community, welcome such an effort?

Kind regards,

--
David Pineau
Engineer @Shadow
IRC nick: joa
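[Editor's note: the RBAC adjustment described above might be sketched with the standard openstack CLI roughly as follows; the network name `ext-net`, the RBAC entry ID, and the project ID are placeholders, and the exact entry ID has to be looked up first.]

```shell
# Sketch only: restrict an external network to specific projects.
# Find the auto-created RBAC entry that shares the network with '*':
openstack network rbac list --type network --long

# Delete that wildcard entry so the network is no longer visible to everyone:
openstack network rbac delete <rbac-entry-id>

# Grant external access to one chosen project instead (repeat per project):
openstack network rbac create --type network \
    --action access_as_external \
    --target-project <project-id> \
    ext-net
```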
Hi, On Friday, 2 August 2024 at 12:14:40 CEST, David Pineau wrote:
Hello Neutron Team,
I'm part of a company that is currently migrating from homemade software managing the infrastructure of a virtualization-based product to an OpenStack-based infrastructure.
As such, this platform includes both private and public cloud usages, which brings me to our current network problem.
Essentially, as administrators of the system, we want to expose a pre-configured network which provides access to some non-OpenStack-managed resources via some routing mechanisms. The various VMs on this network must not be able to see each other, as this network may be exposed to multiple customers/projects.
Note that in this context, we're using the OVN driver for our network stack.
We were initially looking at external networks for this, but understood through our attempts and experimentations that:
- External networks are shared to all projects by default
- Default sharing of external networks can be disabled by removing the automatically injected RBAC entry for '*', allowing us to expose the network to only the projects we specify
- VMs on this network can communicate with each other
- Applying FlowControl does not solve this for VMs on the same hypervisor (as they communicate directly, and traffic does not seem to go through the host)
- Only security groups can prevent such communication, but since a security group is associated with a port, we cannot enforce it
- All flow control for the various networks connected to a VM is currently set up on a single bridge interface on the hypervisor
First question is simple: did we miss anything? Or is our understanding of the mechanisms on point?
In the second case, this leads me to the proposal mentioned in the subject: offer a Network/SecurityGroup binding mechanism that would automatically/implicitly include its rules in the port's rules.
The idea is that this would allow an administrator (and project administrator?) to enforce specific rules via security groups attached to the network itself, effectively providing a category of network aimed at providing connectivity to a specific external service. Additionally, this creates a behavior where, unless the administrator allows it, no two VMs on this network are able to communicate with each other by default.
The way security groups currently work is that you can specify what kind of traffic is allowed. You can't explicitly specify what is forbidden, so even if you had such an additional security group attached to the network, and through that effectively to all ports in that network, the user would still be able to add their own security group to the port and allow traffic which your "network SG" did not allow.
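[Editor's note: Sławek's point can be illustrated with a toy model. This is not Neutron code; the sets and function below are purely hypothetical. A port's effective policy is the union of its attached groups' allow rules, so an admin-provided group can only widen, never restrict, what other attached groups permit.]

```python
# Hypothetical minimal model of security group semantics: each group is a
# set of allowed (protocol, port) tuples. Attaching more groups to a port
# only ever widens the traffic that is allowed.
network_sg = {("tcp", 443)}   # admin's network-level SG: only HTTPS allowed
user_sg = {("tcp", 22)}       # tenant's own SG re-allowing SSH

def allowed(groups, proto, port):
    # Traffic passes if ANY attached group contains a matching allow rule.
    return any((proto, port) in group for group in groups)

# With only the network SG attached, SSH is blocked...
assert not allowed([network_sg], "tcp", 22)
# ...but a tenant attaching their own SG to the port re-opens it,
# defeating the intent of the "network SG".
assert allowed([network_sg, user_sg], "tcp", 22)
```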
What do you think about this feature? Is there any major risk/flaw that I might be missing? Would you, as a community, welcome such an effort?
I'm not saying I am against such an idea, but I think this will require a very detailed spec with consideration of various corner cases, and it may be really complicated to do given how Neutron currently implements SGs in general. I think you should propose an RFE: https://docs.openstack.org/neutron/latest/contributor/policies/blueprints.ht... and it will then be discussed in one of the neutron drivers team meetings. That should be the first step for you.
Kind regards,
-- David Pineau Engineer @Shadow IRC nick: joa
-- Slawek Kaplonski Principal Software Engineer Red Hat
On Fri, Aug 2, 2024 at 12:54 PM Sławek Kapłoński <skaplons@redhat.com> wrote:
On Friday, 2 August 2024 at 12:14:40 CEST, David Pineau wrote:
In the second case, this leads me to the proposal mentioned in the subject: offer a Network/SecurityGroup binding mechanism that would automatically/implicitly include its rules in the port's rules. The idea is that this would allow an administrator (and project administrator?) to enforce specific rules via security groups attached to the network itself, effectively providing a category of network aimed at providing connectivity to a specific external service. Additionally, this creates a behavior where, unless the administrator allows it, no two VMs on this network are able to communicate with each other by default.
The way security groups currently work is that you can specify what kind of traffic is allowed. You can't explicitly specify what is forbidden, so even if you had such an additional security group attached to the network, and through that effectively to all ports in that network, the user would still be able to add their own security group to the port and allow traffic which your "network SG" did not allow.
Whelp, in our experiments and in trying to formulate a solution, we fully overlooked that; thanks for pointing it out.
What do you think about this feature? Is there any major risk/flaw that I might be missing? Would you, as a community, welcome such an effort?
I'm not saying I am against such an idea, but I think this will require a very detailed spec with consideration of various corner cases, and it may be really complicated to do given how Neutron currently implements SGs in general.
I think you should propose an RFE: https://docs.openstack.org/neutron/latest/contributor/policies/blueprints.ht... and it will then be discussed in one of the neutron drivers team meetings. That should be the first step for you.
I wasn't sure I should do this without reaching out on the ML first. Thanks for pointing out the flaws in our current proposal. I'll try to revise it into something more fitting to the current Neutron design and implementation, and then I'll open the RFE to further the discussion.
On 2024-08-02 12:14:40 +0200 (+0200), David Pineau wrote: [...]
The various VMs on this network must not be able to see each other, as this network may be exposed to multiple customers/projects.
Note that in this context, we're using the OVN driver for our network stack.
We were initially looking at external networks for this, but understood through our attempts and experimentations that: [...]
- VMs on this network can communicate with each other
- Applying FlowControl does not solve this for VMs on the same hypervisor (as they communicate directly, and traffic does not seem to go through the host)
- Only security groups can prevent such communication, but since a security group is associated with a port, we cannot enforce it
[...]
As far as what's possible through Neutron, I'm not sure, but it sounds like a classic case for port isolation (i.e. "private VLAN" or what Cisco calls "protected ports"). Since Linux 4.18, the bridge module can set isolated mode on individual ports, like:

    bridge link set dev tap0 isolated on

Setting all the guest switchports for this bridge to isolated will prevent any crosstalk at layer 2, since they'll only be able to communicate with your non-isolated ("promiscuous") switchport out to the routed external network and/or where your dhcpd resides.

Port isolation is a fairly blunt instrument though, only applicable in certain situations like the one you've described, and often misused, confusing folks who don't understand its implications, so supporting that concept from Neutron's perspective might not be a great idea regardless. I also don't know whether this works with OVN, or whether it has a similar feature of its own.

--
Jeremy Stanley
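[Editor's note: a rough sketch of applying Jeremy's suggestion across a whole bridge. The bridge name `br-ext` and uplink port `eth0` are placeholders; this assumes Linux >= 4.18, the iproute2 `bridge` tool, and root privileges, and says nothing about OVN.]

```shell
# Sketch only: set every port on bridge br-ext to isolated mode,
# leaving the uplink non-isolated ("promiscuous") so guests can still
# reach the routed external network and the DHCP server.
UPLINK=eth0
for dev in /sys/class/net/br-ext/brif/*; do
    port=$(basename "$dev")
    [ "$port" = "$UPLINK" ] && continue
    bridge link set dev "$port" isolated on
done
```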
participants (3)
- David Pineau
- Jeremy Stanley
- Sławek Kapłoński