[kuryr] Using kuryr-kubernetes CNI without neutron agent(s)?

Michał Dulko mdulko at redhat.com
Mon Nov 8 09:40:02 UTC 2021


On Fri, 2021-11-05 at 22:54 +0000, Jason Anderson wrote:
> Hi Michał,
> 
> I continue to appreciate the information you are providing. I’ve been
> doing some more research into the landscape of systems and had a few
> follow-up questions. I’ve also left some clarifying remarks if you are
> interested.
> 
> I’m currently evaluating OVN; I haven’t used it before and there’s a
> bit of a learning curve ;) However, it seems like it may solve a good
> part of the problem by removing RabbitMQ and reducing the privileges
> of the edge host w.r.t. network config.
> 
> Now I’m looking at kuryr-kubernetes.
> 
> 1. What is the difference between kuryr and kuryr-kubernetes? I have
> used kuryr-libnetwork before, in conjunction with kuryr-server (which I
> think is provided via the main kuryr project?). I am using Kolla
> Ansible so was spared some of the details on installation. I understand
> kuryr-libnetwork is basically “kuryr for Docker” while kuryr-kubernetes
> is “kuryr for K8s”, but that leaves me confused about what exactly the
> kuryr repo is.

In openstack/kuryr we have the kuryr.lib module, which hosts a few
things shared by kuryr-libnetwork and kuryr-kubernetes. Nothing to
worry about really. ;)
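
For illustration, both consumers just import it as a regular Python
library (module paths from the openstack/kuryr repo; exact usage may
differ between releases):

    # kuryr-libnetwork and kuryr-kubernetes pull shared bits from
    # kuryr.lib rather than duplicating them:
    from kuryr.lib import config as kuryr_config   # common config options
    from kuryr.lib import constants as lib_const   # shared constants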

> 2. A current idea is to have several edge “compute” nodes that will run
> a lightweight k8s kubelet such as k3s. OVN will provide networking to
> the edge nodes, controlled from the central site. I would then place
> kuryr-k8s-controller on the central site and kuryr-cni-daemon on all
> the edge nodes. My question is: could users create their own Neutron
> networks (w/ their own Geneve segment) and launch pods connected on
> that network, and have those pods effectively be isolated from other
> pods in the topology? As in, can k8s be told that pod A should launch
> on network A’, and pod B on network B’? Or is there an assumption that
> from Neutron’s perspective all pods are always on a single Neutron
> network?

Ha, that might be a bit of a tough one. Basically you can easily set
Kuryr to create a separate subnet for each of the K8s namespaces, but
then you'd need to rely on NetworkPolicies to isolate traffic between
namespaces, which might not exactly fit your multitenant model.
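
For reference, the kind of per-namespace isolation I mean is a
default-deny NetworkPolicy like the one below (standard Kubernetes
API; Kuryr enforces these via Neutron security groups, and the
tenant-a namespace is just an example):

    # Default-deny ingress: only pods from the same namespace may
    # reach pods in tenant-a.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: isolate-namespace
      namespace: tenant-a
    spec:
      podSelector: {}          # applies to every pod in tenant-a
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}  # allow only pods from tenant-a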

The best way to implement whatever you need might be to write your own
custom subnet driver [1] that would choose the subnet based on, e.g.,
a pod or namespace annotation (see the sketch below). If there's a
clear use case behind it, I think we can include it in the upstream
code too.

[1] https://opendev.org/openstack/kuryr-kubernetes/src/branch/master/kuryr_kubernetes/controller/drivers
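
To give an idea, here's a minimal sketch of such a driver. The
PodSubnetsDriver interface and the neutron_defaults.pod_subnet
fallback are real, but the class name and the annotation key are made
up for the example:

    # Hypothetical annotation-based subnet driver -- a sketch, not
    # upstream code. Assumes a trusted admin sets the annotation.
    from kuryr_kubernetes import config
    from kuryr_kubernetes import utils
    from kuryr_kubernetes.controller.drivers import base


    class AnnotationPodSubnetDriver(base.PodSubnetsDriver):
        """Pick the pod's Neutron subnet from a pod annotation."""

        def get_subnets(self, pod, project_id):
            annotations = pod['metadata'].get('annotations', {})
            subnet_id = annotations.get('openstack.org/kuryr-subnet-id')
            if not subnet_id:
                # Fall back to the statically configured default.
                subnet_id = config.CONF.neutron_defaults.pod_subnet
            # utils.get_subnet() builds the os-vif network object the
            # rest of Kuryr expects for this subnet.
            return {subnet_id: utils.get_subnet(subnet_id)}

You'd then point the controller at it with the pod_subnets_driver
option in kuryr.conf.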

> Cheers, and thanks!
> /Jason
> 
> > On Oct 27, 2021, at 12:03 PM, Michał Dulko <mdulko at redhat.com> wrote:
> > 
> > Hm, so a mixed OpenStack-K8s edge setup, where edge sites are
> > Kubernetes deployments? We've taken a look at some edge use cases
> > with Kuryr, and one problem people see is that if an edge site
> > becomes disconnected from the main site, Kuryr will not allow
> > creation of new Pods and Services, as it needs connection to the
> > Neutron and Octavia APIs for that. If that's not a problem, have
> > you given any thought to running distributed compute nodes [1] as
> > edge sites and then Kubernetes on top of them? This architecture
> > should be doable with Kuryr (probably with minor changes).
> 
> Sort of! I work in research infrastructure and we are building an
> IoT/edge testbed for computer science researchers who wish to do
> research in edge computing. It’s a bit mad science-y. We are buying and
> configuring relatively high-powered edge devices such as Raspberry Pis
> and Jetson Nanos and making them available for experimentation at a
> variety of sites. Separately, the platform allows any owner of a
> supported device to have it managed by the testbed (i.e., they can use
> our interfaces to launch containers on it and connect it logically to
> other devices / resources in the cloud).
> 
> The distributed compute node architecture looks a bit too heavy for
> this highly dynamic use case, but thank you for sharing.
> 
> Anyway, one might ask why Neutron at all. I am hopeful we can get some
> interesting properties, such as network isolation and the ability to
> bridge traffic from containers across other layer 2 links like those
> provided by AL2S.
> 
> > > OVN may help if it can remove the need for RabbitMQ, which is
> > > probably the
> > > most difficult aspect to remove from OpenStack’s
> > > dependencies/assumptions,
> > > yet also one of the most pernicious from a security angle, as an
> > > untrusted
> > > worker node can easily corrupt the control plane.
> > 
> > It's just Kuryr that needs access to the credentials, so you should
> > possibly be able to isolate them, but I get the point: containers
> > are worse at isolation than VMs.
> 
> I’m less worried about the mechanism for isolation on the host and more
> the amount of privileged information the host must keep secure, and the
> impact of that information being compromised. Because our experimental
> target system involves container engines maintained externally to the
> core site, the risk of compromise on the edge is high. I am searching
> for an architecture that greatly limits the blast radius of such a
> compromise. Currently, if we use standard Neutron networking + Kuryr,
> we must give RabbitMQ and other credentials to the container engines
> on the edge, which papers such as
> http://seclab.cs.sunysb.edu/seclab/pubs/asiaccs16.pdf have documented
> as a trivial escalation path.
> 
> For this reason, narrowing the scope of what state the edge hosts can
> influence on the core site is paramount.
> 
> > 
> > > Re: admin creds, maybe it is possible to carefully craft a role
> > > that only works
> > > for some Neutron operations and put that on the worker nodes. I
> > > will explore.
> > 
> > I think those settings [2] are what would require the highest
> > Neutron permissions in the bare-metal case.
> 
> Thanks — so it will need to create and delete ports. This may be
> acceptable; without some additional API proxy layer for the edge hosts,
> a malicious edge host could create bogus ports and delete good ones,
> but that is a much smaller level of impact. I think we could create a
> role that only allowed such operations and generate per-host
> credentials.
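
Right, and to sketch what I mean: something like this in Neutron's
policy.yaml could be a starting point (the "kuryr-edge" role name is
made up, and you'd need to audit the full set of API calls Kuryr makes
before relying on it):

    # Hypothetical policy override granting only port lifecycle
    # operations to a dedicated Keystone role, keeping admin access.
    "create_port": "role:kuryr-edge or rule:admin_only"
    "update_port": "role:kuryr-edge or rule:admin_only"
    "delete_port": "role:kuryr-edge or rule:admin_only"
    "get_port": "role:kuryr-edge or rule:admin_only"

Keep in mind that Neutron's default policies are fairly permissive for
project-scoped users, so you'd also want to tighten the remaining
rules for the project those per-host credentials live in.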
> 
> > [1] 
> > https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/distributed_compute_node.html
> > [2] 
> > https://opendev.org/openstack/kuryr-kubernetes/src/branch/master/kuryr_kubernetes/controller/drivers/neutron_vif.py#L125-L127
> > 
> > > Cheers!
> > > > [1]
> > > > https://docs.openstack.org/kuryr-kubernetes/latest/nested_vlan_mode.html
> > > > 
> > > > Thanks,
> > > > Michał
> > > > 
> > > > > Thanks!
> > > > > Jason Anderson
> > > > > 
> > > > > ---
> > > > > 
> > > > > Chameleon DevOps Lead
> > > > > Department of Computer Science, University of Chicago
> > > > > Mathematics and Computer Science, Argonne National Laboratory
> 




