Re: Re: [kuryr][kuryr-kubernetes] does kuryr-kubernetes support dynamic subnet by pod namespace or annotation?
Michał Dulko
mdulko at redhat.com
Tue Oct 29 08:11:29 UTC 2019
See answers inline.
On Tue, 2019-10-29 at 04:08 +0000, Yi Yang (杨燚) - Cloud Service Group wrote:
> Hi, Michal
>
> I tried it, but it doesn't work. This is the case even for the network
> Kuryr created via the namespace driver. Here is some information:
>
> I created the namespace with "kubectl create namespace kuryrns1"
The correct order to have "predefined" subnets would be to start with
the KuryrNet creation, but okay. A sketch of such a pre-created
KuryrNet follows.
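For reference, a minimal sketch of what such a pre-created KuryrNet
could look like - untested, the UUIDs are placeholders for the Neutron
resources you precreate, and the field names are taken from the
KuryrNet dump further down in your mail:

  apiVersion: openstack.org/v1
  kind: KuryrNet
  metadata:
    name: ns-kuryrns1
  spec:
    netId: <precreated-network-uuid>
    populated: true
    routerId: <router-uuid>
    subnetCIDR: 10.254.0.0/24
    subnetId: <precreated-subnet-uuid>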
> yangyi at cmp001:~$ kubectl get ns
> NAME              STATUS   AGE
> default           Active   48d
> kube-node-lease   Active   48d
> kube-public       Active   48d
> kube-system       Active   48d
> kuryrns1          Active   52m
>
> My kuryr conf is below:
>
> yangyi at cmp001:~$ grep "^[^#]" /etc/kuryr/kuryr.conf
> [DEFAULT]
> bindir = /home/yangyi/kuryr-k8s-controller/env/libexec/kuryr
> deployment_type = baremetal
> log_file = /var/log/kuryr.log
> [binding]
> [cache_defaults]
> [cni_daemon]
> [cni_health_server]
> [health_server]
> [ingress]
> [kubernetes]
> api_root = https://10.110.21.64:6443
> ssl_client_crt_file = /etc/kubernetes/pki/kuryr.crt
> ssl_client_key_file = /etc/kubernetes/pki/kuryr.key
> ssl_ca_crt_file = /etc/kubernetes/pki/ca.crt
> pod_subnets_driver = namespace
> enabled_handlers = vif,namespace,kuryrnet
> [kuryr-kubernetes]
> [namespace_handler_caching]
> [namespace_sg]
> [namespace_subnet]
> pod_router = 46fc6730-a7f9-45f7-b98b-f682c436e85c
> pod_subnet_pool = 581daf0e-e661-4fb8-b8d6-b7b11d0b43ab
> [neutron]
> auth_url = http://10.110.28.20:35357/v3
> auth_type = password
> password = HAOQNs07Ci9c0DvB
> project_domain_id = default
> project_name = admin
> region_name = SDNRegion
> tenant_name = admin
> user_domain_id = default
> username = admin
> [neutron_defaults]
> project = 852d281e70b34b5398c1c5534124952e
> pod_subnet = b1fa2198-2ecd-41ce-bd06-93ddb2742586
> pod_security_groups = d89787f5-b892-487f-b682-88742007f49f
> ovs_bridge = br-int
> service_subnet = 58b322fd-19e4-47db-b2fe-5cffd528af05
> network_device_mtu = 1450
> [node_driver_caching]
> [np_handler_caching]
> [octavia_defaults]
> [pod_ip_caching]
> [pod_vif_nested]
> [pool_manager]
> [sriov]
> [subnet_caching]
> [vif_handler_caching]
> [vif_pool]
> yangyi at cmp001:~$
>
> The KuryrNet has been created automatically by the namespace creation:
>
> yangyi at cmp001:~$ kubectl get KuryrNet/ns-kuryrns1 -o yaml
> apiVersion: openstack.org/v1
> kind: KuryrNet
> metadata:
>   annotations:
>     namespaceName: kuryrns1
>   creationTimestamp: "2019-10-29T02:58:01Z"
>   generation: 2
>   name: ns-kuryrns1
>   resourceVersion: "5926221"
>   selfLink: /apis/openstack.org/v1/kuryrnets/ns-kuryrns1
>   uid: df5850a5-dc57-4243-b01e-be1c24d788fc
> spec:
>   netId: 2dcc6969-7923-460e-8ede-17985cdf2b80
>   populated: true
>   routerId: 46fc6730-a7f9-45f7-b98b-f682c436e85c
>   subnetCIDR: 10.254.0.0/24
>   subnetId: a46861d3-eccf-4573-8c22-5412cc9d64f0
> yangyi at cmp001:~$
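You can also cross-check in Neutron that the subnet the KuryrNet
references actually exists, e.g. with:

  openstack subnet show a46861d3-eccf-4573-8c22-5412cc9d64f0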
>
> But when I created a deployment under the kuryrns1 namespace, it never
> succeeded. I found the CNI daemon is broken.
>
> This is the state before "kubectl apply -f deploy.yaml":
>
> yangyi at cmp004:~$ sudo ps aux | grep kuryr
> root 15339 0.0 0.0 51420 3852 pts/9 S 03:35 0:00 sudo
> -E kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d
> root 15340 1.2 0.0 271028 101408 ? Ssl 03:35 0:01
> kuryr-daemon: master process [/home/yangyi/kuryr-k8s-
> cni/env/bin/kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d]
> root 15352 0.0 0.0 268948 92016 ? S 03:35 0:00
> kuryr-daemon: master process [/home/yangyi/kuryr-k8s-
> cni/env/bin/kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d]
> root 15357 0.0 0.0 426944 94624 ? Sl 03:35 0:00
> kuryr-daemon: watcher worker(0)
> root 15362 0.0 0.0 353220 93084 ? Sl 03:35 0:00
> kuryr-daemon: server worker(0)
> root 15366 0.0 0.0 353212 92260 ? Sl 03:35 0:00
> kuryr-daemon: health worker(0)
>
> This is the state after "kubectl apply -f deploy.yaml":
>
> yangyi at cmp004:~$ sudo ps aux | grep kuryr
> root 15339 0.0 0.0 51420 3852 pts/9 S 03:35 0:00 sudo
> -E kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d
> root 15340 0.2 0.0 271028 101408 ? Ssl 03:35 0:01
> kuryr-daemon: master process [/home/yangyi/kuryr-k8s-
> cni/env/bin/kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d]
> root 15352 0.0 0.0 342680 92016 ? Sl 03:35 0:00
> kuryr-daemon: master process [/home/yangyi/kuryr-k8s-
> cni/env/bin/kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d]
> root 15357 0.0 0.0 427200 95028 ? Sl 03:35 0:00
> kuryr-daemon: watcher worker(0)
> root 15362 0.0 0.0 353220 93108 ? Sl 03:35 0:00
> kuryr-daemon: server worker(0)
> root 15366 0.0 0.0 353212 92260 ? Sl 03:35 0:00
> kuryr-daemon: health worker(0)
> root 16426 0.1 0.0 0 0 ? Z 03:39 0:00
> [kuryr-daemon: s] <defunct>
> root 16729 0.0 0.0 428232 94988 ? S 03:40 0:00
> kuryr-daemon: server worker(0)
> root 16813 0.0 0.0 429768 97480 ? S 03:40 0:00
> kuryr-daemon: server worker(0)
> yangyi 17700 0.0 0.0 12944 1012 pts/0 R+ 03:42 0:00 grep
> --color=auto kuryr
Defunct processes are just zombie processes. I see them as well in our
setups; it's probably a bug in pyroute2. It does not seem to affect
Kuryr though. A quick way to confirm that is sketched below.
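If you want to double-check that they're harmless zombies, a plain ps
invocation (nothing Kuryr-specific) works:

  ps axo pid,ppid,stat,cmd | grep '[k]uryr'

Entries with STAT "Z" are zombies; their PPID should point at the
kuryr-daemon master process that owns them.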
> I can see the ports are indeed created.
>
> yangyi at cmp001:~$ openstack port list --network ns/kuryrns1-net
> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
> | ID                                   | Name | MAC Address       | Fixed IP Addresses                                                         | Status |
> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
> | 2dd5f11f-5fc6-45ee-8f9b-8037019572cd |      | fa:16:3e:af:4e:f1 | ip_address='10.254.0.3', subnet_id='a46861d3-eccf-4573-8c22-5412cc9d64f0'  | ACTIVE |
> | c7ff9e5c-1110-4dfa-983d-2f04bf7d2794 |      | fa:16:3e:10:64:c7 | ip_address='10.254.0.1', subnet_id='a46861d3-eccf-4573-8c22-5412cc9d64f0'  | ACTIVE |
> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
> yangyi at cmp001:~$
>
> The kuryr log indicates the CNI daemon went defunct and was restarted.
>
> yangyi at cmp004:~$ grep is_alive /var/log/kuryr.log
> 2019-10-29 03:40:00.492 16426 DEBUG kuryr_kubernetes.cni.binding.bridge [-] Reporting Driver not healthy. is_alive /home/yangyi/kuryr-k8s-cni/kuryr-kubernetes/kuryr_kubernetes/cni/binding/bridge.py:119
> yangyi at cmp004:~$
>
> Can you give me some advice or hints about how I can troubleshoot
> such an issue?
You'd need a longer log; above that message you should see some entries
explaining the culprit. One way to pull that context is sketched below.
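A simple way to get the leading context (assuming the same log file and
message), for example:

  grep -B 50 'Reporting Driver not healthy' /var/log/kuryr.log

grep's -B 50 prints the 50 lines preceding each match, which should
include the errors that made the health check fail.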
> -----Original Message-----
> From: Michał Dulko [mailto:mdulko at redhat.com]
> Sent: October 22, 2019 23:29
> To: Yi Yang (杨燚) - Cloud Service Group <yangyi01 at inspur.com>; ltomasbo at redhat.com
> Cc: openstack-discuss at lists.openstack.org
> Subject: Re: Re: [kuryr][kuryr-kubernetes] does kuryr-kubernetes support
> dynamic subnet by pod namespace or annotation?
>
> Oh, I actually should have thought about it. So if you precreate
> the network, the subnet and a KuryrNet Custom Resource [1], it should
> actually work. The definition of KuryrNet can be found here [2];
> the fields are pretty self-explanatory. Please note that you also need
> to link the KuryrNet to the namespace by adding an annotation to the
> namespace:
>
> "openstack.org/kuryr-net-crd": "ns-<namespace-name>"
>
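For reference, a sketch of adding that annotation with kubectl
(untested; <namespace-name> is a placeholder):

  kubectl annotate namespace <namespace-name> \
      openstack.org/kuryr-net-crd=ns-<namespace-name>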
> Also, just for safety, make sure the KuryrNet itself is named "ns-
> <namespace-name>" - some code might be looking it up by name.
>
> Please note that this was never tested, so maybe there's something I
> don't see that might prevent it from working.
>
> [1]
> https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
> [2]
> https://github.com/openstack/kuryr-kubernetes/blob/a85a7bc8b1761eb748ccf16430fe77587bc764c2/kubernetes_crds/kuryrnet.yaml