[openstack-dev] [magnum] [Rocky] K8s deployment on fedora-atomic fails

Feilong Wang feilong at catalyst.net.nz
Thu Nov 29 21:06:52 UTC 2018


Hi Vikrant,

Before we dig in further, it would be nice if you could let us know the
versions of Magnum and Heat you are running. Cheers.
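
For reference, one minimal way to check this on an RPM-based PackStack
host (the package-name pattern is an assumption and may differ per
distribution):

~~~
# List the installed Magnum and Heat packages with their versions
rpm -qa | grep -E 'openstack-(magnum|heat)'
~~~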


On 30/11/18 12:12 AM, Vikrant Aggarwal wrote:
> Hello Team,
>
> Trying to deploy K8s on Fedora Atomic.
>
> Here is the output of cluster template:
> ~~~
> [root@packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum
> cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57
> WARNING: The magnum client is deprecated and will be removed in a
> future release.
> Use the OpenStack client to avoid seeing this message.
> +-----------------------+--------------------------------------+
> | Property              | Value                                |
> +-----------------------+--------------------------------------+
> | insecure_registry     | -                                    |
> | labels                | {}                                   |
> | updated_at            | -                                    |
> | floating_ip_enabled   | True                                 |
> | fixed_subnet          | -                                    |
> | master_flavor_id      | -                                    |
> | user_id               | 203617849df9490084dde1897b28eb53     |
> | uuid                  | 16eb91f7-18fe-4ce3-98db-c732603f2e57 |
> | no_proxy              | -                                    |
> | https_proxy           | -                                    |
> | tls_disabled          | False                                |
> | keypair_id            | kubernetes                           |
> | project_id            | 45a6706c831c42d5bf2da928573382b1     |
> | public                | False                                |
> | http_proxy            | -                                    |
> | docker_volume_size    | 10                                   |
> | server_type           | vm                                   |
> | external_network_id   | external1                            |
> | cluster_distro        | fedora-atomic                        |
> | image_id              | f5954340-f042-4de3-819e-a3b359591770 |
> | volume_driver         | -                                    |
> | registry_enabled      | False                                |
> | docker_storage_driver | devicemapper                         |
> | apiserver_port        | -                                    |
> | name                  | coe-k8s-template                     |
> | created_at            | 2018-11-28T12:58:21+00:00            |
> | network_driver        | flannel                              |
> | fixed_network         | -                                    |
> | coe                   | kubernetes                           |
> | flavor_id             | m1.small                             |
> | master_lb_enabled     | False                                |
> | dns_nameserver        | 8.8.8.8                              |
> +-----------------------+--------------------------------------+
> ~~~
> Found a couple of issues in the logs of the VM started by Magnum.
>
> - etcd was not starting because of incorrect permissions on the file
> "/etc/etcd/certs/server.key". The file is owned by root and has mode
> 0440 by default. I changed the mode to 0444 so that etcd could read
> it; after that, etcd started successfully.
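>
> A minimal sketch of that workaround (path and mode as described above;
> a cleaner alternative may be to chown the key to the etcd user instead
> of widening its mode):
>
> ~~~
> chmod 0444 /etc/etcd/certs/server.key   # make the key readable by etcd
> systemctl restart etcd
> ~~~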
>
> - The etcd DB doesn't contain anything (see the seeding sketch after
> the flanneld logs below):
>
> ~~~
> [root@kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r
> [root@kube-cluster1-qobaagdob75g-master-0 ~]#
> ~~~
>
> - Flanneld is stuck in activating status.
> ~~~
> [root@kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld
> ● flanneld.service - Flanneld overlay address etcd agent
>    Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled;
> vendor preset: disabled)
>    Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago
>  Main PID: 6491 (flanneld)
>     Tasks: 6 (limit: 4915)
>    Memory: 4.7M
>       CPU: 53ms
>    CGroup: /system.slice/flanneld.service
>            └─6491 /usr/bin/flanneld
> -etcd-endpoints=http://127.0.0.1:2379 -etcd-prefix=/atomic.io/network
>
> Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal
> flanneld[6491]: E1129 11:05:44.569376    6491 network.go:102] failed
> to retrieve network config: 100: Key not found (/atomic.io) [3]
> [... the same error repeats roughly once per second through 11:05:53 ...]
> ~~~
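>
> For context: flanneld reads its network config from the etcd key
> <etcd-prefix>/config, i.e. /atomic.io/network/config given the unit
> file above, and Magnum's heat/cloud-init scripts normally seed that
> key at cluster creation. A hedged sketch of writing it by hand to
> unblock flanneld (the CIDR and backend are illustrative assumptions,
> not necessarily what your template would have written):
>
> ~~~
> # Seed the flannel network config the errors report as missing
> etcdctl set /atomic.io/network/config \
>   '{ "Network": "10.100.0.0/16", "Backend": { "Type": "udp" } }'
> ~~~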
>
> - The following messages are printed continuously in the journalctl logs:
>
> ~~~
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> kube-apiserver[6888]: F1129 11:06:39.338416    6888 server.go:269]
> Invalid Authorization Config: Unknown authorization mode Node specified
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> systemd[1]: kube-apiserver.service: Main process exited, code=exited,
> status=255/n/a
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> kube-scheduler[2540]: E1129 11:06:39.408272    2540 reflector.go:199]
> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463:
> Failed to list *api.Node: Get
> http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp
> 127.0.0.1:8080: getsockopt: connection refused
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> kube-scheduler[2540]: E1129 11:06:39.444737    2540 reflector.go:199]
> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460:
> Failed to list *api.Pod: Get
> http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0:
> dial tcp 127.0.0.1:8080: getsockopt: connection refused
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> kube-scheduler[2540]: E1129 11:06:39.445793    2540 reflector.go:199]
> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466:
> Failed to list *api.PersistentVolume: Get
> http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial
> tcp 127.0.0.1:8080: getsockopt: connection refused
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295
> subj=system_u:system_r:init_t:s0 msg='unit=kube-apiserver
> comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
> terminal=? res=failed'
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> systemd[1]: Failed to start Kubernetes API Server.
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> systemd[1]: kube-apiserver.service: Unit entered failed state.
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
> kube-scheduler[2540]: E1129 11:06:39.611699    2540 reflector.go:199]
> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481:
> Failed to list *extensions.ReplicaSet: Get
> http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0:
> dial tcp 127.0.0.1:8080: getsockopt: connection refused
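>
> The fatal kube-apiserver error appears to be the root cause of the
> scheduler's connection-refused messages: the "Node" authorization mode
> was only introduced in Kubernetes 1.7, so an older apiserver binary in
> the image will reject the configured flags and exit, leaving nothing
> listening on 127.0.0.1:8080. A hedged sketch for confirming the
> mismatch (the config path is the one Magnum's scripts normally write,
> but may differ in your image):
>
> ~~~
> # Compare the configured authorization modes with the binary's version
> grep -- --authorization-mode /etc/kubernetes/apiserver
> kube-apiserver --version
> ~~~
>
> If the binary predates 1.7, using a newer fedora-atomic image (or a
> matching kube_tag label) is likely the cleaner fix.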
>
> Any help on the above issues is highly appreciated.
>
> Thanks & Regards,
> Vikrant Aggarwal
>

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-------------------------------------------------------------------------- 
