<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Hello Team,</div><div><br></div><div>I am trying to deploy Kubernetes on Fedora Atomic using Magnum. <br></div><div><br></div><div>Here is the output of the cluster template: <br></div><div>~~~<br></div><div>[root@packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57<br>WARNING: The magnum client is deprecated and will be removed in a future release.<br>Use the OpenStack client to avoid seeing this message.<br>+-----------------------+--------------------------------------+<br>| Property              | Value                                |<br>+-----------------------+--------------------------------------+<br>| insecure_registry     | -                                    |<br>| labels                | {}                                   |<br>| updated_at            | -                                    |<br>| floating_ip_enabled   | True                                 |<br>| fixed_subnet          | -                                    |<br>| master_flavor_id      | -                                    |<br>| user_id               | 203617849df9490084dde1897b28eb53     |<br>| uuid                  | 16eb91f7-18fe-4ce3-98db-c732603f2e57 |<br>| no_proxy              | -                                    |<br>| https_proxy           | -                                    |<br>| tls_disabled          | False                                |<br>| keypair_id            | kubernetes                           |<br>| project_id            | 45a6706c831c42d5bf2da928573382b1     |<br>| public                | False                                |<br>| http_proxy            | -                                    |<br>| docker_volume_size    | 10                                   |<br>| server_type           | vm                                   |<br>| external_network_id   | external1                            |<br>| 
cluster_distro        | fedora-atomic                        |<br>| image_id              | f5954340-f042-4de3-819e-a3b359591770 |<br>| volume_driver         | -                                    |<br>| registry_enabled      | False                                |<br>| docker_storage_driver | devicemapper                         |<br>| apiserver_port        | -                                    |<br>| name                  | coe-k8s-template                     |<br>| created_at            | 2018-11-28T12:58:21+00:00            |<br>| network_driver        | flannel                              |<br>| fixed_network         | -                                    |<br>| coe                   | kubernetes                           |<br>| flavor_id             | m1.small                             |<br>| master_lb_enabled     | False                                |<br>| dns_nameserver        | 8.8.8.8                              |<br>+-----------------------+--------------------------------------+</div><div>~~~<br></div><div>I found a couple of issues in the logs of the VM started by Magnum: <br></div><div><br></div><div>- etcd was not starting because of incorrect permissions on "/etc/etcd/certs/server.key": by default the file is owned by root with mode 0440. I changed the mode to 0444 so that etcd can read the file; after that, etcd started successfully.</div><div><br></div><div>- The etcd DB doesn't contain anything:</div><div><br></div><div>[root@kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r<br>[root@kube-cluster1-qobaagdob75g-master-0 ~]#</div><div><br></div><div>- Flanneld is stuck in "activating" status. The "Key not found (/atomic.io)" errors below suggest that flannel's network config key (/atomic.io/network/config, given the -etcd-prefix in use) was never written, which would be consistent with the empty etcd DB above. <br></div><div>~~~<br></div><div>[root@kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld<br>● flanneld.service - Flanneld overlay address etcd agent<br>   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)<br>   Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago<br> Main PID: 6491 (flanneld)<br>    Tasks: 6 (limit: 4915)<br>   Memory: 4.7M<br>      CPU: 53ms<br>   CGroup: /system.slice/flanneld.service<br>           └─6491 /usr/bin/flanneld -etcd-endpoints=<a href="http://127.0.0.1:2379" target="_blank">http://127.0.0.1:2379</a> -etcd-prefix=/<a href="http://atomic.io/network" target="_blank">atomic.io/network</a><br><br>Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal
 flanneld[6491]: E1129 11:05:44.569376    6491 network.go:102] failed to
 retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>Nov 29 11:05:45 kube-cluster1-qobaagdob75g-master-0.novalocal
flanneld[6491]: E1129 11:05:45.584532    6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>[... the same "Key not found (/atomic.io)" message repeats roughly once per second ...]</div><div>~~~</div><div><br></div><div>- The following messages are printed continuously in the journalctl logs:</div><div><br></div><div>~~~</div><div>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal
 kube-apiserver[6888]: F1129 11:06:39.338416    6888 server.go:269] 
Invalid Authorization Config: Unknown authorization mode Node specified<br>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a<br>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.408272    2540 reflector.go:199] <a href="http://k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463" target="_blank">k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463</a>: Failed to list *api.Node: Get <a href="http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0" target="_blank">http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0</a>: dial tcp <a href="http://127.0.0.1:8080" target="_blank">127.0.0.1:8080</a>: getsockopt: connection refused<br>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.444737    2540 reflector.go:199] <a href="http://k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460" target="_blank">k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460</a>: Failed to list *api.Pod: Get <a href="http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0" target="_blank">http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0</a>: dial tcp <a href="http://127.0.0.1:8080" target="_blank">127.0.0.1:8080</a>: getsockopt: connection refused<br>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.445793    2540 reflector.go:199] <a href="http://k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466" target="_blank">k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466</a>: Failed to list *api.PersistentVolume: Get <a href="http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0" 
target="_blank">http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0</a>: dial tcp <a href="http://127.0.0.1:8080" target="_blank">127.0.0.1:8080</a>: getsockopt: connection refused<br>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=kube-apiserver comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'<br>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: Failed to start Kubernetes API Server.<br>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: kube-apiserver.service: Unit entered failed state.<br>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.<br>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.611699    2540 reflector.go:199] <a href="http://k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481" target="_blank">k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481</a>: Failed to list *extensions.ReplicaSet: Get <a href="http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0" target="_blank">http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0</a>: dial tcp <a href="http://127.0.0.1:8080" target="_blank">127.0.0.1:8080</a>: getsockopt: connection refused</div><div>~~~</div><div><br></div><div>The kube-apiserver fatal ("Unknown authorization mode Node specified") looks like the root cause here: it suggests the apiserver binary predates the Node authorizer (added in Kubernetes 1.7) while the generated config enables it, and since the apiserver never comes up, the kube-scheduler "connection refused" errors above follow from that. <br></div><div><br></div><div>Any help on the above issues is highly appreciated. <br></div><div><br></div><div><div dir="ltr" class="gmail-m_-4114721042257125892gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Thanks & Regards,</div>
<div>Vikrant Aggarwal</div></div></div></div></div></div></div></div></div></div></div></div></div>
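P.S. In case it helps anyone reproduce or verify the etcd permission issue, here is a minimal shell sketch of the change I applied. It operates on a scratch temp file rather than the real /etc/etcd/certs/server.key (on the master you would chmod that path directly and then restart etcd); the mktemp path is just a stand-in.

```shell
#!/bin/sh
# Demo of the permission change described above, on a scratch file instead of
# the real /etc/etcd/certs/server.key (which is owned by root on the master).
key=$(mktemp)

chmod 0440 "$key"                      # default mode: only owner/group (root) can read
echo "before: $(stat -c '%a' "$key")"  # prints "before: 440"

chmod 0444 "$key"                      # world-readable, so the etcd service user can read it
echo "after: $(stat -c '%a' "$key")"   # prints "after: 444"

rm -f "$key"
# On the real master, follow the chmod with: systemctl restart etcd
```

Note that 0444 makes the TLS key world-readable; tightening ownership to the etcd user with mode 0440 would likely be the cleaner long-term fix.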