<div dir="ltr"><div>Hello Team,</div><div><br></div><div>Any help on this issue?<br></div><div><br></div><div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Thanks & Regards,</div>
<div>Vikrant Aggarwal</div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Fri, Nov 30, 2018 at 9:13 AM Vikrant Aggarwal <<a href="mailto:ervikrant06@gmail.com">ervikrant06@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Hi Feilong,</div><div><br></div><div>Thanks for your reply.</div><div><br></div><div>Kindly find the below outputs. <br></div><div><br></div><div>[root@packstack1 ~]# rpm -qa | grep -i magnum<br>python-magnum-7.0.1-1.el7.noarch<br>openstack-magnum-conductor-7.0.1-1.el7.noarch<br>openstack-magnum-ui-5.0.1-1.el7.noarch<br>openstack-magnum-api-7.0.1-1.el7.noarch<br>puppet-magnum-13.3.1-1.el7.noarch<br>python2-magnumclient-2.10.0-1.el7.noarch<br>openstack-magnum-common-7.0.1-1.el7.noarch</div><div><br></div><div>[root@packstack1 ~]# rpm -qa | grep -i heat<br>openstack-heat-ui-1.4.0-1.el7.noarch<br>openstack-heat-api-cfn-11.0.0-1.el7.noarch<br>openstack-heat-engine-11.0.0-1.el7.noarch<br>puppet-heat-13.3.1-1.el7.noarch<br>python2-heatclient-1.16.1-1.el7.noarch<br>openstack-heat-api-11.0.0-1.el7.noarch<br>openstack-heat-common-11.0.0-1.el7.noarch<br></div><div><br></div><div><div><div dir="ltr" class="m_8788807413528864920gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Thanks & Regards,</div>
<div>Vikrant Aggarwal</div><br></div></div></div></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Fri, Nov 30, 2018 at 2:44 AM Feilong Wang <<a href="mailto:feilong@catalyst.net.nz" target="_blank">feilong@catalyst.net.nz</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<p>Hi Vikrant,</p>
<p>Before we dig in further, it would be helpful if you could let us know the
versions of your Magnum and Heat. Cheers.<br>
</p>
<p><br>
</p>
<div class="m_8788807413528864920m_-2174799946800401098moz-cite-prefix">On 30/11/18 12:12 AM, Vikrant Aggarwal
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>Hello Team,</div>
<div><br>
</div>
<div>Trying to deploy Kubernetes (K8s) on Fedora Atomic. <br>
</div>
<div><br>
</div>
<div>Here is the output of the cluster template: <br>
</div>
<div>~~~<br>
</div>
<div>[root@packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57<br>
WARNING: The magnum client is deprecated and will be removed in a future release.<br>
Use the OpenStack client to avoid seeing this message.<br>
+-----------------------+--------------------------------------+<br>
| Property              | Value                                |<br>
+-----------------------+--------------------------------------+<br>
| insecure_registry     | -                                    |<br>
| labels                | {}                                   |<br>
| updated_at            | -                                    |<br>
| floating_ip_enabled   | True                                 |<br>
| fixed_subnet          | -                                    |<br>
| master_flavor_id      | -                                    |<br>
| user_id               | 203617849df9490084dde1897b28eb53     |<br>
| uuid                  | 16eb91f7-18fe-4ce3-98db-c732603f2e57 |<br>
| no_proxy              | -                                    |<br>
| https_proxy           | -                                    |<br>
| tls_disabled          | False                                |<br>
| keypair_id            | kubernetes                           |<br>
| project_id            | 45a6706c831c42d5bf2da928573382b1     |<br>
| public                | False                                |<br>
| http_proxy            | -                                    |<br>
| docker_volume_size    | 10                                   |<br>
| server_type           | vm                                   |<br>
| external_network_id   | external1                            |<br>
| cluster_distro        | fedora-atomic                        |<br>
| image_id              | f5954340-f042-4de3-819e-a3b359591770 |<br>
| volume_driver         | -                                    |<br>
| registry_enabled      | False                                |<br>
| docker_storage_driver | devicemapper                         |<br>
| apiserver_port        | -                                    |<br>
| name                  | coe-k8s-template                     |<br>
| created_at            | 2018-11-28T12:58:21+00:00            |<br>
| network_driver        | flannel                              |<br>
| fixed_network         | -                                    |<br>
| coe                   | kubernetes                           |<br>
| flavor_id             | m1.small                             |<br>
| master_lb_enabled     | False                                |<br>
| dns_nameserver        | 8.8.8.8                              |<br>
+-----------------------+--------------------------------------+</div>
<div>~~~<br>
</div>
<div>Found a couple of issues in the logs of the VM started by Magnum. <br>
</div>
<div><br>
</div>
<div>- etcd was not starting because of incorrect permissions on the file "/etc/etcd/certs/server.key". By default this file is owned by root with mode 0440. I changed the permissions to 0444 so that etcd can read the file; after that, etcd started successfully.</div>
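The permission change described above can be sketched with standard commands. The real target is /etc/etcd/certs/server.key; the commands below operate on a scratch file instead, so they are safe to try anywhere:

```shell
# Sketch of the permission fix. The real file is /etc/etcd/certs/server.key;
# a throwaway temp file stands in for it here.
key=$(mktemp)
chmod 0440 "$key"     # default state: readable only by owner/group (root)
stat -c '%a' "$key"   # prints 440
chmod 0444 "$key"     # world-readable, so the etcd user can read it
stat -c '%a' "$key"   # prints 444
rm -f "$key"
```

A tighter alternative worth considering (not tested here) is chowning the key to the etcd user and using mode 0400, so the private key does not become world-readable.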
<div><br>
</div>
<div>- The etcd DB doesn't contain anything:</div>
<div><br>
</div>
<div>[root@kube-cluster1-qobaagdob75g-master-0 ~]#
etcdctl ls / -r<br>
[root@kube-cluster1-qobaagdob75g-master-0 ~]#</div>
<div><br>
</div>
<div>- flanneld is stuck in the activating state. <br>
</div>
<div>~~~<br>
</div>
<div>[root@kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld<br>
● flanneld.service - Flanneld overlay address etcd agent<br>
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)<br>
Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago<br>
Main PID: 6491 (flanneld)<br>
Tasks: 6 (limit: 4915)<br>
Memory: 4.7M<br>
CPU: 53ms<br>
CGroup: /system.slice/flanneld.service<br>
└─6491 /usr/bin/flanneld -etcd-endpoints=<a href="http://127.0.0.1:2379" target="_blank">http://127.0.0.1:2379</a> -etcd-prefix=/<a href="http://atomic.io/network" target="_blank">atomic.io/network</a><br>
<br>
Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:44.569376 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>
Nov 29 11:05:45 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:45.584532 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>
Nov 29 11:05:46 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:46.646255 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>
Nov 29 11:05:47 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:47.673062 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>
Nov 29 11:05:48 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:48.686919 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>
Nov 29 11:05:49 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:49.709136 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>
Nov 29 11:05:50 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:50.729548 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>
Nov 29 11:05:51 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:51.749425 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>
Nov 29 11:05:52 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:52.776612 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]<br>
Nov 29 11:05:53 kube-cluster1-qobaagdob75g-master-0.novalocal flanneld[6491]: E1129 11:05:53.790418 6491 network.go:102] failed to retrieve network config: 100: Key not found (/<a href="http://atomic.io" target="_blank">atomic.io</a>) [3]</div>
<div>~~~</div>
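The flanneld errors above say the key under /atomic.io/network (the -etcd-prefix from the unit file) is missing, which matches the empty `etcdctl ls / -r` output: flanneld polls etcd for its network config and finds nothing. A hedged sketch of seeding that key by hand, assuming the etcd v2 API on 127.0.0.1:2379; the subnet and backend values below are illustrative assumptions, not taken from your cluster, so substitute whatever your template/labels intend:

```shell
# Assumption: etcd v2 API, prefix /atomic.io/network as shown in the unit.
# The Network/Backend values are placeholders for illustration only.
etcdctl --endpoints=http://127.0.0.1:2379 \
  set /atomic.io/network/config '{ "Network": "10.100.0.0/16", "Backend": { "Type": "vxlan" } }'
etcdctl --endpoints=http://127.0.0.1:2379 get /atomic.io/network/config
```

If flanneld then leaves the activating state, the real question becomes why the heat/magnum bootstrap scripts never wrote this key in the first place (possibly a knock-on effect of etcd failing to start earlier).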
<div><br>
</div>
<div>- The following messages are printed continuously in the journalctl logs:</div>
<div><br>
</div>
<div>~~~</div>
<div>Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-apiserver[6888]: F1129 11:06:39.338416 6888 server.go:269] Invalid Authorization Config: Unknown authorization mode Node specified<br>
Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a<br>
Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.408272 2540 reflector.go:199] <a href="http://k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463" target="_blank">k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463</a>: Failed to list *api.Node: Get <a href="http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0" target="_blank">http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0</a>: dial tcp <a href="http://127.0.0.1:8080" target="_blank">127.0.0.1:8080</a>: getsockopt: connection refused<br>
Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.444737 2540 reflector.go:199] <a href="http://k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460" target="_blank">k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460</a>: Failed to list *api.Pod: Get <a href="http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0" target="_blank">http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0</a>: dial tcp <a href="http://127.0.0.1:8080" target="_blank">127.0.0.1:8080</a>: getsockopt: connection refused<br>
Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.445793 2540 reflector.go:199] <a href="http://k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466" target="_blank">k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466</a>: Failed to list *api.PersistentVolume: Get <a href="http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0" target="_blank">http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0</a>: dial tcp <a href="http://127.0.0.1:8080" target="_blank">127.0.0.1:8080</a>: getsockopt: connection refused<br>
Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=kube-apiserver comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'<br>
Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: Failed to start Kubernetes API Server.<br>
Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: kube-apiserver.service: Unit entered failed state.<br>
Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.<br>
Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal kube-scheduler[2540]: E1129 11:06:39.611699 2540 reflector.go:199] <a href="http://k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481" target="_blank">k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481</a>: Failed to list *extensions.ReplicaSet: Get <a href="http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0" target="_blank">http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0</a>: dial tcp <a href="http://127.0.0.1:8080" target="_blank">127.0.0.1:8080</a>: getsockopt: connection refused</div>
<div>~~~</div>
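The fatal kube-apiserver line above ("Unknown authorization mode Node specified") suggests the apiserver binary in the image does not recognize the Node authorizer, i.e. it is likely older than the flags the Magnum template passes; the scheduler's connection-refused errors are just fallout from the apiserver dying. One hedged workaround, assuming the flags live in a KUBE_API_ARGS line in /etc/kubernetes/apiserver (the usual Fedora packaging; verify on your node), is to drop Node from --authorization-mode. The sketch below edits a scratch copy so it can run anywhere:

```shell
# Hypothetical sketch: strip the unsupported "Node" mode from the apiserver
# flags. On a real node the file would be /etc/kubernetes/apiserver; a
# scratch file with an assumed KUBE_API_ARGS line stands in for it here.
conf=$(mktemp)
echo 'KUBE_API_ARGS="--authorization-mode=Node,RBAC --allow-privileged=true"' > "$conf"
sed -i 's/--authorization-mode=Node,RBAC/--authorization-mode=RBAC/' "$conf"
grep -o -- '--authorization-mode=[^ ]*' "$conf"   # prints --authorization-mode=RBAC
rm -f "$conf"
```

After editing the real file you would restart the service (systemctl restart kube-apiserver). Note this only masks a version mismatch; matching the image to the Kubernetes release the template expects is the cleaner fix.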
<div><br>
</div>
<div>Any help on the above issues is highly appreciated.
<br>
</div>
<div><br>
</div>
<div>
<div dir="ltr" class="m_8788807413528864920m_-2174799946800401098gmail-m_-4114721042257125892gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>Thanks & Regards,</div>
<div>Vikrant Aggarwal</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
<fieldset class="m_8788807413528864920m_-2174799946800401098mimeAttachmentHeader"></fieldset>
<pre class="m_8788807413528864920m_-2174799946800401098moz-quote-pre">__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: <a class="m_8788807413528864920m_-2174799946800401098moz-txt-link-abbreviated" href="mailto:OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a>
<a class="m_8788807413528864920m_-2174799946800401098moz-txt-link-freetext" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a></pre>
</blockquote>
<pre class="m_8788807413528864920m_-2174799946800401098moz-signature" cols="72">--
Cheers & Best regards,
Feilong Wang (王飞龙)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: <a class="m_8788807413528864920m_-2174799946800401098moz-txt-link-abbreviated" href="mailto:flwang@catalyst.net.nz" target="_blank">flwang@catalyst.net.nz</a>
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-------------------------------------------------------------------------- </pre>
</div>
</blockquote></div>
</blockquote></div>