Re: Openstack magnum
Hi Ammad,
Thanks for responding.
Yes, the instance is getting created, but I am unable to log in even though I have generated the keypair. There is no default password for this image to log in via the console.
openstack server list
+--------------------------------------+--------------------------------------+--------+------------------------------------+----------------------+----------+
| ID                                   | Name                                 | Status | Networks                           | Image                | Flavor   |
+--------------------------------------+--------------------------------------+--------+------------------------------------+----------------------+----------+
| cf955a75-8cd2-4f91-a01f-677159b57cb2 | k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, 10.14.20.181 | fedora-coreos-latest | m1.large |
+--------------------------------------+--------------------------------------+--------+------------------------------------+----------------------+----------+
ssh -i id_rsa core@10.14.20.181
The authenticity of host '10.14.20.181 (10.14.20.181)' can't be established.
ECDSA key fingerprint is SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known hosts.
core@10.14.20.181: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
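A quick way to narrow this down is to confirm which keypair Nova actually attached to the master and whether it matches the key being offered. The server and keypair names below are taken from the outputs in this thread; the local key path is only an example and may differ on your machine.

openstack server show k8s-cluster-01-2nyejxo3hyvb-master-0 -c key_name
openstack keypair show ctrlkey
ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub

If the fingerprint reported by 'openstack keypair show' does not match the one printed by ssh-keygen, the instance was built with a different key than the one used for ssh.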
On Mon, 18 Oct 2021 at 14:02, Ammad Syed <syedammad83@gmail.com> wrote:
Hi,
Can you check if the master server is deployed as a Nova instance? If yes, then log in to the instance and check the cloud-init and heat agent logs to see the errors.
Ammad
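For anyone who needs the concrete commands: on Magnum's Fedora CoreOS based nodes the heat agent normally runs as the heat-container-agent systemd unit, and a cloud-init unit is only present if the image actually ships cloud-init, so the unit names may differ in your deployment. A minimal sketch of what to look at once on the node:

sudo journalctl -b -u heat-container-agent --no-pager
sudo journalctl -b -u cloud-init --no-pager
sudo systemctl list-units --failed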
On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe <vikarnatathe@gmail.com> wrote:
Hello All,
I am trying to create a Kubernetes cluster using Magnum. Image: fedora-coreos.
The stack gets stuck in CREATE_IN_PROGRESS. See the output below.
openstack coe cluster list
+--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
| uuid                                 | name           | keypair | node_count | master_count | status             | health_status |
+--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
| 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | 2          | 1            | CREATE_IN_PROGRESS | None          |
+--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters
+------------------------+--------------------------------------------------------------------------------+
| Field                  | Value                                                                          |
+------------------------+--------------------------------------------------------------------------------+
| attributes             | {'refs_map': None, 'removed_rsrc_list': [], 'attributes': None, 'refs': None}  |
| creation_time          | 2021-10-18T06:44:02Z                                                           |
| description            |                                                                                |
| links                  | [{'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-clus...', 'rel': 'self'},  |
|                        |  {'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-clus...', 'rel': 'stack'}, |
|                        |  {'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-clus...', 'rel': 'nested'}]|
| logical_resource_id    | kube_masters                                                                   |
| physical_resource_id   | 3da2083f-0b2c-4b9d-8df5-8468e0de3028                                           |
| required_by            | ['kube_cluster_deploy', 'etcd_address_lb_switch', 'api_address_lb_switch',     |
|                        |  'kube_cluster_config']                                                        |
| resource_name          | kube_masters                                                                   |
| resource_status        | CREATE_IN_PROGRESS                                                             |
| resource_status_reason | state changed                                                                  |
| resource_type          | OS::Heat::ResourceGroup                                                        |
| updated_time           | 2021-10-18T06:44:02Z                                                           |
+------------------------+--------------------------------------------------------------------------------+
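Since kube_masters is an OS::Heat::ResourceGroup, its physical_resource_id above is a nested stack, so you can drill into it to find the resource that is actually blocking. The IDs below are copied from the output above; the --filter option requires a reasonably recent python-heatclient:

openstack stack resource list 3da2083f-0b2c-4b9d-8df5-8468e0de3028
openstack stack resource list k8s-cluster-01-2nyejxo3hyvb -n 5 --filter status=CREATE_FAILED
openstack stack event list k8s-cluster-01-2nyejxo3hyvb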
Vikarna
--
Regards,
Syed Ammad Ali
Hi All,
I was able to log in to the instance. I see that the kubelet service is in activating state. When I checked journalctl, I found the below.

Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Started Kubelet via Hyperkube (System Container).
Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs /sys/fs/cgroup/systemd: no such file or directory
Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=125/n/a
Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.
Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via Hyperkube (System Container).

Executed the below command to fix this issue.
mkdir -p /sys/fs/cgroup/systemd

Now I am getting the below error. Has anybody seen this issue?
failed to get the kubelet's cgroup: mountpoint for cpu not found. Kubelet system container metrics may be missing.
failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found.
failed to run Kubelet: mountpoint for not found
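In case it is useful to others hitting the same messages: a missing /sys/fs/cgroup/systemd is what you would expect on a host that boots with a unified, v2-only cgroup hierarchy, which newer Fedora CoreOS releases use by default, while kubelet and docker in this setup still expect the v1 controllers. Two standard commands to check which hierarchy the node is actually running (nothing Magnum-specific, just coreutils/util-linux):

stat -fc %T /sys/fs/cgroup/
mount | grep cgroup

stat prints cgroup2fs on a v2-only host and tmpfs on a hybrid/v1 host.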
Hi,
Which fcos image are you using? It looks like you are using fcos 34, which is currently not supported. Use fcos 33.
--
Regards,
Syed Ammad Ali
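For anyone searching for the concrete steps, a minimal sketch of switching the cluster to an fcos 33 image follows. The qcow2 file name, template name, and external network are examples only; the os_distro property is what Magnum's Fedora CoreOS driver keys on, and the remaining template options should mirror your existing template:

openstack image create fedora-coreos-33 \
  --disk-format qcow2 --container-format bare \
  --property os_distro=fedora-coreos \
  --file fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2

openstack coe cluster template create k8s-template-fcos33 \
  --coe kubernetes --image fedora-coreos-33 \
  --flavor m1.large --master-flavor m1.large \
  --external-network <external-net> --network-driver flannel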
Hi Ammad,
Yes, fcos34. Let me try with fcos33. Thanks.
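For completeness, recreating the cluster against the new template would look roughly like the following; k8s-template-fcos33 is the hypothetical template name from the sketch above, and the counts and keypair match the original cluster:

openstack coe cluster delete k8s-cluster-01
openstack coe cluster create k8s-cluster-01 \
  --cluster-template k8s-template-fcos33 \
  --master-count 1 --node-count 2 --keypair ctrlkey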
Hi Ammad,
Thanks!!! It worked.
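For anyone landing here later, once the cluster reaches CREATE_COMPLETE a quick sanity check looks like this; openstack coe cluster config writes a kubeconfig named config into the current directory, and the directory used here is only an example:

openstack coe cluster show k8s-cluster-01 -c status
mkdir -p ~/k8s-cluster-01 && cd ~/k8s-cluster-01
openstack coe cluster config k8s-cluster-01
export KUBECONFIG=$(pwd)/config
kubectl get nodes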
participants (2)
- Ammad Syed
- Vikarna Tathe