[magnum] [fcos33]

Yasemin DEMİRAL (BILGEM BTE) yasemin.demiral at tubitak.gov.tr
Sun Oct 24 12:01:36 UTC 2021


Hi, 

How can I download fcos 33? I can't find any link for downloading it. 
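
One way, sketched here as an assumption rather than an answer from this thread: older Fedora CoreOS releases are published in the builds archive at builds.coreos.fedoraproject.org, and Magnum finds the image via its os_distro property. The build id below is only an example; browse the builds index for the exact fcos 33 build available.

```shell
# Example build id only -- check builds.coreos.fedoraproject.org
# for an actual fcos 33 OpenStack build.
FCOS_BUILD=33.20210426.3.0
curl -LO "https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/${FCOS_BUILD}/x86_64/fedora-coreos-${FCOS_BUILD}-openstack.x86_64.qcow2.xz"
xz -d "fedora-coreos-${FCOS_BUILD}-openstack.x86_64.qcow2.xz"

# Magnum matches the image through the os_distro property.
openstack image create \
  --disk-format=qcow2 --container-format=bare \
  --property os_distro='fedora-coreos' \
  --file "fedora-coreos-${FCOS_BUILD}-openstack.x86_64.qcow2" \
  fedora-coreos-33
```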



Yasemin DEMİRAL [ http://www.tubitak.gov.tr/tr/icerik-sorumluluk-reddi ] 

Senior Researcher at TUBITAK BILGEM B3LAB 

Safir Cloud Scrum Master 


From: "Vikarna Tathe" <vikarnatathe at gmail.com> 
To: "Ammad Syed" <syedammad83 at gmail.com> 
Cc: "openstack-discuss" <openstack-discuss at lists.openstack.org> 
Sent: Tuesday, 19 October 2021 16:23:20 
Subject: Re: Openstack magnum 

Hi Ammad, 
Thanks!!! It worked. 

On Tue, 19 Oct 2021 at 15:00, Vikarna Tathe <vikarnatathe at gmail.com> wrote: 



Hi Ammad, 
Yes, fcos34. Let me try with fcos33. Thanks 

On Tue, 19 Oct 2021 at 14:52, Ammad Syed <syedammad83 at gmail.com> wrote: 


Hi, 

Which fcos image are you using? It looks like you are using fcos 34, which is currently not supported. Use fcos 33. 

On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe <vikarnatathe at gmail.com> wrote: 


Hi All, 
I was able to log in to the instance. I see that the kubelet service is stuck in activating state. When I checked journalctl, I found the below. 

Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Started Kubelet via Hyperkube (System Container). 
Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs /sys/fs/cgroup/systemd: no such file or directory 
Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=125/n/a 
Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. 
Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via Hyperkube (System Container). 

Executed the below command to fix this issue. 
mkdir -p /sys/fs/cgroup/systemd 


Now I am getting the below error. Has anybody seen this issue? 

failed to get the kubelet's cgroup: mountpoint for cpu not found. Kubelet system container metrics may be missing. 
failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. 
failed to run Kubelet: mountpoint for not found 
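
For what it's worth, these errors are consistent with fcos 34's switch to a unified (cgroup v2 only) hierarchy, which the kubelet and container runtime shipped by this Magnum release still expect as cgroup v1; creating the directory by hand does not bring the v1 controllers back. A possible workaround, an assumption on my part rather than something verified in this thread, is to boot the node back into the legacy hierarchy:

```shell
# Revert the Fedora CoreOS host to the legacy/hybrid cgroup v1
# hierarchy; on an ostree-based system kernel arguments are
# managed with rpm-ostree, and the change takes effect on reboot.
sudo rpm-ostree kargs --append="systemd.unified_cgroup_hierarchy=0"
sudo systemctl reboot
```

As noted elsewhere in the thread, the simpler route is to build the cluster from an fcos 33 image in the first place.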

On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe <vikarnatathe at gmail.com> wrote: 

Hi Ammad, 
Thanks for responding. 

Yes, the instance is getting created, but I am unable to log in even though I have generated the keypair. There is no default password for this image to log in via the console. 

openstack server list 
+--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ 
| ID | Name | Status | Networks | Image | Flavor | 
+--------------------------------------+--------------------------------------+---------+------------------------------------+----------------------+----------+ 
| cf955a75-8cd2-4f91-a01f-677159b57cb2 | k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, 10.14.20.181 | fedora-coreos-latest | m1.large | 


ssh -i id_rsa core at 10.14.20.181 
The authenticity of host '10.14.20.181 (10.14.20.181)' can't be established. 
ECDSA key fingerprint is SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU. 
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes 
Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known hosts. 
core at 10.14.20.181: Permission denied (publickey,gssapi-keyex,gssapi-with-mic). 
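
Two checks that might narrow this down (standard nova CLI, not from the original exchange): confirm that nova actually associated a keypair with the master, and look at the boot console to see whether Ignition/Afterburn got far enough to install the SSH key.

```shell
# Was a keypair attached to the master instance at all?
openstack server show k8s-cluster-01-2nyejxo3hyvb-master-0 -c key_name

# The boot console usually shows whether Ignition/Afterburn
# fetched and installed the SSH key from the metadata service.
openstack console log show k8s-cluster-01-2nyejxo3hyvb-master-0 | tail -n 50
```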

On Mon, 18 Oct 2021 at 14:02, Ammad Syed <syedammad83 at gmail.com> wrote: 


Hi, 
Can you check if the master server is deployed as a nova instance? If yes, then log in to the instance and check the cloud-init and heat agent logs to see the errors. 
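
On a Fedora CoreOS based Magnum master that looks roughly like the following; a sketch only, since unit names can differ between Magnum versions:

```shell
# Fedora CoreOS provisions at boot with Ignition/Afterburn rather
# than classic cloud-init, so start with the boot journal:
sudo journalctl -b --no-pager | grep -iE 'ignition|afterburn' | tail -n 50

# The heat agent that applies Magnum's software deployments:
sudo journalctl -u heat-container-agent --no-pager | tail -n 100
```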

Ammad 

On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe <vikarnatathe at gmail.com> wrote: 


Hello All, 
I am trying to create a kubernetes cluster using magnum. Image: fedora-coreos. 


The stack gets stuck in CREATE_IN_PROGRESS. See the output below. 
openstack coe cluster list 
+--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ 
| uuid | name | keypair | node_count | master_count | status | health_status | 
+--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ 
| 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | 2 | 1 | CREATE_IN_PROGRESS | None | 
+--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+ 

openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters 
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
| Field | Value | 
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
| attributes | {'refs_map': None, 'removed_rsrc_list': [], 'attributes': None, 'refs': None} | 
| creation_time | 2021-10-18T06:44:02Z | 
| description | | 
| links | [{'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', 'rel': 'self'}, {'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', 'rel': 'stack'}, {'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', 'rel': 'nested'}] | 
| logical_resource_id | kube_masters | 
| physical_resource_id | 3da2083f-0b2c-4b9d-8df5-8468e0de3028 | 
| required_by | ['kube_cluster_deploy', 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] | 
| resource_name | kube_masters | 
| resource_status | CREATE_IN_PROGRESS | 
| resource_status_reason | state changed | 
| resource_type | OS::Heat::ResourceGroup | 
| updated_time | 2021-10-18T06:44:02Z | 
+------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
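
Since kube_masters is an OS::Heat::ResourceGroup, the useful detail usually sits one level down, in the nested stack whose id appears as physical_resource_id above. Standard heat commands for drilling in:

```shell
# List the resources of the nested kube_masters stack
# (id taken from the physical_resource_id field above).
openstack stack resource list 3da2083f-0b2c-4b9d-8df5-8468e0de3028

# Recursively show every failed resource with its reason.
openstack stack failures list --long k8s-cluster-01-2nyejxo3hyvb
```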

Vikarna 





-- 
Regards, 

Syed Ammad Ali 


-- 
Regards, 

Syed Ammad Ali 


