[magnum] [fcos33]
wodel youchi
wodel.youchi at gmail.com
Mon Oct 25 09:31:51 UTC 2021
Hi,
When you create your cluster you can attach an SSH key: create your own
SSH key, push it to OpenStack, and use it with your cluster.
You can then SSH to core@<master-kube-vm-ip> with your SSH key.
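For example, a minimal sketch (the key name "mykey", the template name, and
the cluster name are placeholders, not from your environment):

    openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
    openstack coe cluster create --cluster-template <your-template> --keypair mykey mycluster
    ssh -i ~/.ssh/id_rsa core@<master-kube-vm-ip>
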
Regards.
On Mon, Oct 25, 2021 at 09:55, Yasemin DEMİRAL (BILGEM BTE) <
yasemin.demiral at tubitak.gov.tr> wrote:
> Hi,
>
> Thank you, I downloaded that image and I can build a Kubernetes cluster
> with it, but I can't connect to the master node with SSH. How can I connect
> to the Kubernetes cluster?
>
> Regards
>
> Yasemin DEMİRAL
>
> Senior Researcher at TUBITAK BILGEM B3LAB
>
> Safir Cloud Scrum Master
>
> ------------------------------
> From: "wodel youchi" <wodel.youchi at gmail.com>
> To: "Yasemin DEMİRAL, BİLGEM BTE" <yasemin.demiral at tubitak.gov.tr>
> Cc: "openstack-discuss" <openstack-discuss at lists.openstack.org>, "Ammad
> Syed" <syedammad83 at gmail.com>, "Vikarna Tathe" <vikarnatathe at gmail.com>
> Sent: Monday, October 25, 2021 0:04:50
> Subject: Re: [magnum] [fcos33]
>
> Hi,
>
> Try this link:
> https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64
> and search for 33.20210426.3.0.
>
> Then scroll down.
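>
> Once you have the OpenStack qcow2 image, Magnum expects it to be uploaded
> with the os_distro property set; a sketch (the file name below assumes the
> 33.20210426.3.0 OpenStack artifact, adjust to what you actually downloaded):
>
>     openstack image create fedora-coreos-33 \
>       --disk-format qcow2 --container-format bare \
>       --property os_distro='fedora-coreos' \
>       --file fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2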
>
> Regards.
>
> On Sun, Oct 24, 2021 at 17:28, Yasemin DEMİRAL (BILGEM BTE) <
> yasemin.demiral at tubitak.gov.tr> wrote:
>
>> Hi,
>>
>> How can I download fcos 33? I can't find any link for downloading it.
>>
>> Yasemin DEMİRAL
>>
>> Senior Researcher at TUBITAK BILGEM B3LAB
>>
>> Safir Cloud Scrum Master
>>
>>
>> ------------------------------
>> From: "Vikarna Tathe" <vikarnatathe at gmail.com>
>> To: "Ammad Syed" <syedammad83 at gmail.com>
>> Cc: "openstack-discuss" <openstack-discuss at lists.openstack.org>
>> Sent: Tuesday, October 19, 2021 16:23:20
>> Subject: Re: Openstack magnum
>>
>> Hi Ammad,
>> Thanks!!! It worked.
>>
>> On Tue, 19 Oct 2021 at 15:00, Vikarna Tathe <vikarnatathe at gmail.com>
>> wrote:
>>
>>> Hi Ammad,
>>> Yes, fcos34. Let me try with fcos33. Thanks
>>>
>>> On Tue, 19 Oct 2021 at 14:52, Ammad Syed <syedammad83 at gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Which fcos image are you using? It looks like you are using fcos 34,
>>>> which is currently not supported. Use fcos 33.
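>>>>
>>>> For instance, a sketch of pointing a new cluster template at an fcos 33
>>>> image (the template name, network, flavor, and driver values here are
>>>> placeholders, adjust to your deployment):
>>>>
>>>>     openstack coe cluster template create k8s-template-fcos33 \
>>>>       --image fedora-coreos-33 --coe kubernetes \
>>>>       --external-network public --flavor m1.large \
>>>>       --master-flavor m1.large --network-driver flannel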
>>>>
>>>> On Tue, Oct 19, 2021 at 2:16 PM Vikarna Tathe <vikarnatathe at gmail.com>
>>>> wrote:
>>>>
>>>>> Hi All,
>>>>> I was able to log in to the instance. I see that the kubelet service is
>>>>> in the activating state. When I checked journalctl, I found the below.
>>>>>
>>>>> Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Started Kubelet via Hyperkube (System Container).
>>>>> Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 bash[6521]: Error: statfs /sys/fs/cgroup/systemd: no such file or directory
>>>>> Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=125/n/a
>>>>> Oct 19 05:18:34 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.
>>>>> Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18.
>>>>> Oct 19 05:18:44 kubernetes-cluster-6cdrblcpckny-master-0 systemd[1]: Stopped Kubelet via Hyperkube (System Container).
>>>>>
>>>>> I executed the below command to work around this issue:
>>>>> mkdir -p /sys/fs/cgroup/systemd
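>>>>>
>>>>> (For what it's worth, a quick sketch of checking whether the host runs a
>>>>> unified cgroups-v2 hierarchy, which is the fcos 34 default and the reason
>>>>> /sys/fs/cgroup/systemd is missing:
>>>>>
>>>>>     stat -fc %T /sys/fs/cgroup/
>>>>>     # "cgroup2fs" means a v2-only host with no v1 systemd hierarchy
>>>>>     # "tmpfs" means the v1/hybrid layout where /sys/fs/cgroup/systemd exists
>>>>> )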
>>>>>
>>>>>
>>>>> Now I am getting the below error. Has anybody seen this issue?
>>>>>
>>>>> failed to get the kubelet's cgroup: mountpoint for cpu not found. Kubelet system container metrics may be missing.
>>>>> failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found.
>>>>> failed to run Kubelet: mountpoint for not found
>>>>>
>>>>> On Mon, 18 Oct 2021 at 14:09, Vikarna Tathe <vikarnatathe at gmail.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>> Hi Ammad,
>>>>>>> Thanks for responding.
>>>>>>>
>>>>>>> Yes, the instance is getting created, but I am unable to log in even
>>>>>>> though I have generated the keypair. There is no default password for
>>>>>>> this image to log in via the console.
>>>>>>>
>>>>>>> openstack server list
>>>>>>>
>>>>>>> +--------------------------------------+--------------------------------------+--------+------------------------------------+----------------------+----------+
>>>>>>> | ID                                   | Name                                 | Status | Networks                           | Image                | Flavor   |
>>>>>>> +--------------------------------------+--------------------------------------+--------+------------------------------------+----------------------+----------+
>>>>>>> | cf955a75-8cd2-4f91-a01f-677159b57cb2 | k8s-cluster-01-2nyejxo3hyvb-master-0 | ACTIVE | private1=10.100.0.39, 10.14.20.181 | fedora-coreos-latest | m1.large |
>>>>>>> +--------------------------------------+--------------------------------------+--------+------------------------------------+----------------------+----------+
>>>>>>>
>>>>>>> ssh -i id_rsa core at 10.14.20.181
>>>>>>> The authenticity of host '10.14.20.181 (10.14.20.181)' can't be established.
>>>>>>> ECDSA key fingerprint is SHA256:ykEMpwf79/zTMwcELDSI0f66Sxbri56ovGJ+RRwKXDU.
>>>>>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
>>>>>>> Warning: Permanently added '10.14.20.181' (ECDSA) to the list of known hosts.
>>>>>>> core at 10.14.20.181: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
>>>>>>>
>>>>>>> On Mon, 18 Oct 2021 at 14:02, Ammad Syed <syedammad83 at gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>> Can you check if the master server is deployed as a nova instance?
>>>>>>>> If yes, then log in to the instance and check the cloud-init and heat
>>>>>>>> agent logs to see the errors.
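>>>>>>>>
>>>>>>>> For example, a sketch (on Fedora CoreOS the heat agent typically runs
>>>>>>>> as a podman container; exact unit and container names can differ on
>>>>>>>> your image):
>>>>>>>>
>>>>>>>>     sudo journalctl -u cloud-init --no-pager
>>>>>>>>     sudo podman logs heat-container-agent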
>>>>>>>>
>>>>>>>> Ammad
>>>>>>>>
>>>>>>>> On Mon, Oct 18, 2021 at 12:03 PM Vikarna Tathe <
>>>>>>>> vikarnatathe at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hello All,
>>>>>>>>> I am trying to create a kubernetes cluster using magnum. Image:
>>>>>>>>> fedora-coreos.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The stack gets stuck in CREATE_IN_PROGRESS. See the output below.
>>>>>>>>> openstack coe cluster list
>>>>>>>>>
>>>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
>>>>>>>>> | uuid                                 | name           | keypair | node_count | master_count | status             | health_status |
>>>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
>>>>>>>>> | 9381cd12-af60-40b1-b1c4-d3dcc93e926e | k8s-cluster-01 | ctrlkey | 2          | 1            | CREATE_IN_PROGRESS | None          |
>>>>>>>>> +--------------------------------------+----------------+---------+------------+--------------+--------------------+---------------+
>>>>>>>>>
>>>>>>>>> openstack stack resource show k8s-cluster-01-2nyejxo3hyvb kube_masters
>>>>>>>>>
>>>>>>>>> +------------------------+------------------------------------------------------------------------------------------------+
>>>>>>>>> | Field                  | Value                                                                                          |
>>>>>>>>> +------------------------+------------------------------------------------------------------------------------------------+
>>>>>>>>> | attributes             | {'refs_map': None, 'removed_rsrc_list': [], 'attributes': None, 'refs': None}                  |
>>>>>>>>> | creation_time          | 2021-10-18T06:44:02Z                                                                           |
>>>>>>>>> | description            |                                                                                                |
>>>>>>>>> | links                  | [{'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17/resources/kube_masters', 'rel': 'self'}, {'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb/38eeba52-76ea-41b5-9358-6bf54ada8d17', 'rel': 'stack'}, {'href': 'http://10.14.20.159:8004/v1/f453bf8a2afa4fbba30030738f4dd216/stacks/k8s-cluster-01-2nyejxo3hyvb-kube_masters-2mhjajuxq5ut/3da2083f-0b2c-4b9d-8df5-8468e0de3028', 'rel': 'nested'}] |
>>>>>>>>> | logical_resource_id    | kube_masters                                                                                   |
>>>>>>>>> | physical_resource_id   | 3da2083f-0b2c-4b9d-8df5-8468e0de3028                                                           |
>>>>>>>>> | required_by            | ['kube_cluster_deploy', 'etcd_address_lb_switch', 'api_address_lb_switch', 'kube_cluster_config'] |
>>>>>>>>> | resource_name          | kube_masters                                                                                   |
>>>>>>>>> | resource_status        | CREATE_IN_PROGRESS                                                                             |
>>>>>>>>> | resource_status_reason | state changed                                                                                  |
>>>>>>>>> | resource_type          | OS::Heat::ResourceGroup                                                                        |
>>>>>>>>> | updated_time           | 2021-10-18T06:44:02Z                                                                           |
>>>>>>>>> +------------------------+------------------------------------------------------------------------------------------------+
>>>>>>>>>
>>>>>>>>> Vikarna
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Syed Ammad Ali
>>>>>>>>
>>>> Regards,
>>>>
>>>> Syed Ammad Ali
>>>>
>>>
>