[victoria][magnum]fedora-atomic-27 image
Luke Camilleri
luke.camilleri at zylacomputing.com
Thu Apr 8 11:06:22 UTC 2021
Hi Bharat, in fact I had noticed that property when creating the image
in OpenStack and did some more research about this.

I now have 2 images (atomic and coreos) and have set the different
os_distro properties during the image creation process.

The documentation from Victoria to latest has also changed:

Victoria (Kubernetes cluster creation): create a cluster template for a
Kubernetes cluster using the fedora-atomic-latest image.

Latest: create a cluster template for a Kubernetes cluster using the
fedora-coreos-latest image.

So in the end it seems that the CoreOS image is now the one suggested
for Kubernetes cluster creation. The bootstrapping process seems to be
handled by Ignition, which also injects the ssh keys (I need to find
out in more detail how the Ignition mechanism works to better
understand this process).
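
For reference, a minimal sketch of setting the property at image upload
time (the image name and file name below are just placeholders):

openstack image create fedora-coreos-latest \
  --disk-format qcow2 \
  --container-format bare \
  --property os_distro='fedora-coreos' \
  --file fedora-coreos-32.20201104.3.0-openstack.x86_64.qcow2
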
Thanks
On 08/04/2021 08:19, Bharat Kunwar wrote:
> As in, do you have that label set in the image property?
>
> Sent from my iPhone
>
>> On 8 Apr 2021, at 07:05, Bharat Kunwar <bharat at stackhpc.com> wrote:
>>
>> Is your os_distro=fedora-coreos or fedora-atomic?
>>
>> Sent from my iPhone
>>
>>> On 7 Apr 2021, at 22:12, Luke Camilleri
>>> <luke.camilleri at zylacomputing.com> wrote:
>>>
>>>
>>>
>>> Hi Bharat, I am on Victoria so that should satisfy the requirement:
>>>
>>> # rpm -qa | grep -i heat
>>> openstack-heat-api-cfn-15.0.0-1.el8.noarch
>>> openstack-heat-api-15.0.0-1.el8.noarch
>>> python3-heatclient-2.2.1-2.el8.noarch
>>> openstack-heat-common-15.0.0-1.el8.noarch
>>> openstack-heat-engine-15.0.0-1.el8.noarch
>>> openstack-heat-ui-4.0.0-1.el8.noarch
>>>
>>> So from what I can see, the OS::Heat::SoftwareConfig resource below is
>>> the step that gets the data, right?
>>>
>>> agent_config:
>>>   type: OS::Heat::SoftwareConfig
>>>   properties:
>>>     group: ungrouped
>>>     config:
>>>       list_join:
>>>         - "\n"
>>>         -
>>>           - str_replace:
>>>               template: {get_file: user_data.json}
>>>               params:
>>>                 __HOSTNAME__: {get_param: name}
>>>                 __SSH_KEY_VALUE__: {get_param: ssh_public_key}
>>>                 __OPENSTACK_CA__: {get_param: openstack_ca}
>>>                 __CONTAINER_INFRA_PREFIX__:
>>>
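>>> As a side note, a quick way to look at the rendered config on the Heat
>>> side (a sketch, assuming the Heat OSC plugin is installed and using a
>>> placeholder config ID) would be something like:
>>>
>>> openstack software config list
>>> openstack software config show <config-id> -c config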
>>>
>>> In the stack I can see the resource below, which corresponds to the
>>> agent_config above and has only just been initialized:
>>>
>>> kube_cluster_config
>>> <https://portal.zylacloud.com/dashboard/project/stacks/stack/84330fda-efe6-4b94-96da-b836b60e2586/kube_cluster_config/>
>>> OS::Heat::SoftwareConfig - 46 minutes - Init Complete
>>>
>>> My questions here would be:
>>>
>>> 1- Is this file the user_data?
>>>
>>> 2- At which step is this data applied to the instance? From the Fedora
>>> docs (
>>> https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview
>>> ) this step seems to happen at the initial stages of the boot process.
>>>
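>>> If it helps, one way to confirm what actually lands on the instance (a
>>> sketch, assuming the Nova metadata service is reachable from the node)
>>> is to fetch the user_data from inside the booted master; Ignition reads
>>> this same document from the metadata service or config drive on first
>>> boot:
>>>
>>> curl http://169.254.169.254/openstack/latest/user_data
>>>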
>>> Thanks in advance for any assistance
>>>
>>> On 07/04/2021 22:54, Bharat Kunwar wrote:
>>>> The ssh key gets injected via Ignition, which is why it's not present
>>>> in the HOT template. You need at least the Train release of Heat for
>>>> this to work, however.
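>>>>
>>>> One way to check that the key is reaching the stack at all (a sketch
>>>> with a placeholder stack name) is to look at the stack parameters
>>>> rather than the Nova key_name:
>>>>
>>>> openstack stack show <cluster-stack-name> -c parameters -f json | grep -i ssh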
>>>>
>>>> Sent from my iPhone
>>>>
>>>>> On 7 Apr 2021, at 21:45, Luke Camilleri
>>>>> <luke.camilleri at zylacomputing.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>> Hello Ammad and thanks for your assistance. I followed the guide
>>>>> and it has all the details and steps except for one thing: the ssh
>>>>> key is not being passed over to the instance. If I deploy an
>>>>> instance from that image and pass the ssh key it works fine, but if
>>>>> I use the image as part of the HOT it lists the key as "-".
>>>>>
>>>>> Did you have this issue by any chance? I never thought I would be
>>>>> asking this question, as it is a basic thing, but I find it very
>>>>> strange that this is not working. I tried to pass the ssh key in
>>>>> either the template or the cluster creation command, but in both
>>>>> cases the Key Name metadata for the instance remains "None" when
>>>>> the instance is deployed.
>>>>>
>>>>> I then went on and checked the YAML file that the resource uses to
>>>>> load the parameters.
>>>>> /usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml
>>>>> has the configuration below:
>>>>>
>>>>> kube-master:
>>>>>   type: OS::Nova::Server
>>>>>   condition: image_based
>>>>>   properties:
>>>>>     name: {get_param: name}
>>>>>     image: {get_param: server_image}
>>>>>     flavor: {get_param: master_flavor}
>>>>>     MISSING -----> key_name: {get_param: ssh_key_name}
>>>>>     user_data_format: SOFTWARE_CONFIG
>>>>>     software_config_transport: POLL_SERVER_HEAT
>>>>>     user_data: {get_resource: agent_config}
>>>>>     networks:
>>>>>       - port: {get_resource: kube_master_eth0}
>>>>>     scheduler_hints: { group: { get_param: nodes_server_group_id }}
>>>>>     availability_zone: {get_param: availability_zone}
>>>>>
>>>>> kube-master-bfv:
>>>>>   type: OS::Nova::Server
>>>>>   condition: volume_based
>>>>>   properties:
>>>>>     name: {get_param: name}
>>>>>     flavor: {get_param: master_flavor}
>>>>>     MISSING -----> key_name: {get_param: ssh_key_name}
>>>>>     user_data_format: SOFTWARE_CONFIG
>>>>>     software_config_transport: POLL_SERVER_HEAT
>>>>>     user_data: {get_resource: agent_config}
>>>>>     networks:
>>>>>       - port: {get_resource: kube_master_eth0}
>>>>>     scheduler_hints: { group: { get_param: nodes_server_group_id }}
>>>>>     availability_zone: {get_param: availability_zone}
>>>>>     block_device_mapping_v2:
>>>>>       - boot_index: 0
>>>>>         volume_id: {get_resource: kube_node_volume}
>>>>>
>>>>> If I add the lines marked as missing, then everything works well and
>>>>> the key is actually injected into the kube master. Has anyone else
>>>>> had this issue?
>>>>>
>>>>> On 07/04/2021 10:24, Ammad Syed wrote:
>>>>>> Hi Luke,
>>>>>>
>>>>>> You may refer to the guide below for the Magnum installation and its
>>>>>> template:
>>>>>>
>>>>>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10
>>>>>>
>>>>>> It worked pretty well for me.
>>>>>>
>>>>>> - Ammad
>>>>>> On Wed, Apr 7, 2021 at 5:02 AM Luke Camilleri
>>>>>> <luke.camilleri at zylacomputing.com> wrote:
>>>>>>
>>>>>> Thanks for your quick reply. Do you have a download link for that
>>>>>> image, as I cannot find an archive for the 32 release?
>>>>>>
>>>>>> As for the image upload into OpenStack, do you still set the
>>>>>> fedora-atomic os_distro property so that the image is available for
>>>>>> COE deployments?
>>>>>> On 07/04/2021 00:03, feilong wrote:
>>>>>>>
>>>>>>> Hi Luke,
>>>>>>>
>>>>>>> The Fedora Atomic driver has been deprecated for a while now, since
>>>>>>> Fedora Atomic itself has been deprecated upstream. For now, I would
>>>>>>> suggest using Fedora CoreOS 32.20201104.3.0.
>>>>>>>
>>>>>>> The latest version of Fedora CoreOS is 33.xxx, but there are some
>>>>>>> issues when booting based on my testing, see
>>>>>>> https://github.com/coreos/fedora-coreos-tracker/issues/735
>>>>>>>
>>>>>>> Please feel free to let me know if you have any questions about
>>>>>>> using Magnum. We're using stable/victoria on our public cloud and
>>>>>>> it works very well. I can share our public templates if you want.
>>>>>>> Cheers.
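>>>>>>>
>>>>>>> As a rough idea, a cluster template for the CoreOS image would look
>>>>>>> something like this (all names and flavors below are placeholders):
>>>>>>>
>>>>>>> openstack coe cluster template create k8s-coreos-template \
>>>>>>>   --image fedora-coreos-32 \
>>>>>>>   --keypair mykey \
>>>>>>>   --external-network public \
>>>>>>>   --master-flavor m1.medium \
>>>>>>>   --flavor m1.medium \
>>>>>>>   --network-driver calico \
>>>>>>>   --coe kubernetes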
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 7/04/21 9:51 am, Luke Camilleri wrote:
>>>>>>>>
>>>>>>>> We have installed Magnum following the installation guide here
>>>>>>>> https://docs.openstack.org/magnum/victoria/install/install-rdo.html
>>>>>>>> and the process was quite smooth, but we have been having some
>>>>>>>> issues with the deployment of the clusters.
>>>>>>>>
>>>>>>>> The image being used as per the documentation is
>>>>>>>> https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64
>>>>>>>>
>>>>>>>> Our first issue was that podman was being used even though we
>>>>>>>> specified the use_podman=false label (since the image above does
>>>>>>>> not include podman); this resulted in a timeout and the cluster
>>>>>>>> failing to deploy. We then installed podman in the image and the
>>>>>>>> cluster progressed a bit further:
>>>>>>>>
>>>>>>>> + echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'
>>>>>>>> + sleep 5s
>>>>>>>> + ssh -F /srv/magnum/.ssh/config root@localhost '/usr/bin/podman run
>>>>>>>>   --entrypoint /bin/bash --name install-kubectl --net host
>>>>>>>>   --privileged --rm --user root
>>>>>>>>   --volume /srv/magnum/bin:/host/srv/magnum/bin
>>>>>>>>   k8s.gcr.io/hyperkube:v1.15.7 -c '\''cp /usr/local/bin/kubectl
>>>>>>>>   /host/srv/magnum/bin/kubectl'\'''
>>>>>>>> bash: /usr/bin/podman: No such file or directory
>>>>>>>> ERROR Unable to install kubectl. Abort.
>>>>>>>> + i=61
>>>>>>>> + '[' 61 -gt 60 ']'
>>>>>>>> + echo 'ERROR Unable to install kubectl. Abort.'
>>>>>>>> + exit 1
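>>>>>>>>
>>>>>>>> For reference, this is roughly how the use_podman label can be
>>>>>>>> passed at cluster template creation time (all names below are
>>>>>>>> placeholders):
>>>>>>>>
>>>>>>>> openstack coe cluster template create k8s-atomic-template \
>>>>>>>>   --image Fedora-Atomic-27 \
>>>>>>>>   --external-network public \
>>>>>>>>   --coe kubernetes \
>>>>>>>>   --labels use_podman=false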
>>>>>>>>
>>>>>>>> The cluster is now failing at "kube_cluster_deploy", and when
>>>>>>>> checking the logs on the master node we noticed the following in
>>>>>>>> the log files:
>>>>>>>>
>>>>>>>> Starting to run kube-apiserver-to-kubelet-role
>>>>>>>> Waiting for Kubernetes API...
>>>>>>>> + echo 'Waiting for Kubernetes API...'
>>>>>>>> ++ curl --silent http://127.0.0.1:8080/healthz
>>>>>>>> + '[' ok = '' ']'
>>>>>>>> + sleep 5
>>>>>>>>
>>>>>>>> This is because the Kubernetes API server is not installed
>>>>>>>> either. I have noticed some scripts that should handle the
>>>>>>>> installation, but I would like to know if anyone here has had
>>>>>>>> similar issues with a clean Victoria installation.
>>>>>>>>
>>>>>>>> Also, should we have to install any packages in the Fedora
>>>>>>>> Atomic image, or should the installation requirements be handled
>>>>>>>> as part of the stack?
>>>>>>>>
>>>>>>>> Thanks in advance for any assistance
>>>>>>>>
>>>>>>> --
>>>>>>> Cheers & Best regards,
>>>>>>> Feilong Wang (王飞龙)
>>>>>>> ------------------------------------------------------
>>>>>>> Senior Cloud Software Engineer
>>>>>>> Tel: +64-48032246
>>>>>>> Email: flwang at catalyst.net.nz
>>>>>>> Catalyst IT Limited
>>>>>>> Level 6, Catalyst House, 150 Willis Street, Wellington
>>>>>>> ------------------------------------------------------
>>>>>>
>>>>>> --
>>>>>> Regards,
>>>>>>
>>>>>>
>>>>>> Syed Ammad Ali