<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hi Bharat, I am on Victoria, so that should satisfy the
      requirement:</p>
    <pre># rpm -qa | grep -i heat
openstack-heat-api-cfn-15.0.0-1.el8.noarch
openstack-heat-api-15.0.0-1.el8.noarch
python3-heatclient-2.2.1-2.el8.noarch
openstack-heat-common-15.0.0-1.el8.noarch
openstack-heat-engine-15.0.0-1.el8.noarch
openstack-heat-ui-4.0.0-1.el8.noarch</pre>
    <p>So from what I can see, the stack's OS::Heat::SoftwareConfig
      step is the one that gets the data, right?</p>
    <pre>agent_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: ungrouped
    config:
      list_join:
        - "\n"
        -
          - str_replace:
              template: {get_file: user_data.json}
              params:
                __HOSTNAME__: {get_param: name}
                __SSH_KEY_VALUE__: {get_param: ssh_public_key}
                __OPENSTACK_CA__: {get_param: openstack_ca}
                __CONTAINER_INFRA_PREFIX__:</pre>
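    <p>To reason about what that resource renders, here is a rough
      Python sketch (my own illustration, not Heat's actual code) of
      what the list_join and str_replace intrinsics do to the
      user_data.json template; the template string and values below are
      hypothetical:</p>

```python
# Illustrative sketch of Heat's str_replace and list_join intrinsics
# (not the real Heat implementation; template and values are made up).

def str_replace(template: str, params: dict) -> str:
    """Substitute every occurrence of each param key in the template."""
    for key, value in params.items():
        template = template.replace(key, value)
    return template

def list_join(delimiter: str, parts: list) -> str:
    """Join the rendered parts with the given delimiter."""
    return delimiter.join(parts)

# Hypothetical stand-in for {get_file: user_data.json}
user_data_template = '{"hostname": "__HOSTNAME__", "ssh_key": "__SSH_KEY_VALUE__"}'

rendered = list_join("\n", [
    str_replace(user_data_template, {
        "__HOSTNAME__": "kube-master-0",
        "__SSH_KEY_VALUE__": "ssh-rsa AAAA...example",
    }),
])
print(rendered)
# -> {"hostname": "kube-master-0", "ssh_key": "ssh-rsa AAAA...example"}
```

    <p>The SoftwareConfig resource stores the rendered string, and the
      Nova server picks it up through user_data: {get_resource:
      agent_config}.</p>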
    <p>In the stack I can see the step below, which corresponds to
      the agent_config above and has just been initialized:<br>
    </p>
    <table id="resources" class="table table-striped datatable
      tablesorter tablesorter-default">
      <tbody>
        <tr class="ajax-update status_up odd"
          data-object-id="kube_cluster_config"
          data-update-interval="2500"
data-update-url="/dashboard/project/stacks/stack/84330fda-efe6-4b94-96da-b836b60e2586/?action=row_update&table=resources&obj_id=kube_cluster_config"
          id="resources__row__kube_cluster_config">
          <td class="sortable anchor normal_column"><a
href="https://portal.zylacloud.com/dashboard/project/stacks/stack/84330fda-efe6-4b94-96da-b836b60e2586/kube_cluster_config/">kube_cluster_config</a></td>
          <td class="sortable normal_column"><br>
          </td>
          <td class="sortable normal_column"> OS::Heat::SoftwareConfig </td>
          <td class="sortable normal_column"> 46 minutes </td>
          <td class="sortable normal_column"> Init Complete </td>
          <td class="sortable normal_column"><br>
          </td>
        </tr>
      </tbody>
    </table>
    <p>My questions here would be:</p>
    <p>1- is this file the user_data?</p>
    <p>2- at which step is this data applied to the instance? From the
      Fedora docs (
<a class="moz-txt-link-freetext" href="https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview">https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/#_ignition_overview</a>
      ) this step seems to happen at the initial stages of the boot
      process.</p>
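    <p>For reference, the ssh key that Ignition applies at first boot
      sits under passwd.users in the Ignition config; a minimal sketch
      of that fragment (values hypothetical, following the Fedora
      CoreOS Ignition v3 spec) looks like:</p>

```json
{
  "ignition": { "version": "3.1.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-rsa AAAA...example user@host"]
      }
    ]
  }
}
```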
    <p>Thanks in advance for any assistance<br>
    </p>
    <div class="moz-cite-prefix">On 07/04/2021 22:54, Bharat Kunwar
      wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:4A94086F-F79A-4EC4-8E3F-A6AE8EDF4C16@stackhpc.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      The ssh key gets injected via Ignition, which is why it’s not
      present in the HOT template. You need a minimum of the Train
      release of Heat for this to work, however.<br>
      <br>
      <div dir="ltr">Sent from my iPhone</div>
      <div dir="ltr"><br>
        <blockquote type="cite">On 7 Apr 2021, at 21:45, Luke Camilleri
          <a class="moz-txt-link-rfc2396E" href="mailto:luke.camilleri@zylacomputing.com"><luke.camilleri@zylacomputing.com></a> wrote:<br>
          <br>
        </blockquote>
      </div>
      <blockquote type="cite">
        <div dir="ltr">
          <meta http-equiv="Content-Type" content="text/html;
            charset=UTF-8">
          <p>Hello Ammad and thanks for your assistance. I followed the
            guide and it has all the details and steps except for one
            thing: the ssh key is not being passed over to the instance.
            If I deploy an instance from that image and pass the ssh
            key, it works fine, but if I use the image as part of the
            HOT it lists the key as "-".</p>
          <p>Did you have this issue by any chance? I never thought I
            would be asking this question, as it is a basic thing, but I
            find it very strange that this is not working. I tried to
            pass the ssh key either in the template or in the cluster
            creation command, but for both options the Key Name metadata
            option for the instance remains "None" when the instance is
            deployed.</p>
          <p>I then went on and checked the YAML file that the resource
            uses to load the parameters;
/usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml
            has the below YAML configuration:</p>
          <pre>kube-master:
  type: OS::Nova::Server
  condition: image_based
  properties:
    name: {get_param: name}
    image: {get_param: server_image}
    flavor: {get_param: master_flavor}
    MISSING ----->   key_name: {get_param: ssh_key_name}
    user_data_format: SOFTWARE_CONFIG
    software_config_transport: POLL_SERVER_HEAT
    user_data: {get_resource: agent_config}
    networks:
      - port: {get_resource: kube_master_eth0}
    scheduler_hints: { group: { get_param: nodes_server_group_id }}
    availability_zone: {get_param: availability_zone}</pre>
          <pre>kube-master-bfv:
  type: OS::Nova::Server
  condition: volume_based
  properties:
    name: {get_param: name}
    flavor: {get_param: master_flavor}
    MISSING ----->   key_name: {get_param: ssh_key_name}
    user_data_format: SOFTWARE_CONFIG
    software_config_transport: POLL_SERVER_HEAT
    user_data: {get_resource: agent_config}
    networks:
      - port: {get_resource: kube_master_eth0}
    scheduler_hints: { group: { get_param: nodes_server_group_id }}
    availability_zone: {get_param: availability_zone}
    block_device_mapping_v2:
      - boot_index: 0
        volume_id: {get_resource: kube_node_volume}</pre>
          <p>If I add the lines marked as missing, then everything
            works well and the key is actually injected into the
            kubemaster. Has anyone else had this issue?<br>
          </p>
          <div class="moz-cite-prefix">On 07/04/2021 10:24, Ammad Syed
            wrote:<br>
          </div>
          <blockquote type="cite"
cite="mid:CAKOoz51UbO07fjGsGOiWfnH+JEc++UHkZN=4AS18PKbryrYB1Q@mail.gmail.com">
            <meta http-equiv="content-type" content="text/html;
              charset=UTF-8">
            <div dir="auto">Hi Luke,</div>
            <div dir="auto"><br>
            </div>
            <div dir="auto">You may refer to the guide below for Magnum
              installation and its template: </div>
            <div dir="auto"><br>
            </div>
            <div><a
href="https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10"
                moz-do-not-send="true">https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=10</a></div>
            <div dir="auto"><br>
            </div>
            <div dir="auto">It worked pretty well for me.</div>
            <div dir="auto"><br>
            </div>
            <div dir="auto">- Ammad<br>
              <div class="gmail_quote" dir="auto">
                <div dir="ltr" class="gmail_attr">On Wed, Apr 7, 2021 at
                  5:02 AM Luke Camilleri <<a
                    href="mailto:luke.camilleri@zylacomputing.com"
                    moz-do-not-send="true">luke.camilleri@zylacomputing.com</a>>
                  wrote:<br>
                </div>
                <blockquote class="gmail_quote" style="margin:0px 0px
                  0px
0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)">
                  <div>
                    <p>Thanks for your quick reply. Do you have a
                      download link for that image, as I cannot find an
                      archive for the 32 release?</p>
                    <p>As for the image upload into OpenStack, you still
                      use the fedora-atomic property, right, so it is
                      available for COE deployments?<br>
                    </p>
                  </div>
                  <div>
                    <div>On 07/04/2021 00:03, feilong wrote:<br>
                    </div>
                    <blockquote type="cite">
                      <p>Hi Luke,</p>
                      <p>The Fedora Atomic driver has been deprecated
                        for a while, since Fedora Atomic has been
                        deprecated upstream. For now, I would suggest
                        using Fedora CoreOS <span>32.20201104.3.0</span></p>
                      <p>The latest version of Fedora CoreOS is 33.xxx,
                        but there are some issues when booting based on
                        my testing, see <a
href="https://github.com/coreos/fedora-coreos-tracker/issues/735"
                          target="_blank"
                          moz-do-not-send="true">https://github.com/coreos/fedora-coreos-tracker/issues/735</a></p>
                      <p>Please feel free to let me know if you have any
                        question about using Magnum. We're using
                        stable/victoria on our public cloud and it works
                        very well. I can share our public templates if
                        you want. Cheers.</p>
                      <div>On 7/04/21 9:51 am, Luke Camilleri wrote:<br>
                      </div>
                      <blockquote type="cite">
                        <div>
                          <p>We have installed Magnum following the
                            installation guide here <a
href="https://docs.openstack.org/magnum/victoria/install/install-rdo.html"
                              target="_blank" moz-do-not-send="true">https://docs.openstack.org/magnum/victoria/install/install-rdo.html</a>
                            and the process was quite smooth, but we
                            have been having some issues with the
                            deployment of the clusters.</p>
                          <p>The image being used as per the
                            documentation is <a
href="https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64"
                              target="_blank" moz-do-not-send="true">https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64</a></p>
                          <p>Our first issue was that podman was being
                            used even though we specified
                            use_podman=false (since the image above did
                            not include podman), but this was resulting
                            in a timeout and the cluster would fail to
                            deploy. We then installed podman in the
                            image and the cluster progressed a bit
                            further:<br>
                          </p>
                          <pre>+ echo 'WARNING Attempt 60: Trying to install kubectl. Sleeping 5s'
+ sleep 5s
+ ssh -F /srv/magnum/.ssh/config root@localhost '/usr/bin/podman run --entrypoint /bin/bash --name install-kubectl --net host --privileged --rm --user root --volume /srv/magnum/bin:/host/srv/magnum/bin k8s.gcr.io/hyperkube:v1.15.7 -c '\''cp /usr/local/bin/kubectl /host/srv/magnum/bin/kubectl'\'''
bash: /usr/bin/podman: No such file or directory
ERROR Unable to install kubectl. Abort.
+ i=61
+ '[' 61 -gt 60 ']'
+ echo 'ERROR Unable to install kubectl. Abort.'
+ exit 1</pre>
                          <p>The cluster is now failing at
                            "kube_cluster_deploy", and when checking
                            the logs on the master node we noticed the
                            following in the log files:</p>
                          <pre>Starting to run kube-apiserver-to-kubelet-role
Waiting for Kubernetes API...
+ echo 'Waiting for Kubernetes API...'
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = '' ']'
+ sleep 5</pre>
                          <p>This is because the Kubernetes API server
                            is not installed either. I have noticed some
                            scripts that should handle the installation,
                            but I would like to know if anyone here has
                            had similar issues with a clean Victoria
                            installation.</p>
                          <p>Also, do we have to install any packages in
                            the Fedora Atomic image file, or should the
                            installation requirements be part of the
                            stack?</p></div>
                        <div><br>
                        </div>
                        <div>Thanks in advance for any assistance</div>
                        <br>
                      </blockquote>
                      <pre cols="72" style="font-family:monospace">-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: <a href="mailto:flwang@catalyst.net.nz" target="_blank" style="font-family:monospace" moz-do-not-send="true">flwang@catalyst.net.nz</a>
Catalyst IT Limited
Level 6, Catalyst House, <a href="https://www.google.com/maps/search/150+Willis+Street,+Wellington?entry=gmail&source=g" style="font-family:monospace" moz-do-not-send="true">150 Willis Street, Wellington</a>
------------------------------------------------------ </pre>
                    </blockquote>
                  </div>
                </blockquote>
              </div>
            </div>
            -- <br>
            <div dir="ltr" class="gmail_signature"
              data-smartmail="gmail_signature">Regards,
              <div><br>
              </div>
              <div><br>
              </div>
              <div>Syed Ammad Ali</div>
            </div>
          </blockquote>
        </div>
      </blockquote>
    </blockquote>
  </body>
</html>