Dear Ammad,

I was able to get the communication working and the worker nodes were created as well, but the cluster still failed.

I logged in to the master node and found no errors there, but the worker node shows the error below when I run systemctl status heat-container-agent:

Aug 25 17:52:24 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:52:55 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
[... the same message repeats roughly every 31 seconds ...]
Aug 25 17:57:02 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
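
Let me know if more detail from the agent would help; for example, I can run the following on the worker node (assuming the container name matches the systemd unit name shown above):

    sudo journalctl -u heat-container-agent --no-pager | tail -n 50
    sudo podman logs heat-container-agent
    ls -la /var/lib/os-collect-config/
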
Regards

Tony Karera




On Wed, Aug 25, 2021 at 10:38 AM Ammad Syed <syedammad83@gmail.com> wrote:
Yes, the Keystone, Heat, Barbican and Magnum public endpoints must be reachable from the master and worker nodes.

You can use the guide below for reference as well.

https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=11
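
As a quick check, you can list the public endpoints and then probe each one from a master or worker node. The host and ports below are only the defaults; substitute the URLs that "openstack endpoint list" actually reports:

    openstack endpoint list --interface public

    # run these from a master/worker node; "controller" is a placeholder hostname
    curl -sI http://controller:5000/v3/    # Keystone
    curl -sI http://controller:8004/       # Heat
    curl -sI http://controller:9511/v1/    # Magnum
    curl -sI http://controller:9311/       # Barbican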

Ammad

On Wed, Aug 25, 2021 at 12:08 PM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,

I have deployed using the given image, but I think there is an issue with Keystone, as per the screenshot below from when I checked the master node's heat-container-agent status.

[inline screenshot: image.png]
Regards

Tony Karera




On Wed, Aug 25, 2021 at 8:28 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,

I actually first used that one and it was also getting stuck.

I will try this one again and update you with the logs, though.


Regards

Tony Karera




On Wed, Aug 25, 2021 at 8:25 AM Ammad Syed <syedammad83@gmail.com> wrote:

On Wed, Aug 25, 2021 at 11:20 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Sir,

Attached is the log file.

Regards

Tony Karera




On Wed, Aug 25, 2021 at 7:31 AM Ammad Syed <syedammad83@gmail.com> wrote:
Hi Karera,

Can you share the full log file with us?

Ammad

On Wed, Aug 25, 2021 at 9:42 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Guys,

Thanks a lot for the help, but unfortunately I don't see much information in the log file indicating a failure, apart from the one message that keeps repeating.

[inline screenshot: image.png]

Regards

Tony Karera




On Tue, Aug 24, 2021 at 8:12 PM Mohammed Naser <mnaser@vexxhost.com> wrote:
Also check out /var/log/cloud-init.log :)
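
For example (and cloud-init-output.log too, if the image has it):

    sudo tail -n 100 /var/log/cloud-init.log /var/log/cloud-init-output.log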

On Tue, Aug 24, 2021 at 1:39 PM Ammad Syed <syedammad83@gmail.com> wrote:
>
> Then check journalctl -xe or the status of the heat agent service.
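>
> For example, on the master node (heat-container-agent is the unit name used elsewhere in this thread):
>
>     sudo systemctl status heat-container-agent
>     sudo journalctl -xe -u heat-container-agent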
>
>
> Ammad
> On Tue, Aug 24, 2021 at 10:36 PM Karera Tony <tonykarera@gmail.com> wrote:
>>
>> Hello Ammad,
>>
>> There is no directory or log relevant to heat in the /var/log directory.
>>
>> Regards
>>
>> Tony Karera
>>
>>
>>
>>
>> On Tue, Aug 24, 2021 at 12:43 PM Ammad Syed <syedammad83@gmail.com> wrote:
>>>
>>> Hi Karera,
>>>
>>> Log in to the master node and check the heat agent logs under /var/log. There must be something there showing where cluster creation is getting stuck.
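>>>
>>> For example (exact paths can vary by image, so treat these as a starting point):
>>>
>>>     sudo ls /var/log/heat-config/heat-config-script/
>>>     sudo tail -n 100 /var/log/heat-config/heat-config-script/*.log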
>>>
>>> Ammad
>>>
>>> On Tue, Aug 24, 2021 at 3:41 PM Karera Tony <tonykarera@gmail.com> wrote:
>>>>
>>>> Hello Ammad,
>>>>
>>>> I did as explained and it worked up to a certain point. The master node was created, but the cluster remained in 'creation in progress' for over an hour and then failed with the error below:
>>>>
>>>> Stack faults were as follows:
>>>>
>>>> default-master: Timed out
>>>> default-worker: Timed out
>>>>
>>>>
>>>> Regards
>>>>
>>>> Tony Karera
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Aug 24, 2021 at 9:25 AM Ammad Syed <syedammad83@gmail.com> wrote:
>>>>>
>>>>> Hi Tony,
>>>>>
>>>>> You can try creating your private vxlan network prior to deploying the cluster, and explicitly create the cluster in that vxlan network.
>>>>>
>>>>> --fixed-network private --fixed-subnet private-subnet
>>>>>
>>>>> You can specify the above while creating a cluster.
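>>>>>
>>>>> For example (the network and cluster names and the CIDR are placeholders, and this assumes your tenant networks default to vxlan):
>>>>>
>>>>>     openstack network create private
>>>>>     openstack subnet create private-subnet --network private --subnet-range 10.0.0.0/24
>>>>>     openstack coe cluster create mycluster \
>>>>>         --cluster-template mytemplate \
>>>>>         --fixed-network private --fixed-subnet private-subnet \
>>>>>         --master-count 1 --node-count 2 \
>>>>>         --keypair mykey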
>>>>>
>>>>> Ammad
>>>>>
>>>>> On Tue, Aug 24, 2021 at 11:59 AM Karera Tony <tonykarera@gmail.com> wrote:
>>>>>>
>>>>>> Hello Mohamed,
>>>>>>
>>>>>> I think the Kubernetes cluster itself is OK, but when I deploy it, it creates a fixed network using vlan, which I am not using for internal networks.
>>>>>>
>>>>>> When I create a vxlan network and use it in the cluster creation, it fails. Is there a trick around this?
>>>>>> Regards
>>>>>>
>>>>>> Tony Karera
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Aug 20, 2021 at 9:00 AM feilong <feilong@catalyst.net.nz> wrote:
>>>>>>>
>>>>>>> Oooh, are you using Swarm? I don't think that driver is well maintained; I haven't seen any interest in it in the 4 years since I got involved in the Magnum project. If there is no specific reason, I would suggest going with k8s.
>>>>>>>
>>>>>>>
>>>>>>> On 20/08/21 5:08 pm, Mohammed Naser wrote:
>>>>>>>
>>>>>>> Please keep replies on list so others can help too.
>>>>>>>
>>>>>>> I don’t know how well tested the Swarm driver is at this point. I believe most Magnum users are using it for Kubernetes only.
>>>>>>>
>>>>>>> On Fri, Aug 20, 2021 at 1:05 AM Karera Tony <tonykarera@gmail.com> wrote:
>>>>>>>>
>>>>>>>> Hello Naser,
>>>>>>>>
>>>>>>>> Please check below.
>>>>>>>>
>>>>>>>> openstack coe cluster template create swarm-cluster-template1 \
>>>>>>>>                      --image fedora-atomic-latest \
>>>>>>>>                      --external-network External_1700 \
>>>>>>>>                      --dns-nameserver 8.8.8.8 \
>>>>>>>>                      --master-flavor m1.small \
>>>>>>>>                      --flavor m1.small \
>>>>>>>>                      --coe swarm
>>>>>>>> openstack coe cluster create swarm-cluster \
>>>>>>>>                         --cluster-template swarm-cluster-template1 \
>>>>>>>>                         --master-count 1 \
>>>>>>>>                         --node-count 2 \
>>>>>>>>                         --keypair Newkey
>>>>>>>>
>>>>>>>> Regards
>>>>>>>>
>>>>>>>> Tony Karera
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Aug 20, 2021 at 7:03 AM Mohammed Naser <mnaser@vexxhost.com> wrote:
>>>>>>>>>
>>>>>>>>> What does your cluster template and cluster create command look like?
>>>>>>>>>
>>>>>>>>> On Fri, Aug 20, 2021 at 12:59 AM Karera Tony <tonykarera@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Hello Wang,
>>>>>>>>>>
>>>>>>>>>> Thanks for the feedback.
>>>>>>>>>>
>>>>>>>>>> Unfortunately, Octavia is not deployed in my environment (at least not yet), and LB is not enabled on either the cluster template or the cluster itself.
>>>>>>>>>>
>>>>>>>>>> Regards
>>>>>>>>>>
>>>>>>>>>> Tony Karera
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Aug 20, 2021 at 6:52 AM feilong <feilong@catalyst.net.nz> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hi Karera,
>>>>>>>>>>>
>>>>>>>>>>> It's probably a bug. If you do have Octavia deployed, can you try not disabling the LB and see how it goes?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 20/08/21 4:18 pm, Karera Tony wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hello Team,
>>>>>>>>>>>
>>>>>>>>>>> I deployed OpenStack Wallaby on Ubuntu 20.04 and enabled Magnum; however, when I create a cluster I get the error below.
>>>>>>>>>>>
>>>>>>>>>>> Status Reason:
>>>>>>>>>>> ERROR: Property error: : resources.api_lb.properties: : Property allowed_cidrs not assigned
>>>>>>>>>>>
>>>>>>>>>>> Can someone advise on where I could be going wrong? By the way, I disabled the load balancer while creating the cluster.
>>>>>>>>>>>
>>>>>>>>>>> Regards
>>>>>>>>>>>
>>>>>>>>>>> Tony Karera
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Cheers & Best regards,
>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>> Feilong Wang (王飞龙) (he/him)
>>>>>>>>>>> Head of Research & Development
>>>>>>>>>>>
>>>>>>>>>>> Catalyst Cloud
>>>>>>>>>>> Aotearoa's own
>>>>>>>>>>>
>>>>>>>>>>> Mob: +64 21 0832 6348 | www.catalystcloud.nz
>>>>>>>>>>> Level 6, 150 Willis Street, Wellington 6011, New Zealand
>>>>>>>>>>>
>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Mohammed Naser
>>>>>>>>> VEXXHOST, Inc.
>>>>>>>
>>>>>>> --
>>>>>>> Mohammed Naser
>>>>>>> VEXXHOST, Inc.
>>>>>>>
>>>>>>> --
>>>>>>> Cheers & Best regards,
>>>>>>> ------------------------------------------------------------------------------
>>>>>>> Feilong Wang (王飞龙) (he/him)
>>>>>>> Head of Research & Development
>>>>>>>
>>>>>>> Catalyst Cloud
>>>>>>> Aotearoa's own
>>>>>>>
>>>>>>> Mob: +64 21 0832 6348 | www.catalystcloud.nz
>>>>>>> Level 6, 150 Willis Street, Wellington 6011, New Zealand
>>>>>>>
>>>>>>> ------------------------------------------------------------------------------
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Regards,
>>>>>
>>>>>
>>>>> Syed Ammad Ali
>>>
>>>
>>>
>>> --
>>> Regards,
>>>
>>>
>>> Syed Ammad Ali
>
> --
> Regards,
>
>
> Syed Ammad Ali



--
Mohammed Naser
VEXXHOST, Inc.


--
Regards,


Syed Ammad Ali

