Wallaby Magnum Issue

Bharat Kunwar bharat at stackhpc.com
Tue Aug 24 10:45:34 UTC 2021


Were the master and worker nodes created? Did you log into the nodes and look at the heat-container-agent logs under /var/log/heat-config/ ?
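For example (the IP and login user below are illustrative; Fedora Atomic images typically use the "fedora" login user, Fedora CoreOS uses "core"):

    ssh fedora@<master-node-ip>
    # Watch the heat agent apply the software deployments:
    sudo journalctl -u heat-container-agent -f
    # Inspect the per-script logs the agent typically writes under /var/log/heat-config/:
    sudo ls -R /var/log/heat-config/
    sudo tail -n 50 /var/log/heat-config/heat-config-script/*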

> On 24 Aug 2021, at 11:41, Karera Tony <tonykarera at gmail.com> wrote:
> 
> Hello Ammad,
> 
> I did as explained and it worked up to a certain point: the master node was created, but the cluster remained in create-in-progress for over an hour and then failed with the error below.
> 
> Stack faults were as follows:
> 
> default-master: Timed out
> default-worker: Timed out
> 
> 
> Regards
> 
> Tony Karera
> 
> 
> 
> 
> On Tue, Aug 24, 2021 at 9:25 AM Ammad Syed <syedammad83 at gmail.com> wrote:
> Hi Tony,
> 
> You can try creating your private VXLAN network prior to deploying the cluster, and then explicitly create your cluster on that VXLAN network.
> 
> --fixed-network private --fixed-subnet private-subnet
> 
> You can specify the above when creating a cluster.
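> For example (the network name, CIDR, cluster name and keypair here are placeholders; setting an explicit provider network type usually requires admin credentials, otherwise the tenant default type is used):
> 
>     openstack network create private --provider-network-type vxlan
>     openstack subnet create private-subnet --network private \
>         --subnet-range 10.0.0.0/24 --dns-nameserver 8.8.8.8
>     openstack coe cluster create mycluster \
>         --cluster-template <your-template> \
>         --fixed-network private --fixed-subnet private-subnet \
>         --keypair <your-keypair>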
> 
> Ammad
> 
> On Tue, Aug 24, 2021 at 11:59 AM Karera Tony <tonykarera at gmail.com> wrote:
> Hello Mohamed,
> 
> I think the Kubernetes cluster is OK, but when I deploy it, it creates a fixed network using VLAN, which I am not using for internal networks.
> 
> When I create a VXLAN network and use it in the cluster creation, it fails. Is there a trick around this?
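> (The provider type of the network Magnum creates can be confirmed with something like the following; the network name is a placeholder, and the provider attributes are usually only visible with admin credentials:)
> 
>     openstack network show <cluster-fixed-network> -c "provider:network_type"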
> Regards
> 
> Tony Karera
> 
> 
> 
> 
> On Fri, Aug 20, 2021 at 9:00 AM feilong <feilong at catalyst.net.nz> wrote:
> Oooh, are you using Swarm? I don't think that driver is well maintained; I haven't seen any interest in it in the 4 years since I got involved in the Magnum project. If there is no specific reason, I would suggest going for k8s.
> 
> 
> 
> On 20/08/21 5:08 pm, Mohammed Naser wrote:
>> Please keep replies on list so others can help too. 
>> 
>> I don’t know how well tested the Swarm driver is at this point. I believe most Magnum users are using it for Kubernetes only. 
>> 
>> On Fri, Aug 20, 2021 at 1:05 AM Karera Tony <tonykarera at gmail.com> wrote:
>> Hello Naser,
>> 
>> Please check below.
>> 
>> openstack coe cluster template create swarm-cluster-template1 \
>>     --image fedora-atomic-latest \
>>     --external-network External_1700 \
>>     --dns-nameserver 8.8.8.8 \
>>     --master-flavor m1.small \
>>     --flavor m1.small \
>>     --coe swarm
>> 
>> openstack coe cluster create swarm-cluster \
>>     --cluster-template swarm-cluster-template1 \
>>     --master-count 1 \
>>     --node-count 2 \
>>     --keypair Newkey
>> 
>> Regards
>> 
>> Tony Karera
>> 
>> 
>> 
>> 
>> On Fri, Aug 20, 2021 at 7:03 AM Mohammed Naser <mnaser at vexxhost.com> wrote:
>> What does your cluster template and cluster create command look like?
>> 
>> On Fri, Aug 20, 2021 at 12:59 AM Karera Tony <tonykarera at gmail.com> wrote:
>> Hello Wang,
>> 
>> Thanks for the feedback.
>> 
>> Unfortunately, Octavia is not deployed in my environment (at least not yet), and the LB is not enabled on either the cluster template or the cluster itself.
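>> That can be double-checked on the template with something like (template name from my earlier command):
>> 
>>     openstack coe cluster template show swarm-cluster-template1 -c master_lb_enabled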
>> 
>> Regards
>> 
>> Tony Karera
>> 
>> 
>> 
>> 
>> On Fri, Aug 20, 2021 at 6:52 AM feilong <feilong at catalyst.net.nz> wrote:
>> Hi Karera,
>> 
>> It's probably a bug. If you do have Octavia deployed, can you try not disabling the LB and see how it goes?
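>> For example, roughly reusing your earlier options under a new template name (the name is illustrative):
>> 
>>     openstack coe cluster template create swarm-cluster-template2 \
>>         --image fedora-atomic-latest \
>>         --external-network External_1700 \
>>         --dns-nameserver 8.8.8.8 \
>>         --master-flavor m1.small \
>>         --flavor m1.small \
>>         --master-lb-enabled \
>>         --coe swarm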
>> 
>> 
>> 
>> On 20/08/21 4:18 pm, Karera Tony wrote:
>>> Hello Team,
>>> 
>>> I deployed OpenStack Wallaby on Ubuntu 20 and enabled Magnum; however, when I create a cluster I get the error below.
>>> 
>>> Status reason:
>>> ERROR: Property error: : resources.api_lb.properties: : Property allowed_cidrs not assigned
>>> 
>>> Can someone advise on where I could be wrong? By the way, I disabled the load balancer while creating the cluster.
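>>> (For reference, the resource-level error can be pulled from Heat with something like the following; the stack name is a placeholder taken from "openstack stack list":)
>>> 
>>>     openstack stack list
>>>     openstack stack resource show <stack-name> api_lb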
>>> 
>>> Regards
>>> 
>>> Tony Karera
>>> 
>>> 
>> -- 
>> Cheers & Best regards,
>> ------------------------------------------------------------------------------
>> Feilong Wang (王飞龙) (he/him)
>> Head of Research & Development
>> 
>> Catalyst Cloud
>> Aotearoa's own
>> 
>> Mob: +64 21 0832 6348 | www.catalystcloud.nz
>> Level 6, 150 Willis Street, Wellington 6011, New Zealand
>> 
>> ------------------------------------------------------------------------------
>> -- 
>> Mohammed Naser
>> VEXXHOST, Inc.
> -- 
> Cheers & Best regards,
> ------------------------------------------------------------------------------
> Feilong Wang (王飞龙) (he/him)
> Head of Research & Development
> 
> Catalyst Cloud
> Aotearoa's own
> 
> Mob: +64 21 0832 6348 | www.catalystcloud.nz
> Level 6, 150 Willis Street, Wellington 6011, New Zealand
> 
> ------------------------------------------------------------------------------
> 
> 
> -- 
> Regards,
> 
> 
> Syed Ammad Ali
