Re: Wallaby Magnum Issue
Please keep replies on list so others can help too. I don't know how well tested the Swarm driver is at this point. I believe most Magnum users are using it for Kubernetes only.

On Fri, Aug 20, 2021 at 1:05 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Naser,
Please check below.
openstack coe cluster template create swarm-cluster-template1 \
  --image fedora-atomic-latest \
  --external-network External_1700 \
  --dns-nameserver 8.8.8.8 \
  --master-flavor m1.small \
  --flavor m1.small \
  --coe swarm

openstack coe cluster create swarm-cluster \
  --cluster-template swarm-cluster-template \
  --master-count 1 \
  --node-count 2 \
  --keypair Newkey
Regards
Tony Karera
On Fri, Aug 20, 2021 at 7:03 AM Mohammed Naser <mnaser@vexxhost.com> wrote:
What does your cluster template and cluster create command look like?
On Fri, Aug 20, 2021 at 12:59 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Wang,
Thanks for the feedback.
Unfortunately Octavia is not deployed in my environment (at least not yet) and LB is not enabled on either the cluster template or the cluster itself.
Regards
Tony Karera
On Fri, Aug 20, 2021 at 6:52 AM feilong <feilong@catalyst.net.nz> wrote:
Hi Karera,
It's probably a bug. If you do have Octavia deployed, can you try to not disable the LB and see how it goes?
On 20/08/21 4:18 pm, Karera Tony wrote:
Hello Team,
I deployed OpenStack Wallaby on Ubuntu 20.04 and enabled Magnum; however, when I create a cluster I get the error below.
Status Reason:
ERROR: Property error: : resources.api_lb.properties: : Property allowed_cidrs not assigned

Can someone advise on where I could be going wrong? By the way, I disabled the load balancer while creating the cluster.
Regards
Tony Karera
-- Mohammed Naser VEXXHOST, Inc.
Oooh, are you using Swarm? I don't think that driver is well maintained. I haven't seen any interest in it in the last 4 years that I have been involved in the Magnum project. If there is no specific reason, I would suggest going with k8s.
-- Cheers & Best regards,
Feilong Wang (王飞龙)
Catalyst Cloud
Hello Mohamed,

I think the Kubernetes cluster is OK, but when I deploy it, it creates a fixed network using VLAN, which I am not using for internal networks. When I create a VXLAN network and use it in the cluster creation, it fails. Is there a trick around this?

Regards

Tony Karera
Hi Tony,

You can try creating your private VXLAN network prior to deploying the cluster and explicitly create your cluster on that network:

--fixed-network private --fixed-subnet private-subnet

You can specify the above while creating a cluster.

Ammad
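For reference, a full sequence using those flags might look like the sketch below; the network and subnet names follow the example above, while the subnet range, cluster name, template and keypair are placeholders to substitute with your own:

# create the tenant network first (assumes your default tenant network type is vxlan)
openstack network create private
openstack subnet create private-subnet \
  --network private \
  --subnet-range 10.0.10.0/24 \
  --dns-nameserver 8.8.8.8

# then point the cluster at it
openstack coe cluster create k8s-cluster \
  --cluster-template k8s-cluster-template \
  --fixed-network private \
  --fixed-subnet private-subnet \
  --master-count 1 \
  --node-count 2 \
  --keypair Newkey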
Hello Ammad,

I did as explained and it worked up to a certain point. The master node was created, but the cluster remained in create-in-progress for over an hour and then failed with the error below.

Stack faults as follows:
default-master: Timed out
default-worker: Timed out

Regards

Tony Karera
Hi Karera,

Log in to the master node and check the heat agent logs in /var/log. The cluster must be getting stuck somewhere during creation.

Ammad
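A minimal way to do that, assuming a typical Magnum image (the heat agent usually runs as the heat-container-agent systemd unit, and the default login user is "fedora" on Fedora Atomic or "core" on Fedora CoreOS):

# from a host that can reach the master node's floating IP
ssh fedora@<master-floating-ip>

# follow the heat agent output
sudo journalctl -u heat-container-agent -f

# per-script deployment logs, if present on the image
sudo ls -l /var/log/heat-config/heat-config-script/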
Hello Ammad,

There is no directory or log relevant to heat in the /var/log directory.

Regards

Tony Karera
Then check journalctl -xe or the status of the heat agent service.

Ammad
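For example, on the master node (the unit name is an assumption and may differ between images):

sudo journalctl -xe
sudo systemctl status heat-container-agent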
Also check out /var/log/cloud-init.log :)
-- Mohammed Naser VEXXHOST, Inc.
Hello guys,

Thanks a lot for the help, but unfortunately I don't see much information in the log file indicating a failure, apart from the entry that keeps appearing.

[image: image.png]

Regards

Tony Karera
Hi Karera,

Can you share the full log file with us?

Ammad
Hello Sir,

Attached is the log file.

Regards

Tony Karera
It seems from the logs that you are using Fedora Atomic. Can you try with the FCOS 32 image instead?

https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.202010...

Ammad
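For anyone following along, a minimal sketch of registering an FCOS image and pointing a Kubernetes cluster template at it; the image file name, network name and flavors below are illustrative, not taken from this thread:

# Upload the decompressed FCOS qcow2 to Glance; Magnum selects its driver from the os_distro property.
openstack image create fedora-coreos-32 \
  --disk-format qcow2 \
  --container-format bare \
  --property os_distro='fedora-coreos' \
  --file fedora-coreos-32-openstack.qcow2

# Reference that image in a Kubernetes cluster template.
openstack coe cluster template create k8s-template-fcos \
  --image fedora-coreos-32 \
  --external-network External_1700 \
  --dns-nameserver 8.8.8.8 \
  --master-flavor m1.small \
  --flavor m1.small \
  --network-driver flannel \
  --coe kubernetes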
Hello Ammad,

I actually used that image first, and it was also getting stuck. I will try it again and update you with the logs.

Regards

Tony Karera
Hello Ammad,

I have deployed using the given image, but I think there is an issue with Keystone, as per the screenshot below taken when I checked the heat-container-agent status on the master node.

[image: image.png]

Regards

Tony Karera
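For reference, the same output can be captured as plain text on the master node instead of a screenshot; a sketch using the systemd unit name mentioned above:

# Current status of the agent container.
sudo systemctl status heat-container-agent --no-pager

# Last 200 lines of its journal, which should show the Keystone error in full.
sudo journalctl -u heat-container-agent --no-pager -n 200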
Yes, the Keystone, Heat, Barbican and Magnum public endpoints must be reachable from the master and worker nodes.

You can use the guide below for reference as well.

https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=11

Ammad
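A quick way to sanity-check that reachability (a sketch; the Keystone URL below is illustrative and should be replaced with the endpoints your deployment actually reports):

# On the controller, list the public endpoints the nodes will need.
openstack endpoint list --interface public -c 'Service Name' -c URL

# From a master or worker node, confirm each endpoint answers; Keystone shown as an example.
curl -sS -o /dev/null -w '%{http_code}\n' https://keystone.example.com:5000/v3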
Dear Ammad,

I was able to make the communication work, and the worker nodes were created as well, but the cluster still failed.

I logged in to the master node and found no error there, but below is the output I get when I run systemctl status heat-container-agent on the worker node.

Aug 25 17:52:24 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:52:55 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:26 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:57 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:28 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:59 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:55:29 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:00 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:31 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:57:02 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping

Regards

Tony Karera
I’d check the logs under /var/log/heat-config.

Sent from my iPhone
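For completeness, a sketch of pulling those logs on the master node; the directory layout shown is the typical heat-config one and may differ slightly between releases:

# List the per-deployment logs written by the heat-config hooks.
sudo ls -lR /var/log/heat-config/

# Tail the script-hook output, which is where failed software deployments usually show up.
sudo tail -n 100 /var/log/heat-config/heat-config-script/*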
Dear Sir,

You are right. I am getting this error:

kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Regards

Tony Karera
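That particular error usually just means kubectl has no kubeconfig and is falling back to localhost:8080; it does not by itself prove the API server is down. Once the cluster reaches a healthy state, a sketch of fetching credentials from a client machine (the cluster name below is guessed from the node names in the earlier logs and may differ):

# Generate a kubeconfig for the cluster and point kubectl at it.
openstack coe cluster config cluster1 --dir ~/cluster1
export KUBECONFIG=~/cluster1/config

kubectl get nodes
kubectl get --raw='/healthz'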
I’d check the logs under /var/log/heat-config.
Sent from my iPhone
On 25 Aug 2021, at 19:39, Karera Tony <tonykarera@gmail.com> wrote:
Dear Ammad,

I was able to make the communication work and the worker nodes were created as well, but the cluster still failed.

I logged in to the master node and there was no error there, but below are the errors I get when I run systemctl status heat-container-agent on the worker node.

Aug 25 17:52:24 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:52:55 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:26 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:57 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:28 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:59 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:55:29 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:00 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:31 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:57:02 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping

Regards
Tony Karera
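As a side note, systemctl status only shows the last few lines; the full history of that agent on the node can be pulled with journalctl (nothing Magnum-specific assumed here, heat-container-agent is simply the unit name from the output above):

sudo journalctl -u heat-container-agent --no-pager | tail -n 200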
On Wed, Aug 25, 2021 at 10:38 AM Ammad Syed <syedammad83@gmail.com> wrote:
Yes, the Keystone, Heat, Barbican and Magnum public endpoints must be reachable from the master and worker nodes.

You can use the guide below for reference as well.
https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=11
Ammad
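One way to verify that from a master or worker node is to curl each public endpoint and check that you get any HTTP response at all rather than a timeout (the URLs below are placeholders; take the real ones from "openstack endpoint list --interface public" on the controller, and the ports shown are only the usual defaults):

curl -sk -m 5 https://keystone.example.com:5000/v3/ | head -c 200
curl -sk -m 5 https://heat.example.com:8004/ | head -c 200
curl -sk -m 5 https://magnum.example.com:9511/ | head -c 200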
On Wed, Aug 25, 2021 at 12:08 PM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
I have deployed using the given image, but I think there is an issue with Keystone, as per the screenshot below taken when I checked the master node's heat-container-agent status.
<image.png>
Regards
Tony Karera
On Wed, Aug 25, 2021 at 8:28 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
I actually first used that one and it was also getting stuck.
I will try this one again and update you with the logs, though.
Regards
Tony Karera
On Wed, Aug 25, 2021 at 8:25 AM Ammad Syed <syedammad83@gmail.com> wrote:
It seems from the logs that you are using Fedora Atomic. Can you try with the FCOS 32 image?
https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.202010...
Ammad
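For reference, a rough sketch of registering an FCOS image so that Magnum selects the Fedora CoreOS driver (the image and file names are placeholders; os_distro=fedora-coreos is the image property the driver keys on):

openstack image create fedora-coreos-32 \
  --disk-format qcow2 \
  --container-format bare \
  --file fedora-coreos-32.qcow2 \
  --property os_distro='fedora-coreos' \
  --public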
On Wed, Aug 25, 2021 at 11:20 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Sir,
Attached is the log file.
Regards
Tony Karera
Hello Guys,

Attached are the two logs from the /var/log/heat-config/heat-config-script directory.

Regards

Tony Karera
I assume these are from the master nodes? Can you share the logs from shortly after creation rather than from when it times out? I think some logs are missing from the top.

Sent from my iPhone
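For reference, one way to catch the state early is to watch the cluster and its Heat stack right after starting the create, for example (the cluster name and stack ID are placeholders):

openstack coe cluster show k8s-cluster3 -c status -c status_reason
openstack stack resource list -n 2 <heat-stack-id>

That usually shows which nested resource is stuck well before the overall timeout hits.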
Here is the beginning of the log:

Starting to run kube-apiserver-to-kubelet-role
+ echo 'Waiting for Kubernetes API...'
Waiting for Kubernetes API...
++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ '[' ok = '' ']'

Regards

Tony Karera
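That error means kubectl on the master has nothing answering it locally, i.e. the script is still waiting for the API server to come up. A quick check of whether the API server ever started (6443 is only the usual secure port, so treat it as an assumption) would be:

sudo ss -tlnp | grep -E ':6443|:8080'
curl -ks https://127.0.0.1:6443/healthz

If the API server is up, curl returns some HTTP response instead of connection refused, and the script's kubectl loop should eventually see "ok".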
The output in the logfile 29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd is incomplete. It should contain the installation and configuration of many other things that are missing here. It also looks like hyperkube is not installed. Can you check the output of the "podman ps" command on the master nodes?

Ammad
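For reference, a sketch of that check on a master node (the container names mentioned are only what one would loosely expect on a Fedora CoreOS based Magnum master, not a guaranteed list):

sudo podman ps -a
sudo podman ps -a | grep -i -E 'kube|heat'

If only heat-container-agent shows up and none of the kube-* containers do, the master configuration script most likely never got as far as installing the control plane.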
Here is the beginning of the Log
Starting to run kube-apiserver-to-kubelet-role + echo 'Waiting for Kubernetes API...' Waiting for Kubernetes API... ++ kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port? + '[' ok = '' ']'
Regards
Tony Karera
On Thu, Aug 26, 2021 at 7:53 AM Bharat Kunwar <bharat@stackhpc.com> wrote:
I assume these are from the master nodes? Can you share the logs shortly after creation rather than when it times out? I think there is some missing logs from the top.
Sent from my iPhone
On 26 Aug 2021, at 06:14, Karera Tony <tonykarera@gmail.com> wrote:
Hello Guys,
Attached are the two logs from the /var/log/heat-config/heat-config-script directory Regards
Tony Karera
On Thu, Aug 26, 2021 at 5:59 AM Karera Tony <tonykarera@gmail.com> wrote:
Dear Sir,
You are right.
I am getting this error
kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port?
Regards
Tony Karera
On Wed, Aug 25, 2021 at 10:55 PM Bharat Kunwar <bharat@stackhpc.com> wrote:
I’d check the logs under /var/log/heat-config.
Sent from my iPhone
On 25 Aug 2021, at 19:39, Karera Tony <tonykarera@gmail.com> wrote:
DeaR Ammad,
I was able to make the communication work and the Worker nodes were created as well but the cluster failed.
I logged in to the master node and there was no error but below are the error when I run systemctl status heat-container-agent on the worker noed.
Aug 25 17:52:24 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Aug 25 17:52:55 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Aug 25 17:53:26 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Aug 25 17:53:57 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Aug 25 17:54:28 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Aug 25 17:54:59 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Aug 25 17:55:29 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Aug 25 17:56:00 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Aug 25 17:56:31 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Aug 25 17:57:02 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping Regards
Tony Karera
On Wed, Aug 25, 2021 at 10:38 AM Ammad Syed <syedammad83@gmail.com> wrote:
Yes, keystone, Heat, Barbicane and magnum public endpoints must be reachable from master and worker nodes.
You can use below guide for the reference as well.
https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=11
Ammad
On Wed, Aug 25, 2021 at 12:08 PM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
I have deployed using the given image but I think there is an issue with keystone as per the screen shot below when I checked the master node's heat-container-agent status
<image.png>
Regards
Tony Karera
On Wed, Aug 25, 2021 at 8:28 AM Karera Tony <tonykarera@gmail.com> wrote:
Dear Ammad,

Below is the output of podman ps:

CONTAINER ID  IMAGE                                                             COMMAND               CREATED       STATUS           PORTS  NAMES
319fbebc2f50  docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1  /usr/bin/start-he...  23 hours ago  Up 23 hours ago         heat-container-agent
[root@k8s-cluster-2-4faiphvzsmzu-master-0 core]#

Regards

Tony Karera

On Thu, Aug 26, 2021 at 9:54 AM Ammad Syed <syedammad83@gmail.com> wrote:
The output in the log file 29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd is incomplete.

The installation and configuration of many other components should follow in it but is missing. It also looks like hyperkube is not installed.

Can you check the output of the "podman ps" command on the master nodes?
Ammad
On Thu, Aug 26, 2021 at 11:30 AM Karera Tony <tonykarera@gmail.com> wrote:
Here is the beginning of the Log
Starting to run kube-apiserver-to-kubelet-role
+ echo 'Waiting for Kubernetes API...'
Waiting for Kubernetes API...
++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ '[' ok = '' ']'
Regards
Tony Karera
On Thu, Aug 26, 2021 at 7:53 AM Bharat Kunwar <bharat@stackhpc.com> wrote:
I assume these are from the master nodes? Can you share the logs from shortly after creation rather than when it times out? I think some logs are missing from the top.
Sent from my iPhone
On 26 Aug 2021, at 06:14, Karera Tony <tonykarera@gmail.com> wrote:
Hello Guys,
Attached are the two logs from the /var/log/heat-config/heat-config-script directory.

Regards
Tony Karera
On Thu, Aug 26, 2021 at 5:59 AM Karera Tony <tonykarera@gmail.com> wrote:
Dear Sir,
You are right.
I am getting this error
kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Regards
Tony Karera
On Wed, Aug 25, 2021 at 10:55 PM Bharat Kunwar <bharat@stackhpc.com> wrote:
I’d check the logs under /var/log/heat-config.
Sent from my iPhone
On 25 Aug 2021, at 19:39, Karera Tony <tonykarera@gmail.com> wrote:
Dear Ammad,
I was able to make the communication work and the worker nodes were created as well, but the cluster failed.

I logged in to the master node and there was no error, but below are the errors I get when I run systemctl status heat-container-agent on the worker node.
Aug 25 17:52:24 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:52:55 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:26 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:57 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:28 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:59 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:55:29 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:00 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:31 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:57:02 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping

Regards
Tony Karera
On Wed, Aug 25, 2021 at 10:38 AM Ammad Syed <syedammad83@gmail.com> wrote:
Yes, the Keystone, Heat, Barbican and Magnum public endpoints must be reachable from the master and worker nodes.

You can use the guide below for reference as well.
https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=11
Ammad
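(A quick way to confirm that reachability from inside a node is to curl each public endpoint. This is only a sketch; the host and ports below are placeholders, take the real URLs from "openstack endpoint list --interface public":)

    # run from a master or worker node
    curl -k https://CONTROLLER_VIP:5000/v3    # keystone
    curl -k https://CONTROLLER_VIP:8004/      # heat
    curl -k https://CONTROLLER_VIP:9511/      # magnum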
Your hyperkube services are not started.

You need to check hyperkube services.

Ammad

On Fri, Aug 27, 2021 at 10:35 AM Karera Tony <tonykarera@gmail.com> wrote:
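(On a Fedora CoreOS based Magnum master the Kubernetes control-plane services typically run as podman containers managed by systemd, so a rough way to check them is something like the following; the exact unit names may differ between Magnum versions:)

    # list kube-related systemd units and their state
    sudo systemctl list-units --all | grep -i kube

    # check individual services, e.g. the API server and kubelet
    sudo systemctl status kube-apiserver kubelet

    # a healthy master should show more containers than just heat-container-agent
    sudo podman ps -a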
Dear Ammad,

Sorry to bother you again, but I have failed to find the right command to use to check.

Every kubectl command I run on either the master or the worker node returns the error below:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Regards

Tony Karera

On Fri, Aug 27, 2021 at 9:15 AM Ammad Syed <syedammad83@gmail.com> wrote:
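(The localhost:8080 message usually only means kubectl was run without a kubeconfig and fell back to its insecure local default, so by itself it does not prove the API server is down. A sketch of a more meaningful check, assuming Magnum wrote an admin kubeconfig on the master; the path below is an assumption and may differ:)

    # point kubectl at the admin kubeconfig instead of the localhost:8080 default
    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes

    # if the kube-apiserver service/container never started, this will still fail,
    # and the service itself is what needs to be investigated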
What version of Fedora CoreOS are you using?

On Tue, Aug 31, 2021 at 4:52 PM Karera Tony <tonykarera@gmail.com> wrote:
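(On the node itself the image version can be read with standard Fedora CoreOS commands, e.g.:)

    # OS name and version string
    cat /etc/os-release

    # deployed ostree version
    rpm-ostree status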
Dear Ammad,
Sorry to bother you again but I have failed to get the right command to use to check.
Every Kubectl command I run on either the master or worker. The connection to the server localhost:8080 was refused - did you specify the right host or port? I get the error below.
Regards
Tony Karera
On Fri, Aug 27, 2021 at 9:15 AM Ammad Syed <syedammad83@gmail.com> wrote:
Your hyperkube services are not started.
You need to check hyperkube services.
Ammad
On Fri, Aug 27, 2021 at 10:35 AM Karera Tony <tonykarera@gmail.com> wrote:
Dear Ammad,
Below is the output of podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 319fbebc2f50 docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1 /usr/bin/start-he... 23 hours ago Up 23 hours ago heat-container-agent [root@k8s-cluster-2-4faiphvzsmzu-master-0 core]#
Regards
Tony Karera
On Thu, Aug 26, 2021 at 9:54 AM Ammad Syed <syedammad83@gmail.com> wrote:
The output in logfile 29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd is incomplete.
The log should show the installation and configuration of many other components, but those are missing. It also looks like hyperkube was never installed.
Can you check the output of the "podman ps" command on the master nodes?
Ammad
On Thu, Aug 26, 2021 at 11:30 AM Karera Tony <tonykarera@gmail.com> wrote:
Here is the beginning of the Log
Starting to run kube-apiserver-to-kubelet-role
+ echo 'Waiting for Kubernetes API...'
Waiting for Kubernetes API...
++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ '[' ok = '' ']'
Regards
Tony Karera
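For context: kubectl falls back to http://localhost:8080 when it has no kubeconfig, and this wait loop simply polls the local API endpoint, so "connection refused" here almost always means the API server never came up rather than a wrong address. A hedged way to double-check (the kubeconfig path is an assumption and may differ on your image):

sudo ss -ltnp | grep -E ':8080|:6443'                                      # is anything listening on the API ports?
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw=/healthz    # hypothetical path; adjust to your image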
On Thu, Aug 26, 2021 at 7:53 AM Bharat Kunwar <bharat@stackhpc.com> wrote:
I assume these are from the master nodes? Can you share the logs from shortly after creation rather than from when it times out? I think some logs are missing from the top.
Sent from my iPhone
On 26 Aug 2021, at 06:14, Karera Tony <tonykarera@gmail.com> wrote:
Hello Guys,
Attached are the two logs from the /var/log/heat-config/heat-config-script directory. Regards
Tony Karera
<6fca39b1-8eda-4786-8424-e5b04434cce7-k8s-cluster3-dcu52bgzpbuu-kube_cluster_config-aht4it6p7wfk.log>
-- Regards,
Syed Ammad Ali
-- Ionut Biru - https://fleio.com
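To answer the version question at the top of this message, a couple of ways to check which Fedora CoreOS build is actually in play (the image name below is a placeholder):

cat /etc/os-release                                             # on a booted node: shows the Fedora CoreOS version string
rpm-ostree status                                               # on a booted node: shows the exact FCOS deployment
openstack image show fedora-coreos-32 -f value -c properties    # from the OpenStack side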
Hi Karera,
Given that you can see the heat-container-agent container in podman, you should be able to see logs under the path below:
[root@k8s-100-eh6s5l6d73ie-master-0 heat-config-script]# ls
c64e6ac2-db7e-4786-a387-1d45359812b8-k8s-100-eh6s5l6d73ie-kube_cluster_config-uxpsylgnayjy.log
fa1f6247-51a8-4e70-befa-cbc61ee99e59-k8s-100-eh6s5l6d73ie-kube_masters-kmi423lgbjw3-0-oii7uzemq7aj-master_config-dhfam54i456j.log
[root@k8s-100-eh6s5l6d73ie-master-0 heat-config-script]# pwd
/var/log/heat-config/heat-config-script
If you cannot see that path and those logs, it means heat-container-agent didn't work well: check the service status with systemctl and the log with journalctl. From there you should be able to see why the cluster failed.
On 1/09/21 1:41 am, Karera Tony wrote:
Dear Ammad,
Sorry to bother you again but I have failed to get the right command to use to check.
Every Kubectl command I run on either the master or worker. The connection to the server localhost:8080 was refused - did you specify the right host or port? I get the error below.
Regards
Tony Karera
On Fri, Aug 27, 2021 at 9:15 AM Ammad Syed <syedammad83@gmail.com <mailto:syedammad83@gmail.com>> wrote:
Your hyperkube services are not started.
You need to check hyperkube services.
Ammad
On Fri, Aug 27, 2021 at 10:35 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Dear Ammad,
Below is the output of podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 319fbebc2f50 docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1 <http://docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1> /usr/bin/start-he... 23 hours ago Up 23 hours ago heat-container-agent [root@k8s-cluster-2-4faiphvzsmzu-master-0 core]#
Regards
Tony Karera
On Thu, Aug 26, 2021 at 9:54 AM Ammad Syed <syedammad83@gmail.com <mailto:syedammad83@gmail.com>> wrote:
The output in logfile 29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd is incomplete.
There should be the installation and configuration of many other things that are missing. Also it looks that hyperkube is not installed.
Can you check the response of "podman ps" command on master nodes.
Ammad
On Thu, Aug 26, 2021 at 11:30 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Here is the beginning of the Log
Starting to run kube-apiserver-to-kubelet-role + echo 'Waiting for Kubernetes API...' Waiting for Kubernetes API... ++ kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port? + '[' ok = '' ']'
Regards
Tony Karera
On Thu, Aug 26, 2021 at 7:53 AM Bharat Kunwar <bharat@stackhpc.com <mailto:bharat@stackhpc.com>> wrote:
I assume these are from the master nodes? Can you share the logs shortly after creation rather than when it times out? I think there is some missing logs from the top.
Sent from my iPhone
On 26 Aug 2021, at 06:14, Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Hello Guys,
Attached are the two logs from the /var/log/heat-config/heat-config-script directory Regards
Tony Karera
On Thu, Aug 26, 2021 at 5:59 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Dear Sir,
You are right.
I am getting this error
kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port?
Regards
Tony Karera
On Wed, Aug 25, 2021 at 10:55 PM Bharat Kunwar <bharat@stackhpc.com <mailto:bharat@stackhpc.com>> wrote:
I’d check the logs under /var/log/heat-config.
Sent from my iPhone
On 25 Aug 2021, at 19:39, Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Dear Ammad,
I was able to make the communication work and the worker nodes were created as well, but the cluster failed.
I logged in to the master node and there was no error, but below are the errors I get when I run systemctl status heat-container-agent on the worker node.
Aug 25 17:52:24 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:52:55 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:26 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:57 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:28 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:59 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:55:29 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:00 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:31 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:57:02 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Regards
Tony Karera
On Wed, Aug 25, 2021 at 10:38 AM Ammad Syed <syedammad83@gmail.com <mailto:syedammad83@gmail.com>> wrote:
Yes, keystone, Heat, Barbicane and magnum public endpoints must be reachable from master and worker nodes.
You can use below guide for the reference as well.
https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=11
Ammad
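A quick reachability sketch for the point above; the hostnames are placeholders for your own public endpoints (5000, 8004 and 9511 are the default Keystone, Heat and Magnum API ports):

curl -sk https://keystone.example.com:5000/v3/ | head -n 3
curl -sk https://heat.example.com:8004/ | head -n 3
curl -sk https://magnum.example.com:9511/ | head -n 3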
On Wed, Aug 25, 2021 at 12:08 PM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Hello Ammad,
I have deployed using the given image but I think there is an issue with keystone as per the screen shot below when I checked the master node's heat-container-agent status
<image.png>
Regards
Tony Karera
On Wed, Aug 25, 2021 at 8:28 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Hello Ammad,
I actually first used that one and it was also getting stuck.
I will try this one again and update you with the Logs though.
Regards
Tony Karera
On Wed, Aug 25, 2021 at 8:25 AM Ammad Syed <syedammad83@gmail.com <mailto:syedammad83@gmail.com>> wrote:
It seems from the logs that you are using fedora atomic. Can you try with FCOS 32 image.
https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.202010...
Ammad
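If you go that route, note that Magnum picks the Fedora CoreOS driver based on the image's os_distro property, so the Glance image needs to be tagged accordingly; a sketch with placeholder image and file names:

openstack image create fedora-coreos-32 \
  --disk-format qcow2 --container-format bare \
  --property os_distro='fedora-coreos' \
  --file fedora-coreos-32-openstack.x86_64.qcow2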
On Wed, Aug 25, 2021 at 11:20 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Hello Sir,
Attached is the Log file
Regards
Tony Karera
On Wed, Aug 25, 2021 at 7:31 AM Ammad Syed <syedammad83@gmail.com <mailto:syedammad83@gmail.com>> wrote:
Hi Karera,
Can you share us the full log file.
Ammad
On Wed, Aug 25, 2021 at 9:42 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Hello Guys,
Thanks a lot for the help but unfortunately I dont see much information in the log file indicating a failure apart from the log that keeps appearing.
<image.png>
Regards
Tony Karera
On Tue, Aug 24, 2021 at 8:12 PM Mohammed Naser <mnaser@vexxhost.com <mailto:mnaser@vexxhost.com>> wrote:
Also check out /var/log/cloud-init.log :)
-- Mohammed Naser VEXXHOST, Inc.
-- Regards,
Syed Ammad Ali
<29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd.log> <6fca39b1-8eda-4786-8424-e5b04434cce7-k8s-cluster3-dcu52bgzpbuu-kube_cluster_config-aht4it6p7wfk.log>
-- Regards,
Syed Ammad Ali
-- Cheers & Best regards, ------------------------------------------------------------------------------ Feilong Wang (王飞龙) (he/him) Head of Research & Development Catalyst Cloud Aotearoa's own Mob: +64 21 0832 6348 | www.catalystcloud.nz Level 6, 150 Willis Street, Wellington 6011, New Zealand CONFIDENTIALITY NOTICE: This email is intended for the named recipients only. It may contain privileged, confidential or copyright information. If you are not the named recipient, any use, reliance upon, disclosure or copying of this email or its attachments is unauthorised. If you have received this email in error, please reply via email or call +64 21 0832 6348. ------------------------------------------------------------------------------
Hey Feilong,
Thanks a lot. The services are fine and indeed the log files are there in the directory [/var/log/heat-config/heat-config-script].
After checking, the master log is fine, but the cluster log has the error below, as I mentioned earlier:
Starting to run kube-apiserver-to-kubelet-role
+ echo 'Waiting for Kubernetes API...'
Waiting for Kubernetes API...
++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ '[' ok = '' ']'
Regards
Tony Karera
On Tue, Aug 31, 2021 at 9:52 PM feilong <feilong@catalyst.net.nz> wrote:
Hi Karea,
Given you can see heat-container-agent container from podman which means you should be able to see logs from below path:
[root@k8s-100-eh6s5l6d73ie-master-0 heat-config-script]# ls
c64e6ac2-db7e-4786-a387-1d45359812b8-k8s-100-eh6s5l6d73ie-kube_cluster_config-uxpsylgnayjy.log
fa1f6247-51a8-4e70-befa-cbc61ee99e59-k8s-100-eh6s5l6d73ie-kube_masters-kmi423lgbjw3-0-oii7uzemq7aj-master_config-dhfam54i456j.log [root@k8s-100-eh6s5l6d73ie-master-0 heat-config-script]# pwd /var/log/heat-config/heat-config-script
If you can not see the path and the log, then it means the heat-container-agent didn't work well. You need to check the service status by systemctl command and check the log by journalctl. From there, you should be able to see why the cluster failed.
On 1/09/21 1:41 am, Karera Tony wrote:
Dear Ammad,
Sorry to bother you again but I have failed to get the right command to use to check.
Every Kubectl command I run on either the master or worker. The connection to the server localhost:8080 was refused - did you specify the right host or port? I get the error below.
Regards
Tony Karera
On Fri, Aug 27, 2021 at 9:15 AM Ammad Syed <syedammad83@gmail.com> wrote:
Your hyperkube services are not started.
You need to check hyperkube services.
Ammad
On Fri, Aug 27, 2021 at 10:35 AM Karera Tony <tonykarera@gmail.com> wrote:
Dear Ammad,
Below is the output of podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 319fbebc2f50 docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1 /usr/bin/start-he... 23 hours ago Up 23 hours ago heat-container-agent [root@k8s-cluster-2-4faiphvzsmzu-master-0 core]#
Regards
Tony Karera
On Thu, Aug 26, 2021 at 9:54 AM Ammad Syed <syedammad83@gmail.com> wrote:
The output in logfile 29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd is incomplete.
There should be the installation and configuration of many other things that are missing. Also it looks that hyperkube is not installed.
Can you check the response of "podman ps" command on master nodes.
Ammad
On Thu, Aug 26, 2021 at 11:30 AM Karera Tony <tonykarera@gmail.com> wrote:
Here is the beginning of the Log
Starting to run kube-apiserver-to-kubelet-role + echo 'Waiting for Kubernetes API...' Waiting for Kubernetes API... ++ kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port? + '[' ok = '' ']'
Regards
Tony Karera
On Thu, Aug 26, 2021 at 7:53 AM Bharat Kunwar <bharat@stackhpc.com> wrote:
I assume these are from the master nodes? Can you share the logs shortly after creation rather than when it times out? I think there is some missing logs from the top.
Sent from my iPhone
On 26 Aug 2021, at 06:14, Karera Tony <tonykarera@gmail.com> wrote:
Hello Guys,
Attached are the two logs from the /var/log/heat-config/heat-config-script directory Regards
Tony Karera
<6fca39b1-8eda-4786-8424-e5b04434cce7-k8s-cluster3-dcu52bgzpbuu-kube_cluster_config-aht4it6p7wfk.log>
-- Regards,
Syed Ammad Ali
-- Cheers & Best regards, ------------------------------------------------------------------------------ Feilong Wang (王飞龙) (he/him) Head of Research & Development
Catalyst Cloud Aotearoa's own
Mob: +64 21 0832 6348 | www.catalystcloud.nz Level 6, 150 Willis Street, Wellington 6011, New Zealand
CONFIDENTIALITY NOTICE: This email is intended for the named recipients only. It may contain privileged, confidential or copyright information. If you are not the named recipient, any use, reliance upon, disclosure or copying of this email or its attachments is unauthorised. If you have received this email in error, please reply via email or call +64 21 0832 6348. ------------------------------------------------------------------------------
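Since only the kube_cluster_config log shows the failure, one way to bundle everything for sharing (file names differ per cluster; the paths are the ones already discussed in the thread):

sudo tar czf /tmp/heat-config-logs.tgz /var/log/heat-config/heat-config-script/
sudo journalctl -u heat-container-agent --no-pager > /tmp/heat-container-agent.log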
Hi Karera,
Can you please share all the logs under /var/log/heat-config/heat-config-script? Or you can jump into the #openstack-containers channel on OFTC; I'm online now.
On 1/09/21 1:51 pm, Karera Tony wrote:
Hey Feilong,
Thanks a lot.
The services are fine and indeed the log files are there in the directory [/var/log/heat-config/heat-config-script]
After checking, the master log is fine but the cluster log has this error below as I had mentioned earlier
Starting to run kube-apiserver-to-kubelet-role + echo 'Waiting for Kubernetes API...' Waiting for Kubernetes API... ++ kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port? + '[' ok = '' ']'
Regards
Tony Karera
On Tue, Aug 31, 2021 at 9:52 PM feilong <feilong@catalyst.net.nz <mailto:feilong@catalyst.net.nz>> wrote:
Hi Karea,
Given you can see heat-container-agent container from podman which means you should be able to see logs from below path:
[root@k8s-100-eh6s5l6d73ie-master-0 heat-config-script]# ls c64e6ac2-db7e-4786-a387-1d45359812b8-k8s-100-eh6s5l6d73ie-kube_cluster_config-uxpsylgnayjy.log fa1f6247-51a8-4e70-befa-cbc61ee99e59-k8s-100-eh6s5l6d73ie-kube_masters-kmi423lgbjw3-0-oii7uzemq7aj-master_config-dhfam54i456j.log [root@k8s-100-eh6s5l6d73ie-master-0 heat-config-script]# pwd /var/log/heat-config/heat-config-script
If you can not see the path and the log, then it means the heat-container-agent didn't work well. You need to check the service status by systemctl command and check the log by journalctl. From there, you should be able to see why the cluster failed.
On 1/09/21 1:41 am, Karera Tony wrote:
Dear Ammad,
Sorry to bother you again but I have failed to get the right command to use to check.
Every Kubectl command I run on either the master or worker. The connection to the server localhost:8080 was refused - did you specify the right host or port? I get the error below.
Regards
Tony Karera
On Fri, Aug 27, 2021 at 9:15 AM Ammad Syed <syedammad83@gmail.com <mailto:syedammad83@gmail.com>> wrote:
Your hyperkube services are not started.
You need to check hyperkube services.
Ammad
On Fri, Aug 27, 2021 at 10:35 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Dear Ammad,
Below is the output of podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 319fbebc2f50 docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1 <http://docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1> /usr/bin/start-he... 23 hours ago Up 23 hours ago heat-container-agent [root@k8s-cluster-2-4faiphvzsmzu-master-0 core]#
Regards
Tony Karera
On Thu, Aug 26, 2021 at 9:54 AM Ammad Syed <syedammad83@gmail.com <mailto:syedammad83@gmail.com>> wrote:
The output in logfile 29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd is incomplete.
There should be the installation and configuration of many other things that are missing. Also it looks that hyperkube is not installed.
Can you check the response of "podman ps" command on master nodes.
Ammad
On Thu, Aug 26, 2021 at 11:30 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Here is the beginning of the Log
Starting to run kube-apiserver-to-kubelet-role + echo 'Waiting for Kubernetes API...' Waiting for Kubernetes API... ++ kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port? + '[' ok = '' ']'
Regards
Tony Karera
On Thu, Aug 26, 2021 at 7:53 AM Bharat Kunwar <bharat@stackhpc.com <mailto:bharat@stackhpc.com>> wrote:
I assume these are from the master nodes? Can you share the logs shortly after creation rather than when it times out? I think there is some missing logs from the top.
Sent from my iPhone
On 26 Aug 2021, at 06:14, Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Hello Guys,
Attached are the two logs from the /var/log/heat-config/heat-config-script directory Regards
Tony Karera
On Thu, Aug 26, 2021 at 5:59 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Dear Sir,
You are right.
I am getting this error
kubectl get --raw=/healthz The connection to the server localhost:8080 was refused - did you specify the right host or port?
Regards
Tony Karera
On Wed, Aug 25, 2021 at 10:55 PM Bharat Kunwar <bharat@stackhpc.com <mailto:bharat@stackhpc.com>> wrote:
I’d check the logs under /var/log/heat-config.
Sent from my iPhone
On 25 Aug 2021, at 19:39, Karera Tony <tonykarera@gmail.com> wrote:
Dear Ammad,
I was able to make the communication work and the worker nodes were created as well, but the cluster failed.
I logged in to the master node and there was no error, but below are the errors I get when I run systemctl status heat-container-agent on the worker node.
Aug 25 17:52:24 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:52:55 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:26 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:57 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:28 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:59 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:55:29 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:00 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:31 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:57:02 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Regards
Tony Karera
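A note on those messages: the "local-data not found. Skipping" lines come from the os-collect-config loop inside the heat-container-agent and are usually just polling noise rather than the fatal error; the useful detail tends to be elsewhere in the journal. A minimal way to pull more context on the node (assuming the same unit name as on the master):

sudo journalctl -u heat-container-agent --no-pager | grep -iv 'local-data not found' | tail -n 100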
On Wed, Aug 25, 2021 at 10:38 AM Ammad Syed <syedammad83@gmail.com> wrote:
Yes, the Keystone, Heat, Barbican and Magnum public endpoints must be reachable from the master and worker nodes.
You can use the guide below for reference as well.
https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=11
Ammad
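A quick way to verify that reachability from a master or worker node is to curl each public endpoint; the host below is a placeholder and the ports are the usual defaults, so substitute the URLs from your own "openstack endpoint list" output:

curl -sS -o /dev/null -w '%{http_code}\n' https://controller.example.com:5000/v3   # Keystone (placeholder host)
curl -sS -o /dev/null -w '%{http_code}\n' https://controller.example.com:8004/     # Heat (placeholder host)
curl -sS -o /dev/null -w '%{http_code}\n' https://controller.example.com:9511/     # Magnum (placeholder host)

Any HTTP status code back means the endpoint is reachable; a timeout or connection refused points to routing, security group or DNS problems on the cluster network.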
On Wed, Aug 25, 2021 at 12:08 PM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
I have deployed using the given image, but I think there is an issue with Keystone, as per the screenshot below from checking the master node's heat-container-agent status.
<image.png>
Regards
Tony Karera
On Wed, Aug 25, 2021 at 8:28 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
I actually first used that one and it was also getting stuck.
I will try this one again and update you with the Logs though.
Regards
Tony Karera
On Wed, Aug 25, 2021 at 8:25 AM Ammad Syed <syedammad83@gmail.com> wrote:
It seems from the logs that you are using Fedora Atomic. Can you try with the FCOS 32 image?
https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.202010...
Ammad
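If the FCOS image is not registered in Glance yet, a minimal sketch of uploading it (the image and file names are placeholders; the os_distro property is what Magnum uses to select the driver):

openstack image create fedora-coreos-32 \
  --disk-format qcow2 \
  --container-format bare \
  --property os_distro=fedora-coreos \
  --file <downloaded-and-decompressed-fcos-32-qcow2>

The cluster template would then reference that image name.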
On Wed, Aug 25, 2021 at 11:20 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Sir,
Attached is the Log file
Regards
Tony Karera
On Wed, Aug 25, 2021 at 7:31 AM Ammad Syed <syedammad83@gmail.com> wrote:
Hi Karera,
Can you share the full log file with us?
Ammad
On Wed, Aug 25, 2021 at 9:42 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Guys,
Thanks a lot for the help, but unfortunately I don't see much information in the log file indicating a failure, apart from the log line that keeps appearing.
<image.png>
Regards
Tony Karera
On Tue, Aug 24, 2021 at 8:12 PM Mohammed Naser <mnaser@vexxhost.com> wrote:
Also check out /var/log/cloud-init.log :)
On Tue, Aug 24, 2021 at 1:39 PM Ammad Syed <syedammad83@gmail.com> wrote:
Then check journalctl -xe or the status of the heat agent service.
Ammad
On Tue, Aug 24, 2021 at 10:36 PM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
There is no directory or log relevant to heat in the /var/log directory.
Regards
Tony Karera
On Tue, Aug 24, 2021 at 12:43 PM Ammad Syed <syedammad83@gmail.com> wrote:
Hi Karera,
Log in to the master node and check the logs of the heat agent in /var/log. There must be something showing where the cluster is getting stuck during creation.
Ammad
On Tue, Aug 24, 2021 at 3:41 PM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
I had done as explained and it worked up to a certain point. The master node was created, but the cluster remained in "Creation in progress" for over an hour and failed with the error below:
Stack Faults as follows:
default-master: Timed out
default-worker: Timed out
Regards
Tony Karera
On Tue, Aug 24, 2021 at 9:25 AM Ammad Syed <syedammad83@gmail.com> wrote:
Hi Tony,
You can try creating your private vxlan network prior to deployment of the cluster and explicitly create your cluster in that vxlan network.
--fixed-network private --fixed-subnet private-subnet
You can specify the above while creating a cluster.
Ammad
On Tue, Aug 24, 2021 at 11:59 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Mohammed,
I think the Kubernetes cluster is OK, but when I deploy it, it creates a fixed network using vlan, which I am not using for internal networks.
When I create a vxlan network and use it in the cluster creation, it fails. Is there a trick around this?
Regards
Tony Karera
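For context, the --fixed-network / --fixed-subnet flags mentioned above go on the cluster create call. A minimal sketch with placeholder names, assuming the private vxlan network and subnet already exist in Neutron:

openstack coe cluster create k8s-cluster \
  --cluster-template <template-name> \
  --master-count 1 \
  --node-count 2 \
  --keypair <keypair-name> \
  --fixed-network private \
  --fixed-subnet private-subnet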
Hello Feilong,
I would be happy to join.
Can you please assist with the link?
Thanks a lot.
Regards
Tony Karera
On Wed, Sep 1, 2021 at 4:04 AM feilong <feilong@catalyst.net.nz> wrote:
Hi Karera,
Can you please share all the logs under /var/log/heat-config/heat-config-script? Or you can jump into the #openstack-containers channel on OFTC; I'm online now.
On 1/09/21 1:51 pm, Karera Tony wrote:
Hey Feilong,
Thanks a lot.
The services are fine and indeed the log files are there in the directory [/var/log/heat-config/heat-config-script].
After checking, the master log is fine but the cluster log has the error below, as I had mentioned earlier:
Starting to run kube-apiserver-to-kubelet-role
+ echo 'Waiting for Kubernetes API...'
Waiting for Kubernetes API...
++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ '[' ok = '' ']'
Regards
Tony Karera
Attached are the logs.
Regards
Tony Karera
Your cluster failed because of https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templa...
So please just remove your label "tls_enabled=true" and try again.
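To illustrate, a minimal sketch of a Kubernetes cluster template created without that label (the template name, image, network and flavors are placeholders; the point is simply that tls_enabled is not passed via --labels at all):

openstack coe cluster template create k8s-template \
  --image fedora-coreos-32 \
  --external-network <external-net> \
  --dns-nameserver 8.8.8.8 \
  --master-flavor m1.small \
  --flavor m1.small \
  --coe kubernetes

An existing template can also be changed with openstack coe cluster template update, but recreating it is the simplest way to be sure the label is gone.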
Attached are the logs Regards
Tony Karera
On Wed, Sep 1, 2021 at 4:11 AM Karera Tony <tonykarera@gmail.com <mailto:tonykarera@gmail.com>> wrote:
Hello Feilong,
I would be happy to join.
Can you please assist with the link .
Thanks a lot Regards
Tony Karera
On Wed, Sep 1, 2021 at 4:04 AM feilong <feilong@catalyst.net.nz <mailto:feilong@catalyst.net.nz>> wrote:
Hi Karera,
Can you please share all the log under /var/log/heat-config/heat-config-script ? Or you can jump in #openstack-containers channel on OFTC, I'm online now.
On 1/09/21 1:51 pm, Karera Tony wrote:
Hey Feilong,
Thanks a lot.
The services are fine and indeed the log files are there in the directory [/var/log/heat-config/heat-config-script]
After checking, the master log is fine but the cluster log has this error below as I had mentioned earlier
Starting to run kube-apiserver-to-kubelet-role
+ echo 'Waiting for Kubernetes API...'
Waiting for Kubernetes API...
++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ '[' ok = '' ']'
Regards
Tony Karera
On Tue, Aug 31, 2021 at 9:52 PM feilong <feilong@catalyst.net.nz> wrote:
Hi Karera,
Given that you can see the heat-container-agent container in podman, you should be able to see logs under the path below:
[root@k8s-100-eh6s5l6d73ie-master-0 heat-config-script]# ls
c64e6ac2-db7e-4786-a387-1d45359812b8-k8s-100-eh6s5l6d73ie-kube_cluster_config-uxpsylgnayjy.log
fa1f6247-51a8-4e70-befa-cbc61ee99e59-k8s-100-eh6s5l6d73ie-kube_masters-kmi423lgbjw3-0-oii7uzemq7aj-master_config-dhfam54i456j.log
[root@k8s-100-eh6s5l6d73ie-master-0 heat-config-script]# pwd
/var/log/heat-config/heat-config-script
If you cannot see the path and the log, it means the heat-container-agent didn't work well. You need to check the service status with systemctl and check the log with journalctl. From there, you should be able to see why the cluster failed.
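For reference, a minimal sketch of the checks described above, run on a master or worker node (the heat-container-agent unit name matches the container shown later in this thread):

# is the heat agent service running?
sudo systemctl status heat-container-agent
# follow its output to see where cluster configuration stops
sudo journalctl -u heat-container-agent -f
# per-script logs land here once the agent is working
ls /var/log/heat-config/heat-config-script/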
On 1/09/21 1:41 am, Karera Tony wrote:
Dear Ammad,
Sorry to bother you again but I have failed to get the right command to use to check.
Every kubectl command I run on either the master or the worker gives me the error below: The connection to the server localhost:8080 was refused - did you specify the right host or port?
Regards
Tony Karera
On Fri, Aug 27, 2021 at 9:15 AM Ammad Syed <syedammad83@gmail.com> wrote:
Your hyperkube services are not started.
You need to check hyperkube services.
Ammad
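As a rough sketch of how to check that, assuming the Fedora CoreOS driver where the control-plane components are started as systemd units that run podman containers (exact unit names vary by driver and release):

# any kube-* units systemd knows about on the master
sudo systemctl list-units --all | grep -i kube
# containers that are running or have exited on the node
sudo podman ps -a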
On Fri, Aug 27, 2021 at 10:35 AM Karera Tony <tonykarera@gmail.com> wrote:
Dear Ammad,
Below is the output of podman ps
CONTAINER ID  IMAGE                                                             COMMAND               CREATED       STATUS           PORTS  NAMES
319fbebc2f50  docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1   /usr/bin/start-he...  23 hours ago  Up 23 hours ago         heat-container-agent
[root@k8s-cluster-2-4faiphvzsmzu-master-0 core]#
Regards
Tony Karera
On Thu, Aug 26, 2021 at 9:54 AM Ammad Syed <syedammad83@gmail.com> wrote:
The output in logfile 29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd is incomplete.
The log should show the installation and configuration of many other components, but those are missing. It also looks like hyperkube is not installed.
Can you check the output of the "podman ps" command on the master nodes?
Ammad
On Thu, Aug 26, 2021 at 11:30 AM Karera Tony <tonykarera@gmail.com> wrote:
Here is the beginning of the Log
Starting to run kube-apiserver-to-kubelet-role
+ echo 'Waiting for Kubernetes API...'
Waiting for Kubernetes API...
++ kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ '[' ok = '' ']'
Regards
Tony Karera
On Thu, Aug 26, 2021 at 7:53 AM Bharat Kunwar <bharat@stackhpc.com> wrote:
I assume these are from the master nodes? Can you share the logs shortly after creation rather than when it times out? I think there are some logs missing from the top.
Sent from my iPhone
On 26 Aug 2021, at 06:14, Karera Tony <tonykarera@gmail.com> wrote:
Hello Guys,
Attached are the two logs from the /var/log/heat-config/heat-config-script directory.
Regards
Tony Karera
On Thu, Aug 26, 2021 at 5:59 AM Karera Tony <tonykarera@gmail.com> wrote:
Dear Sir,
You are right.
I am getting this error
kubectl get --raw=/healthz
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Regards
Tony Karera
On Wed, Aug 25, 2021 at 10:55 PM Bharat Kunwar <bharat@stackhpc.com> wrote:
I’d check the logs under /var/log/heat-config.
Sent from my iPhone
On 25 Aug 2021, at 19:39, Karera Tony <tonykarera@gmail.com> wrote:
Dear Ammad,
I was able to make the communication work and the Worker nodes were created as well but the cluster failed.
I logged in to the master node and there was no error, but below are the errors I get when I run systemctl status heat-container-agent on the worker node.
Aug 25 17:52:24 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:52:55 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:26 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:53:57 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:28 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:54:59 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:55:29 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:00 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:56:31 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Aug 25 17:57:02 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
Regards
Tony Karera
On Wed, Aug 25, 2021 at 10:38 AM Ammad Syed <syedammad83@gmail.com> wrote:
Yes, the Keystone, Heat, Barbican and Magnum public endpoints must be reachable from the master and worker nodes.
You can use below guide for the reference as well.
https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=11
Ammad
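As a rough illustration of that reachability check: from a master or worker node, each public endpoint should answer rather than hang or refuse. The hostnames and ports below are placeholders, so substitute the real URLs from "openstack endpoint list --interface public".

curl -k https://keystone.example.com:5000/v3
curl -k https://heat.example.com:8004/
curl -k https://magnum.example.com:9511/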
On Wed, Aug 25, 2021 at 12:08 PM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
I have deployed using the given image, but I think there is an issue with Keystone, as per the screenshot below of the master node's heat-container-agent status.
<image.png>
Regards
Tony Karera
On Wed, Aug 25, 2021 at 8:28 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
I actually first used that one and it was also getting stuck.
I will try this one again and update you with the Logs though.
Regards
Tony Karera
On Wed, Aug 25, 2021 at 8:25 AM Ammad Syed <syedammad83@gmail.com> wrote:
It seems from the logs that you are using Fedora Atomic. Can you try with the FCOS 32 image?
https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.202010...
Ammad
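For anyone reproducing this, registering that image in Glance so Magnum picks the Fedora CoreOS driver usually looks something like the sketch below. The image and file names are placeholders for whatever build you download; os_distro=fedora-coreos is the image property the driver keys on.

openstack image create fedora-coreos-32 \
  --disk-format qcow2 \
  --container-format bare \
  --property os_distro='fedora-coreos' \
  --file <downloaded-fcos-32-openstack-image.qcow2>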
On Wed, Aug 25, 2021 at 11:20 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Sir,
Attached is the Log file
Regards
Tony Karera
On Wed, Aug 25, 2021 at 7:31 AM Ammad Syed <syedammad83@gmail.com> wrote:
Hi Karera,
Can you share the full log file with us?
Ammad
On Wed, Aug 25, 2021 at 9:42 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Guys,
Thanks a lot for the help, but unfortunately I don't see much information in the log file indicating a failure, apart from the log line that keeps appearing.
<image.png>
Regards
Tony Karera
On Tue, Aug 24, 2021 at 8:12 PM Mohammed Naser <mnaser@vexxhost.com> wrote:
Also check out /var/log/cloud-init.log :)
On Tue, Aug 24, 2021 at 1:39 PM Ammad Syed <syedammad83@gmail.com> wrote:
Then check journalctl -xe or the status of the heat agent service.
Ammad
On Tue, Aug 24, 2021 at 10:36 PM Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
There is no directory or log relevant to heat in the /var/log directory.
Regards
Tony Karera
On Tue, Aug 24, 2021 at 12:43 PM Ammad Syed <syedammad83@gmail.com> wrote:
Hi Karera,
Log in to the master node and check the logs of the heat agent in /var/log. The cluster must be getting stuck somewhere during creation.
Ammad
Were master and worker nodes created? Did you log into the nodes and look at heat container agent logs under /var/log/heat-config/ ?
On 24 Aug 2021, at 11:41, Karera Tony <tonykarera@gmail.com> wrote:
Hello Ammad,
I had done as explained and it worked up to a certain point. The master node was created, but the cluster remained in "Creation in progress" for over an hour and failed with the error below.
Stack Faults as follows:
default-master: Timed out
default-worker: Timed out
Regards
Tony Karera
On Tue, Aug 24, 2021 at 9:25 AM Ammad Syed <syedammad83@gmail.com> wrote:
Hi Tony,
You can try creating your private VXLAN network prior to deploying the cluster and explicitly create your cluster in that network.
--fixed-network private --fixed-subnet private-subnet
You can specify above while creating a cluster.
Ammad
On Tue, Aug 24, 2021 at 11:59 AM Karera Tony <tonykarera@gmail.com> wrote:
Hello Mohamed,
I think the Kubernetes cluster is OK, but when I deploy it, it creates a fixed network using VLAN, which I am not using for internal networks.
When I create a VXLAN network and use it in the cluster creation, it fails. Is there a trick around this?
Regards
Tony Karera
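For completeness, a hypothetical sketch of the --fixed-network suggestion quoted above: create the tenant network first, then point the cluster at it. The names, CIDR and remaining options are placeholders taken from the commands earlier in the thread.

openstack network create private
openstack subnet create private-subnet \
  --network private \
  --subnet-range 10.0.0.0/24 \
  --dns-nameserver 8.8.8.8
openstack coe cluster create k8s-cluster \
  --cluster-template k8s-cluster-template \
  --fixed-network private \
  --fixed-subnet private-subnet \
  --master-count 1 \
  --node-count 2 \
  --keypair Newkey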
Hello Bharat,
Only the master node gets created. There is no heat container agent on the compute node.
Regards
Tony Karera
On Tue, Aug 24, 2021 at 12:45 PM Bharat Kunwar <bharat@stackhpc.com> wrote:
Were master and worker nodes created? Did you log into the nodes and look at heat container agent logs under /var/log/heat-config/ ?
participants (6)
- Ammad Syed
- Bharat Kunwar
- feilong
- Ionut Biru
- Karera Tony
- Mohammed Naser