Wallaby Magnum Issue
Karera Tony
tonykarera at gmail.com
Tue Aug 31 13:41:03 UTC 2021
Dear Ammad,
Sorry to bother you again, but I have failed to find the right command to use
to check.
Every kubectl command I run on either the master or the worker node returns
the error below:

The connection to the server localhost:8080 was refused - did you specify
the right host or port?
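I am not sure which kubeconfig kubectl should be using; should it be
something like the sketch below (the kubeconfig path here is only a guess on
my part)?

kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw=/healthz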
Regards
Tony Karera
On Fri, Aug 27, 2021 at 9:15 AM Ammad Syed <syedammad83 at gmail.com> wrote:
> Your hyperkube services are not started.
>
> You need to check hyperkube services.
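>
> For example, something like this (the unit names here are an assumption
> about the usual layout on the image and may differ):
>
> sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler
> sudo journalctl -u kube-apiserver --no-pager | tail -n 50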
>
> Ammad
>
> On Fri, Aug 27, 2021 at 10:35 AM Karera Tony <tonykarera at gmail.com> wrote:
>
>> Dear Ammad,
>>
>> Below is the output of podman ps
>>
>> CONTAINER ID  IMAGE                                                            COMMAND               CREATED       STATUS           PORTS  NAMES
>> 319fbebc2f50  docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1  /usr/bin/start-he...  23 hours ago  Up 23 hours ago         heat-container-agent
>> [root at k8s-cluster-2-4faiphvzsmzu-master-0 core]#
>>
>>
>> Regards
>>
>> Tony Karera
>>
>>
>>
>>
>> On Thu, Aug 26, 2021 at 9:54 AM Ammad Syed <syedammad83 at gmail.com> wrote:
>>
>>> The output in
>>> logfile 29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd
>>> is incomplete.
>>>
>>> The installation and configuration of many other components should appear
>>> there but is missing. It also looks like hyperkube is not installed.
>>>
>>> Can you check the output of the "podman ps" command on the master nodes?
>>>
>>> Ammad
>>>
>>> On Thu, Aug 26, 2021 at 11:30 AM Karera Tony <tonykarera at gmail.com>
>>> wrote:
>>>
>>>> Here is the beginning of the log:
>>>>
>>>> Starting to run kube-apiserver-to-kubelet-role
>>>> + echo 'Waiting for Kubernetes API...'
>>>> Waiting for Kubernetes API...
>>>> ++ kubectl get --raw=/healthz
>>>> The connection to the server localhost:8080 was refused - did you
>>>> specify the right host or port?
>>>> + '[' ok = '' ']'
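>>>>
>>>> (For context, that trace corresponds to a wait loop roughly like this
>>>> reconstruction; the actual Magnum script may differ:)
>>>>
>>>> echo 'Waiting for Kubernetes API...'
>>>> until [ "ok" = "$(kubectl get --raw=/healthz)" ]; do
>>>>     sleep 5
>>>> done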
>>>>
>>>>
>>>> Regards
>>>>
>>>> Tony Karera
>>>>
>>>>
>>>>
>>>>
>>>> On Thu, Aug 26, 2021 at 7:53 AM Bharat Kunwar <bharat at stackhpc.com>
>>>> wrote:
>>>>
>>>>> I assume these are from the master nodes? Can you share the logs from
>>>>> shortly after creation rather than from when it times out? I think there
>>>>> are some logs missing from the top.
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On 26 Aug 2021, at 06:14, Karera Tony <tonykarera at gmail.com> wrote:
>>>>>
>>>>>
>>>>> Hello Guys,
>>>>>
>>>>> Attached are the two logs from the
>>>>> /var/log/heat-config/heat-config-script directory
>>>>> Regards
>>>>>
>>>>> Tony Karera
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Aug 26, 2021 at 5:59 AM Karera Tony <tonykarera at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Dear Sir,
>>>>>>
>>>>>> You are right.
>>>>>>
>>>>>> I am getting this error
>>>>>>
>>>>>> kubectl get --raw=/healthz
>>>>>> The connection to the server localhost:8080 was refused - did you
>>>>>> specify the right host or port?
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Tony Karera
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Aug 25, 2021 at 10:55 PM Bharat Kunwar <bharat at stackhpc.com>
>>>>>> wrote:
>>>>>>
>>>>>>> I’d check the logs under /var/log/heat-config.
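>>>>>>>
>>>>>>> For example (the script filename is deployment-specific; this is just
>>>>>>> a sketch):
>>>>>>>
>>>>>>> sudo ls -lt /var/log/heat-config/heat-config-script/
>>>>>>> sudo tail -n 50 /var/log/heat-config/heat-config-script/<script-id>.log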
>>>>>>>
>>>>>>> Sent from my iPhone
>>>>>>>
>>>>>>> On 25 Aug 2021, at 19:39, Karera Tony <tonykarera at gmail.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> Dear Ammad,
>>>>>>>
>>>>>>> I was able to make the communication work and the worker nodes were
>>>>>>> created as well, but the cluster failed.
>>>>>>>
>>>>>>> I logged in to the master node and there was no error, but below are
>>>>>>> the errors I get when I run systemctl status heat-container-agent on
>>>>>>> the worker node.
>>>>>>>
>>>>>>> Aug 25 17:52:24 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
>>>>>>> Aug 25 17:52:55 cluster1-fmkpva3nozf7-node-0 podman[2268]: /var/lib/os-collect-config/local-data not found. Skipping
>>>>>>> [... the same message repeats roughly every 31 seconds, last at Aug 25 17:57:02 ...]
>>>>>>> Regards
>>>>>>>
>>>>>>> Tony Karera
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Aug 25, 2021 at 10:38 AM Ammad Syed <syedammad83 at gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Yes, the Keystone, Heat, Barbican, and Magnum public endpoints must be
>>>>>>>> reachable from the master and worker nodes.
>>>>>>>>
>>>>>>>> You can use the guide below for reference as well.
>>>>>>>>
>>>>>>>>
>>>>>>>> https://www.server-world.info/en/note?os=Ubuntu_20.04&p=openstack_victoria4&f=11
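>>>>>>>>
>>>>>>>> A quick reachability check from a master/worker node could look like
>>>>>>>> this (the URLs are placeholders for your deployment's actual public
>>>>>>>> endpoints):
>>>>>>>>
>>>>>>>> curl -sS http://<keystone-public-endpoint>:5000/v3/
>>>>>>>> curl -sS http://<heat-public-endpoint>:8004/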
>>>>>>>>
>>>>>>>> Ammad
>>>>>>>>
>>>>>>>> On Wed, Aug 25, 2021 at 12:08 PM Karera Tony <tonykarera at gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hello Ammad,
>>>>>>>>>
>>>>>>>>> I have deployed using the given image, but I think there is an issue
>>>>>>>>> with Keystone, as per the screenshot below from checking the master
>>>>>>>>> node's heat-container-agent status.
>>>>>>>>>
>>>>>>>>> <image.png>
>>>>>>>>>
>>>>>>>>> Regards
>>>>>>>>>
>>>>>>>>> Tony Karera
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Aug 25, 2021 at 8:28 AM Karera Tony <tonykarera at gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hello Ammad,
>>>>>>>>>>
>>>>>>>>>> I actually tried that one first and it was also getting stuck.
>>>>>>>>>>
>>>>>>>>>> I will try this one again and update you with the logs.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Regards
>>>>>>>>>>
>>>>>>>>>> Tony Karera
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Wed, Aug 25, 2021 at 8:25 AM Ammad Syed <syedammad83 at gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> It seems from the logs that you are using Fedora Atomic. Can you
>>>>>>>>>>> try with the FCOS 32 image?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.20201004.3.0/x86_64/fedora-coreos-32.20201004.3.0-openstack.x86_64.qcow2.xz
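>>>>>>>>>>>
>>>>>>>>>>> If helpful, a sketch of registering it in Glance (the image name is
>>>>>>>>>>> an example; Magnum selects its driver via the os_distro property):
>>>>>>>>>>>
>>>>>>>>>>> xz -d fedora-coreos-32.20201004.3.0-openstack.x86_64.qcow2.xz
>>>>>>>>>>> openstack image create fedora-coreos-32 \
>>>>>>>>>>>     --disk-format qcow2 \
>>>>>>>>>>>     --container-format bare \
>>>>>>>>>>>     --property os_distro=fedora-coreos \
>>>>>>>>>>>     --file fedora-coreos-32.20201004.3.0-openstack.x86_64.qcow2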
>>>>>>>>>>>
>>>>>>>>>>> Ammad
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Aug 25, 2021 at 11:20 AM Karera Tony <
>>>>>>>>>>> tonykarera at gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hello Sir,
>>>>>>>>>>>>
>>>>>>>>>>>> Attached is the log file.
>>>>>>>>>>>>
>>>>>>>>>>>> Regards
>>>>>>>>>>>>
>>>>>>>>>>>> Tony Karera
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Aug 25, 2021 at 7:31 AM Ammad Syed <
>>>>>>>>>>>> syedammad83 at gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Karera,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can you share the full log file with us?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ammad
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Aug 25, 2021 at 9:42 AM Karera Tony <
>>>>>>>>>>>>> tonykarera at gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hello Guys,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks a lot for the help, but unfortunately I don't see much
>>>>>>>>>>>>>> information in the log file indicating a failure, apart from the
>>>>>>>>>>>>>> log line that keeps appearing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> <image.png>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Regards
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Tony Karera
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Aug 24, 2021 at 8:12 PM Mohammed Naser <
>>>>>>>>>>>>>> mnaser at vexxhost.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Also check out /var/log/cloud-init.log :)
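>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> For example, a quick scan for failures (just a sketch):
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> sudo grep -iE 'error|fail' /var/log/cloud-init.log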
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Aug 24, 2021 at 1:39 PM Ammad Syed <
>>>>>>>>>>>>>>> syedammad83 at gmail.com> wrote:
>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>> > Then check journalctl -xe or the status of the heat agent service.
>>>>>>>>>>>>>>> >
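>>>>>>>>>>>>>>> > For example:
>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>> > sudo systemctl status heat-container-agent
>>>>>>>>>>>>>>> > sudo journalctl -xe -u heat-container-agent
>>>>>>>>>>>>>>> >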
>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>> > Ammad
>>>>>>>>>>>>>>> > On Tue, Aug 24, 2021 at 10:36 PM Karera Tony <
>>>>>>>>>>>>>>> tonykarera at gmail.com> wrote:
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >> Hello Ammad,
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >> There is no directory or log relevant to heat in the
>>>>>>>>>>>>>>> /var/log directory
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >> Regards
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >> Tony Karera
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>> >> On Tue, Aug 24, 2021 at 12:43 PM Ammad Syed <
>>>>>>>>>>>>>>> syedammad83 at gmail.com> wrote:
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>> Hi Karera,
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>> Log in to the master node and check the heat agent logs in /var/log. The cluster must be getting stuck somewhere during creation.
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>> Ammad
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>> On Tue, Aug 24, 2021 at 3:41 PM Karera Tony <
>>>>>>>>>>>>>>> tonykarera at gmail.com> wrote:
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>> Hello Ammad,
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>> I did as explained and it worked up to a certain point: the master node was created, but the cluster remained in "creation in progress" for over an hour and then failed with the error below.
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>> Stack Faults
>>>>>>>>>>>>>>> >>>> as follows:
>>>>>>>>>>>>>>> >>>> default-master
>>>>>>>>>>>>>>> >>>> Timed out
>>>>>>>>>>>>>>> >>>> default-worker
>>>>>>>>>>>>>>> >>>> Timed out
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>> Regards
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>> Tony Karera
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>> >>>> On Tue, Aug 24, 2021 at 9:25 AM Ammad Syed <
>>>>>>>>>>>>>>> syedammad83 at gmail.com> wrote:
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> Hi Tony,
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> You can try creating your private vxlan network prior to deployment of the cluster, and explicitly create your cluster on that vxlan network.
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> --fixed-network private --fixed-subnet private-subnet
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> You can specify the above while creating a cluster; see the sketch below.
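>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> (The names and CIDR below are examples; a tenant network will use vxlan if that is your ML2 default:)
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> openstack network create private
>>>>>>>>>>>>>>> >>>>> openstack subnet create private-subnet --network private --subnet-range 10.100.0.0/24
>>>>>>>>>>>>>>> >>>>> openstack coe cluster create k8s-cluster --cluster-template <template> --fixed-network private --fixed-subnet private-subnet --keypair <keypair>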
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> Ammad
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> On Tue, Aug 24, 2021 at 11:59 AM Karera Tony <
>>>>>>>>>>>>>>> tonykarera at gmail.com> wrote:
>>>>>>>>>>>>>>> >>>>>>
>>>>>>>>>>>>>>> >>>>>> Hello Mohamed,
>>>>>>>>>>>>>>> >>>>>>
>>>>>>>>>>>>>>> >>>>>> I think the Kubernetes cluster is OK, but when I deploy it, it creates a fixed network using vlan, which I am not using for internal networks.
>>>>>>>>>>>>>>> >>>>>>
>>>>>>>>>>>>>>> >>>>>> When I create a vxlan network and use it in the cluster creation, it fails. Is there a trick around this?
>>>>>>>>>>>>>>> >>>>>> Regards
>>>>>>>>>>>>>>> >>>>>> Regards
>>>>>>>>>>>>>>> >>>>>>
>>>>>>>>>>>>>>> >>>>>> Tony Karera
>>>>>>>>>>>>>>> >>>>>>
>>>>>>>>>>>>>>> >>>>>>
>>>>>>>>>>>>>>> >>>>>>
>>>>>>>>>>>>>>> >>>>>>
>>>>>>>>>>>>>>> >>>>>> On Fri, Aug 20, 2021 at 9:00 AM feilong <
>>>>>>>>>>>>>>> feilong at catalyst.net.nz> wrote:
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>> Oooh, are you using Swarm? I don't think that driver is well maintained; I haven't seen any interest in it in the last 4 years since I got involved in the Magnum project. If there is no specific reason, I would suggest going with k8s.
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>> On 20/08/21 5:08 pm, Mohammed Naser wrote:
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>> Please keep replies on list so others can help too.
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>> I don’t know how well tested the Swarm driver is at
>>>>>>>>>>>>>>> this point. I believe most Magnum users are using it for Kubernetes only.
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>> On Fri, Aug 20, 2021 at 1:05 AM Karera Tony <
>>>>>>>>>>>>>>> tonykarera at gmail.com> wrote:
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>> Hello Naser,
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>> Please check below.
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>> openstack coe cluster template create swarm-cluster-template1 \
>>>>>>>>>>>>>>> >>>>>>>>     --image fedora-atomic-latest \
>>>>>>>>>>>>>>> >>>>>>>>     --external-network External_1700 \
>>>>>>>>>>>>>>> >>>>>>>>     --dns-nameserver 8.8.8.8 \
>>>>>>>>>>>>>>> >>>>>>>>     --master-flavor m1.small \
>>>>>>>>>>>>>>> >>>>>>>>     --flavor m1.small \
>>>>>>>>>>>>>>> >>>>>>>>     --coe swarm
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>> openstack coe cluster create swarm-cluster \
>>>>>>>>>>>>>>> >>>>>>>>     --cluster-template swarm-cluster-template \
>>>>>>>>>>>>>>> >>>>>>>>     --master-count 1 \
>>>>>>>>>>>>>>> >>>>>>>>     --node-count 2 \
>>>>>>>>>>>>>>> >>>>>>>>     --keypair Newkey
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>> Regards
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>> Tony Karera
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>> On Fri, Aug 20, 2021 at 7:03 AM Mohammed Naser <
>>>>>>>>>>>>>>> mnaser at vexxhost.com> wrote:
>>>>>>>>>>>>>>> >>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>> What do your cluster template and cluster create commands look like?
>>>>>>>>>>>>>>> >>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>> On Fri, Aug 20, 2021 at 12:59 AM Karera Tony <
>>>>>>>>>>>>>>> tonykarera at gmail.com> wrote:
>>>>>>>>>>>>>>> >>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>> Hello Wang,
>>>>>>>>>>>>>>> >>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>> Thanks for the feedback.
>>>>>>>>>>>>>>> >>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>> Unfortunately Octavia is not deployed in my
>>>>>>>>>>>>>>> environment (at least not yet) and LB is not enabled on either the cluster
>>>>>>>>>>>>>>> template or the cluster itself.
>>>>>>>>>>>>>>> >>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>> Regards
>>>>>>>>>>>>>>> >>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>> Tony Karera
>>>>>>>>>>>>>>> >>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>> On Fri, Aug 20, 2021 at 6:52 AM feilong <
>>>>>>>>>>>>>>> feilong at catalyst.net.nz> wrote:
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> Hi Karera,
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> It's probably a bug. If you do have Octavia
>>>>>>>>>>>>>>> deployed, can you try to not disable the LB and see how it goes?
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> On 20/08/21 4:18 pm, Karera Tony wrote:
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> Hello Team,
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> I deployed OpenStack Wallaby on Ubuntu 20.04 and enabled Magnum; however, when I create a cluster I get the error below.
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> Status Reason
>>>>>>>>>>>>>>> >>>>>>>>>>> ERROR: Property error: : resources.api_lb.properties: : Property allowed_cidrs not assigned
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> Can someone advise on where I could be wrong? Btw, I disabled the load balancer while creating the cluster.
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> Regards
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> Tony Karera
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> --
>>>>>>>>>>>>>>> >>>>>>>>>>> Cheers & Best regards,
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>>>> >>>>>>>>>>> Feilong Wang (王飞龙) (he/him)
>>>>>>>>>>>>>>> >>>>>>>>>>> Head of Research & Development
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> Catalyst Cloud
>>>>>>>>>>>>>>> >>>>>>>>>>> Aotearoa's own
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>>>> Mob: +64 21 0832 6348 | www.catalystcloud.nz
>>>>>>>>>>>>>>> >>>>>>>>>>> Level 6, 150 Willis Street, Wellington 6011,
>>>>>>>>>>>>>>> New Zealand
>>>>>>>>>>>>>>> >>>>>>>>>>>
>>>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>>>> >>>>>>>>>
>>>>>>>>>>>>>>> >>>>>>>>> --
>>>>>>>>>>>>>>> >>>>>>>>> Mohammed Naser
>>>>>>>>>>>>>>> >>>>>>>>> VEXXHOST, Inc.
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>> --
>>>>>>>>>>>>>>> >>>>>>> Mohammed Naser
>>>>>>>>>>>>>>> >>>>>>> VEXXHOST, Inc.
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>> --
>>>>>>>>>>>>>>> >>>>>>> Cheers & Best regards,
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>>>> >>>>>>> Feilong Wang (王飞龙) (he/him)
>>>>>>>>>>>>>>> >>>>>>> Head of Research & Development
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>> Catalyst Cloud
>>>>>>>>>>>>>>> >>>>>>> Aotearoa's own
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> >>>>>>> Mob: +64 21 0832 6348 | www.catalystcloud.nz
>>>>>>>>>>>>>>> >>>>>>> Level 6, 150 Willis Street, Wellington 6011, New
>>>>>>>>>>>>>>> Zealand
>>>>>>>>>>>>>>> >>>>>>>
>>>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> --
>>>>>>>>>>>>>>> >>>>> Regards,
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>> >>>>> Syed Ammad Ali
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>> --
>>>>>>>>>>>>>>> >>> Regards,
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>> >>> Syed Ammad Ali
>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>> > --
>>>>>>>>>>>>>>> > Regards,
>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>> > Syed Ammad Ali
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Mohammed Naser
>>>>>>>>>>>>>>> VEXXHOST, Inc.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Syed Ammad Ali
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Regards,
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Syed Ammad Ali
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>>
>>>>>>>> Syed Ammad Ali
>>>>>>>>
>>>>>>>
>>>>> <29a37aff-f1f6-46b3-8541-887491c6cfe8-k8s-cluster3-dcu52bgzpbuu-kube_masters-ocfrn2ikpcgd-0-32tmkqgdq7wl-master_config-gihyfv3wlyzd.log>
>>>>>
>>>>> <6fca39b1-8eda-4786-8424-e5b04434cce7-k8s-cluster3-dcu52bgzpbuu-kube_cluster_config-aht4it6p7wfk.log>
>>>>>
>>>>>
>>>
>>> --
>>> Regards,
>>>
>>>
>>> Syed Ammad Ali
>>>
>> --
> Regards,
>
>
> Syed Ammad Ali
>