[Magnum] Cluster Create failure

Navdeep Uniyal navdeep.uniyal at bristol.ac.uk
Fri Mar 29 12:15:41 UTC 2019


Hi Guys,

I was able to resolve the issue in Nova. (It was a problem with the oslo.db version - somehow I had installed version 4.44 instead of 4.25 for my Pike installation, so I pinned it back roughly as sketched below.)
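For reference, this is more or less what I did to fix the version inside the virtualenv; the exact paths and service names depend on the deployment, so take it as a sketch rather than the exact commands:

    # check which oslo.db ended up installed in the environment
    pip show oslo.db

    # pin it back to the Pike upper-constraints version
    pip install 'oslo.db==4.25.0'

    # then restart the affected Nova services, e.g. on my controller:
    systemctl restart nova-api nova-conductor nova-scheduler
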
However, moving forward, when I started my Kubernetes cluster I could see two instances running (one kube-master and one kube-minion), but the deployment then failed with the following error:

{"message": "The resource was found at <a href=\"http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x/384d8725-bca3-4fa4-a9fd-f18687aab8fb/resources?status=FAILED&nested_d
epth=2\">http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x/384d8725-bca3-4fa4-a9fd-f18687aab8fb/resources?status=FAILED&nested_depth=2</a>;\nyou should be redirected au
tomatically.\n\n", "code": "302 Found", "title": "Found"}
 log_http_response /var/lib/magnum/env/local/lib/python2.7/site-packages/heatclient/common/http.py:157
2019-03-29 12:05:51.225 157681 DEBUG heatclient.common.http [req-76e55dec-9511-4aad-aa52-af9978b40eed - - - - -] curl -g -i -X GET -H 'User-Agent: python-heatclient' -H 'Content-Type: application/json' -H 'X-Aut
h-Url: http://pluto:5000/v3' -H 'Accept: application/json' -H 'X-Auth-Token: {SHA1}f2c32656c7103ad0b89d83ff9f1b6cebc0a6eee7' http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgka
nhoa4x/384d8725-bca3-4fa4-a9fd-f18687aab8fb/resources?status=FAILED&nested_depth=2 log_curl_request /var/lib/magnum/env/local/lib/python2.7/site-packages/heatclient/common/http.py:144
2019-03-29 12:05:51.379 157681 DEBUG heatclient.common.http [req-76e55dec-9511-4aad-aa52-af9978b40eed - - - - -]
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 4035
X-Openstack-Request-Id: req-942ef8fa-1bba-4573-9022-0d4e135772e0
Date: Fri, 29 Mar 2019 12:05:51 GMT
Connection: keep-alive

{"resources": [{"resource_name": "kube_cluster_deploy", "links": [{"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x/384d8725-bca3-4fa4-a9fd-f18687aab8fb/resources/kube_cluster_deploy", "rel": "self"}, {"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x/384d8725-bca3-4fa4-a9fd-f18687aab8fb", "rel": "stack"}], "logical_resource_id": "kube_cluster_deploy", "creation_time": "2019-03-29T10:40:00Z", "resource_status": "CREATE_FAILED", "updated_time": "2019-03-29T10:40:00Z", "required_by": [], "resource_status_reason": "CREATE aborted (Task create from SoftwareDeployment \"kube_cluster_deploy\" Stack \"kubernetes-cluster-eovgkanhoa4x\" [384d8725-bca3-4fa4-a9fd-f18687aab8fb] Timed out)", "physical_resource_id": "8d715a3f-6ec8-4772-ba4b-1056cd4ab7d3", "resource_type": "OS::Heat::SoftwareDeployment"}, {"resource_name": "kube_minions", "links": [{"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x/384d8725-bca3-4fa4-a9fd-f18687aab8fb/resources/kube_minions", "rel": "self"}, {"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x/384d8725-bca3-4fa4-a9fd-f18687aab8fb", "rel": "stack"}, {"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x-kube_minions-otcpiw3oye46/33700819-0766-4d30-954b-29aace6048cc", "rel": "nested"}], "logical_resource_id": "kube_minions", "creation_time": "2019-03-29T10:40:00Z", "resource_status_reason": "CREATE aborted (Task create from ResourceGroup \"kube_minions\" Stack \"kubernetes-cluster-eovgkanhoa4x\" [384d8725-bca3-4fa4-a9fd-f18687aab8fb] Timed out)", "updated_time": "2019-03-29T10:40:00Z", "required_by": [], "resource_status": "CREATE_FAILED", "physical_resource_id": "33700819-0766-4d30-954b-29aace6048cc", "resource_type": "OS::Heat::ResourceGroup"}, {"parent_resource": "kube_minions", "resource_name": "0", "links": [{"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x-kube_minions-otcpiw3oye46/33700819-0766-4d30-954b-29aace6048cc/resources/0", "rel": "self"}, {"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x-kube_minions-otcpiw3oye46/33700819-0766-4d30-954b-29aace6048cc", "rel": "stack"}, {"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x-kube_minions-otcpiw3oye46-0-ftjzf76onzqn/d1a8214c-c5b0-488c-83d6-f0a9cacbe844", "rel": "nested"}], "logical_resource_id": "0", "creation_time": "2019-03-29T10:40:59Z", "resource_status_reason": "resources[0]: Stack CREATE cancelled", "updated_time": "2019-03-29T10:40:59Z", "required_by": [], "resource_status": "CREATE_FAILED", "physical_resource_id": "d1a8214c-c5b0-488c-83d6-f0a9cacbe844", "resource_type": "file:///var/lib/magnum/env/lib/python2.7/site-packages/magnum/drivers/k8s_fedora_atomic_v1/templates/kubeminion.yaml"}, {"parent_resource": "0", "resource_name": "minion_wait_condition", "links": [{"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x-kube_minions-otcpiw3oye46-0-ftjzf76onzqn/d1a8214c-c5b0-488c-83d6-f0a9cacbe844/resources/minion_wait_condition", "rel": "self"}, {"href": "http://pluto:8004/v1/4c6bc4445c764249921a0a6e40b192dd/stacks/kubernetes-cluster-eovgkanhoa4x-kube_minions-otcpiw3oye46-0-ftjzf76onzqn/d1a8214c-c5b0-488c-83d6-f0a9cacbe844", "rel": "stack"}], 
"logical_resource_id": "minion_wait_condition", "creation_time": "2019-03-29T10:41:01Z", "resource_status": "CREATE_FAILED", "updated_time": "2019-03-29T10:41:01Z", "required_by": [], "resource_status_reason": "CREATE aborted (Task create from HeatWaitCondition \"minion_wait_condition\" Stack \"kubernetes-cluster-eovgkanhoa4x-kube_minions-otcpiw3oye46-0-ftjzf76onzqn\" [d1a8214c-c5b0-488c-83d6-f0a9cacbe844] Timed out)", "physical_resource_id": "", "resource_type": "OS::Heat::WaitCondition"}]}

I am not sure how to debug this further. The software deployment on the master and the wait condition for the minion both simply timed out, so my next step is to look at the logs on the VMs themselves (roughly as below), but please advise if there is a better way.
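
In case it helps, this is what I plan to check on the nodes. I am assuming the Fedora Atomic image used by Magnum keeps the boot-time logs in the usual places and runs the heat agent as a systemd unit, so the unit name below is a guess:

    # the default login user on the fedora-atomic image is 'fedora'
    ssh fedora@<master-or-minion-ip>

    # cloud-init output from the boot-time configuration scripts
    sudo cat /var/log/cloud-init-output.log
    sudo journalctl -u cloud-init --no-pager

    # the agent that should signal the Heat wait conditions (unit name assumed)
    sudo journalctl -u heat-container-agent --no-pager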

Kind Regards,
Navdeep

-----Original Message-----
From: Mohammed Naser <mnaser at vexxhost.com> 
Sent: 28 March 2019 13:27
To: Navdeep Uniyal <navdeep.uniyal at bristol.ac.uk>
Cc: Bharat Kunwar <bharat at stackhpc.com>; openstack at lists.openstack.org
Subject: Re: [Magnum] Cluster Create failure

your placement service seems to be broken :)

On Thu, Mar 28, 2019 at 9:10 AM Navdeep Uniyal <navdeep.uniyal at bristol.ac.uk> wrote:
>
> Yes, there seems to be some issue with the server creation now.
> I will check and try resolving that. Thank you
>
> Regards,
> Navdeep
>
> -----Original Message-----
> From: Bharat Kunwar <bharat at stackhpc.com>
> Sent: 28 March 2019 12:40
> To: Navdeep Uniyal <navdeep.uniyal at bristol.ac.uk>
> Cc: openstack at lists.openstack.org
> Subject: Re: [Magnum] Cluster Create failure
>
> Can you create a server normally?
>


--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

