[Magnum] Cluster Create failure

Navdeep Uniyal navdeep.uniyal at bristol.ac.uk
Thu Mar 28 12:27:28 UTC 2019


Hi Bharat,

Thank you very much. It worked for me.
However, I am now getting a resource error while starting the cluster. I have enough resources available on the hypervisor, but it still errors out:

faults               | {'0': 'ResourceInError: resources[0].resources.kube-master: Went to status ERROR due to "Message: No valid host was found. , Code: 500"', 'kube_masters': 'ResourceInError: resources.kube_masters.resources[0].resources.kube-master: Went to status ERROR due to "Message: No valid host was found. , Code: 500"', 'kube-master': 'ResourceInError: resources.kube-master: Went to status ERROR due to "Message: No valid host was found. , Code: 500"'} |

Nova Scheduler Error:

2019-03-28 11:36:37.686 79522 ERROR nova.scheduler.client.report [req-82c4fb8b-785b-40bf-82fc-ff9d0e6101a0 b4727cb329c14c388d777d0ce38c8a6b 4c6bc4445c764249921a0a6e40b192dd - default default] Failed to retrieve allocation candidates from placement API for filters {'VCPU': 4, 'MEMORY_MB': 4096, 'DISK_GB': 80}. Got 500: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Internal Server Error</title>
</head><body>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error or
misconfiguration and was unable to complete
your request.</p>
<p>Please contact the server administrator at
[no address given] to inform them of the time this error occurred,
and the actions you performed just before this error.</p>
<p>More information about this error may be available
in the server error log.</p>
<hr>
<address>Apache/2.4.18 (Ubuntu) Server at pluto Port 8778</address>
</body></html>
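
For reference, the 500 above comes from the placement API itself (Apache on host pluto, port 8778), so the "No valid host was found" reported by Magnum/Nova is only a symptom. As a minimal sketch, assuming the osc-placement OSC plugin is installed, the same allocation-candidates query the scheduler makes can be reproduced directly:

openstack allocation candidate list --resource VCPU=4 --resource MEMORY_MB=4096 --resource DISK_GB=80

The Apache error log on the placement host (its exact path depends on the deployment) should then contain the Python traceback hidden behind the generic 500 page.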

Please help me resolve this error.

Kind Regards,
Navdeep


From: Bharat Kunwar <bharat at stackhpc.com>
Sent: 27 March 2019 17:58
To: Navdeep Uniyal <navdeep.uniyal at bristol.ac.uk>
Cc: openstack at lists.openstack.org
Subject: Re: [Magnum] Cluster Create failure

Try this: docker_storage_driver=overlay2 and do not specify docker_volume_size.
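
As a rough sketch of that suggestion (flag names assume the python-magnumclient OSC plugin; the other values are simply copied from the template quoted below, and the existing template of the same name would need to be deleted or renamed first), the template could be recreated with overlay2 and without a Docker volume size:

# Sketch only: no --docker-volume-size is passed, so Magnum should not ask
# Heat for a Cinder-backed Docker volume, and overlay2 replaces devicemapper.
openstack coe cluster template create \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network 5guknet \
  --flavor small \
  --master-flavor small \
  --network-driver flannel \
  --dns-nameserver 8.8.8.8 \
  --docker-storage-driver overlay2 \
  kubernetes-cluster-template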


On 27 Mar 2019, at 17:39, Navdeep Uniyal <navdeep.uniyal at bristol.ac.uk> wrote:

Hi All,

I am trying to create a cluster with Magnum using:
magnum cluster-create kubernetes-cluster --cluster-template kubernetes-cluster-template --master-count 1 --node-count 1 --keypair magnum_key


My Template looks like this:
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| insecure_registry     | -                                    |
| labels                | {}                                   |
| updated_at            | -                                    |
| floating_ip_enabled   | True                                 |
| fixed_subnet          | -                                    |
| master_flavor_id      | small                                |
| user_id               | b4727cb329c14c388d777d0ce38c8a6b     |
| uuid                  | 24d01aea-c968-42e3-bcaa-a2e756aac5c7 |
| no_proxy              | -                                    |
| https_proxy           | -                                    |
| tls_disabled          | False                                |
| keypair_id            | -                                    |
| hidden                | False                                |
| project_id            | 4c6bc4445c764249921a0a6e40b192dd     |
| public                | False                                |
| http_proxy            | -                                    |
| docker_volume_size    | 4                                    |
| server_type           | vm                                   |
| external_network_id   | 5guknet                              |
| cluster_distro        | fedora-atomic                        |
| image_id              | fedora-atomic-latest                 |
| volume_driver         | -                                    |
| registry_enabled      | False                                |
| docker_storage_driver | devicemapper                         |
| apiserver_port        | -                                    |
| name                  | kubernetes-cluster-template          |
| created_at            | 2019-03-27T12:21:27+00:00            |
| network_driver        | flannel                              |
| fixed_network         | -                                    |
| coe                   | kubernetes                           |
| flavor_id             | small                                |
| master_lb_enabled     | False                                |
| dns_nameserver        | 8.8.8.8                              |
+-----------------------+--------------------------------------+


I am getting the following error:

{"explanation": "The server could not comply with the request since it is either malformed or otherwise incorrect.", "code": 400, "error": {"message": "ResourceTypeUnavailable: : resources.kube_masters<nested_st
ack>.resources.0<file:///var/lib/magnum/env/lib/python2.7/site-packages/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml>: : HEAT-E99001 Service cinder is not available for resource type Magnum::Opt
ional::Cinder::Volume, reason: cinder volumev3 endpoint is not in service catalog.", "traceback": null, "type": "StackValidationFailed"}, "title": "Bad Request"}
log_http_response /var/lib/magnum/env/local/lib/python2.7/site-packages/heatclient/common/http.py:157
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server [req-b4037dc5-2fe1-4533-a6e3-433609c6b22d - - - - -] Exception during message handling: InvalidParameterValue: ERROR: ResourceTypeUnavailable: : resources.kube_masters<nested_stack>.resources.0<file:///var/lib/magnum/env/lib/python2.7/site-packages/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml>: : HEAT-E99001 Service cinder is not available for resource type Magnum::Optional::Cinder::Volume, reason: cinder volumev3 endpoint is not in service catalog.
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server   File "/var/lib/magnum/env/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 166, in _process_incoming
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server   File "/var/lib/magnum/env/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server   File "/var/lib/magnum/env/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server   File "/var/lib/magnum/env/local/lib/python2.7/site-packages/magnum/conductor/handlers/cluster_conductor.py", line 80, in cluster_create
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server     raise e
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server InvalidParameterValue: ERROR: ResourceTypeUnavailable: : resources.kube_masters<nested_stack>.resources.0<file:///var/lib/magnum/env/lib/python2.7/site-packages/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml>: : HEAT-E99001 Service cinder is not available for resource type Magnum::Optional::Cinder::Volume, reason: cinder volumev3 endpoint is not in service catalog.
2019-03-27 17:28:11.174 145885 ERROR oslo_messaging.rpc.server



My magnum.conf file is configured as suggested in https://docs.openstack.org/magnum/pike/install/install-guide-from-source.html
I DO NOT have Cinder in my OpenStack deployment; I believe it is optional. Please suggest how I can resolve this issue (a quick check of the service catalog is sketched below).
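
For reference, a quick way to confirm what the Heat validation is reporting, using only standard openstackclient commands, is to inspect the service catalog directly; on a deployment without Cinder there is simply no volumev3 entry:

# No "volumev3" service type should appear here when Cinder is not installed.
openstack catalog list
# Expected to fail or return nothing on a Cinder-less deployment.
openstack endpoint list --service volumev3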



Kind Regards,
Navdeep
