[Openstack] Magnum bay takes forever to create

Pavel Fedin p.fedin at samsung.com
Tue Jan 26 10:29:46 UTC 2016


Hello!

 

I have checked the log files, as I wrote in my message. There are no errors.

The Magnum conductor keeps issuing the same request over and over:

--- cut ---

2016-01-26 11:35:08.904 DEBUG heatclient.common.http [req-2b444c74-5daf-4f42-9a40-07f756faae8b None None] 

HTTP/1.1 200 OK

Date: Tue, 26 Jan 2016 08:35:08 GMT

Connection: keep-alive

Content-Type: application/json; charset=UTF-8

Content-Length: 728

X-Openstack-Request-Id: req-5120412c-ee22-4045-95e8-9f642afa0b7d

 

{"stacks": [{"description": "This template will boot a Kubernetes cluster with one or more minions (as specified by the number_of_minions parameter, which defaults to 1).\n", "parent": null, "stack_status_reason": "Stack CREATE started", "stack_name": "None-2bw32rqna7fg", "stack_user_project_id": "144514ec35c14141bbcdf6850ddafe6f", "tags": null, "creation_time": "2016-01-26T07:36:17", "links": [{"href": "http://106.109.131.169:8004/v1/141dbc1cd0154ae397a9b00371f99eaf/stacks/None-2bw32rqna7fg/b89e92e7-235e-4477-bc8d-ab0cbe185f8b", "rel": "self"}], "updated_time": null, "project": "141dbc1cd0154ae397a9b00371f99eaf", "stack_owner": null, "stack_status": "CREATE_IN_PROGRESS", "id": "b89e92e7-235e-4477-bc8d-ab0cbe185f8b"}]}

from (pid=6517) log_http_response /usr/lib/python2.7/site-packages/heatclient/common/http.py:142

2016-01-26 11:35:19.467 DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks._send_bay_metrics from (pid=6517) run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215

2016-01-26 11:35:19.467 DEBUG magnum.service.periodic [req-6a68cac7-c3f8-4382-9644-10caae6ae3f5 None None] Starting to send bay metrics from (pid=6517) _send_bay_metrics /opt/stack/magnum/magnum/service/periodic.py:149

2016-01-26 11:35:21.600 DEBUG oslo_service.periodic_task [-] Running periodic task MagnumServicePeriodicTasks.update_magnum_service from (pid=6517) run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215

2016-01-26 11:35:21.600 DEBUG magnum.servicegroup.magnum_service_periodic [req-2fead759-735c-4088-a5d9-27f5e8f04828 None None] Update magnum_service from (pid=6517) update_magnum_service /opt/stack/magnum/magnum/servicegroup/magnum_service_periodic.py:42

2016-01-26 11:36:09.771 DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks.sync_bay_status from (pid=6517) run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215

2016-01-26 11:36:09.772 DEBUG magnum.service.periodic [req-51329bff-0757-4e9e-8b18-56e26aa16f8a None None] Starting to sync up bay status from (pid=6517) sync_bay_status /opt/stack/magnum/magnum/service/periodic.py:71

2016-01-26 11:36:09.780 DEBUG keystoneclient.auth.identity.v3.base [req-51329bff-0757-4e9e-8b18-56e26aa16f8a None None] Making authentication request to http://106.109.131.169:5000/v3/auth/tokens from (pid=6517) get_auth_ref /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/v3/base.py:188

2016-01-26 11:36:09.864 DEBUG heatclient.common.http [req-51329bff-0757-4e9e-8b18-56e26aa16f8a None None] curl -g -i -X GET -H 'X-Auth-Token: {SHA1}6352991719a247283206995c5b59015d71b35b67' -H 'Content-Type: application/json' -H 'X-Auth-Url: http://106.109.131.169:5000/v3' -H 'Accept: application/json' -H 'User-Agent: python-heatclient' http://106.109.131.169:8004/v1/31d99128330d4e15a11410ae4be95767/stacks?id=b89e92e7-235e-4477-bc8d-ab0cbe185f8b&global_tenant=True from (pid=6517) log_curl_request /usr/lib/python2.7/site-packages/heatclient/common/http.py:129

2016-01-26 11:36:09.921 DEBUG heatclient.common.http [req-51329bff-0757-4e9e-8b18-56e26aa16f8a None None] 

HTTP/1.1 200 OK

Date: Tue, 26 Jan 2016 08:36:09 GMT

Connection: keep-alive

Content-Type: application/json; charset=UTF-8

Content-Length: 728

X-Openstack-Request-Id: req-171ae752-941d-45d1-b01a-fba36e7d2667

 

{"stacks": [{"description": "This template will boot a Kubernetes cluster with one or more minions (as specified by the number_of_minions parameter, which defaults to 1).\n", "parent": null, "stack_status_reason": "Stack CREATE started", "stack_name": "None-2bw32rqna7fg", "stack_user_project_id": "144514ec35c14141bbcdf6850ddafe6f", "tags": null, "creation_time": "2016-01-26T07:36:17", "links": [{"href": "http://106.109.131.169:8004/v1/141dbc1cd0154ae397a9b00371f99eaf/stacks/None-2bw32rqna7fg/b89e92e7-235e-4477-bc8d-ab0cbe185f8b", "rel": "self"}], "updated_time": null, "project": "141dbc1cd0154ae397a9b00371f99eaf", "stack_owner": null, "stack_status": "CREATE_IN_PROGRESS", "id": "b89e92e7-235e-4477-bc8d-ab0cbe185f8b"}]}

from (pid=6517) log_http_response /usr/lib/python2.7/site-packages/heatclient/common/http.py:142

--- cut ---

 

This is from the Heat API:

--- cut ---

 

2016-01-26 12:50:52.469 DEBUG eventlet.wsgi.server [req-be149504-53d6-4510-95ce-3f65d082a41f None demo] 106.109.131.169 - - [26/Jan/2016 12:50:52] "GET /v1/141dbc1cd0154ae397a9b00371f99eaf/stacks/None-2bw32rqna7fg/b89e92e7-235e-4477-bc8d-ab0cbe185f8b HTTP/1.1" 200 3828 1.241434 from (pid=7189) write /opt/stack/heat/heat/common/wsgi.py:265

2016-01-26 12:50:52.471 DEBUG eventlet.wsgi.server [-] (7189) accepted ('106.109.131.169', 39613) from (pid=7189) write /opt/stack/heat/heat/common/wsgi.py:265

2016-01-26 12:50:52.472 DEBUG heat.api.middleware.version_negotiation [-] Processing request: GET /v1/141dbc1cd0154ae397a9b00371f99eaf/stacks/b89e92e7-235e-4477-bc8d-ab0cbe185f8b/template Accept: application/json from (pid=7189) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:50

2016-01-26 12:50:52.472 DEBUG heat.api.middleware.version_negotiation [-] Matched versioned URI. Version: 1.0 from (pid=7189) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:65

2016-01-26 12:50:52.495 DEBUG oslo_policy._cache_handler [req-9fc6161a-8355-4521-9c1d-3cdf1442ba84 None demo] Reloading cached file /etc/heat/policy.json from (pid=7189) read_cached_file /usr/lib/python2.7/site-packages/oslo_policy/_cache_handler.py:38

2016-01-26 12:50:52.496 DEBUG oslo_policy.policy [req-9fc6161a-8355-4521-9c1d-3cdf1442ba84 None demo] Reloaded policy file: /etc/heat/policy.json from (pid=7189) _load_policy_file /usr/lib/python2.7/site-packages/oslo_policy/policy.py:493

2016-01-26 12:50:52.497 DEBUG heat.common.wsgi [req-9fc6161a-8355-4521-9c1d-3cdf1442ba84 None demo] Calling <heat.api.openstack.v1.stacks.StackController object at 0x49cc290> : lookup from (pid=7189) __call__ /opt/stack/heat/heat/common/wsgi.py:850

2016-01-26 12:50:52.498 DEBUG oslo_messaging._drivers.amqpdriver [req-9fc6161a-8355-4521-9c1d-3cdf1442ba84 None demo] CALL msg_id: 5bb186f7cc98436e9c26004cab3b5d8a exchange 'heat' topic 'engine' from (pid=7189) _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:448

2016-01-26 12:50:52.518 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 5bb186f7cc98436e9c26004cab3b5d8a from (pid=7189) __call__ /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:296

2016-01-26 12:50:52.521 DEBUG eventlet.wsgi.server [req-9fc6161a-8355-4521-9c1d-3cdf1442ba84 None demo] 106.109.131.169 - - [26/Jan/2016 12:50:52] "GET /v1/141dbc1cd0154ae397a9b00371f99eaf/stacks/b89e92e7-235e-4477-bc8d-ab0cbe185f8b/template HTTP/1.1" 302 571 0.048802 from (pid=7189) write /opt/stack/heat/heat/common/wsgi.py:265

2016-01-26 12:50:52.523 DEBUG eventlet.wsgi.server [-] (7189) accepted ('106.109.131.169', 39614) from (pid=7189) write /opt/stack/heat/heat/common/wsgi.py:265

2016-01-26 12:50:52.524 DEBUG heat.api.middleware.version_negotiation [-] Processing request: GET /v1/141dbc1cd0154ae397a9b00371f99eaf/stacks/None-2bw32rqna7fg/b89e92e7-235e-4477-bc8d-ab0cbe185f8b/template Accept: application/json from (pid=7189) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:50

2016-01-26 12:50:52.524 DEBUG heat.api.middleware.version_negotiation [-] Matched versioned URI. Version: 1.0 from (pid=7189) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:65

2016-01-26 12:50:52.553 DEBUG oslo_policy._cache_handler [req-2949d7e3-0eaf-4acb-81d5-3e529ae67baa None demo] Reloading cached file /etc/heat/policy.json from (pid=7189) read_cached_file /usr/lib/python2.7/site-packages/oslo_policy/_cache_handler.py:38

2016-01-26 12:50:52.554 DEBUG oslo_policy.policy [req-2949d7e3-0eaf-4acb-81d5-3e529ae67baa None demo] Reloaded policy file: /etc/heat/policy.json from (pid=7189) _load_policy_file /usr/lib/python2.7/site-packages/oslo_policy/policy.py:493

2016-01-26 12:50:52.555 DEBUG heat.common.wsgi [req-2949d7e3-0eaf-4acb-81d5-3e529ae67baa None demo] Calling <heat.api.openstack.v1.stacks.StackController object at 0x49cc290> : template from (pid=7189) __call__ /opt/stack/heat/heat/common/wsgi.py:850

2016-01-26 12:50:52.556 DEBUG oslo_messaging._drivers.amqpdriver [req-2949d7e3-0eaf-4acb-81d5-3e529ae67baa None demo] CALL msg_id: 4de1122b97664aa783e55753564c31c1 exchange 'heat' topic 'engine' from (pid=7189) _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:448

2016-01-26 12:50:52.583 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 4de1122b97664aa783e55753564c31c1 from (pid=7189) __call__ /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:296

2016-01-26 12:50:52.584 DEBUG heat.common.serializers [req-2949d7e3-0eaf-4acb-81d5-3e529ae67baa None demo] JSON response : {"outputs": {"kube_masters_private": {"description": "This is a list of the \"private\" addresses of all the Kubernetes masters.\n", "value": {"get_attr": ["kube_masters", "kube_master_ip"]}}, "kube_masters": {"description": "This is a list of \"public\" ip addresses of all Kubernetes master servers. Use these addresses to log in to the Kubernetes masters via ssh.\n", "value": {"get_attr": ["kube_masters", "kube_master_external_ip"]}}, "api_address": {"description": "This is the API endpoint of the Kubernetes server. Use this to access the Kubernetes API from outside the cluster.\n", "value": {"str_replace": {"params": {"api_ip_address": {"get_attr": ["api_pool_floating", "floating_ip_address"]}}, "template": "api_ip_address"}}}, "kube_minions_private": {"description": "This is a list of the \"private\" addresses of all the Kubernetes minions.\n", "value": {"get_attr": ["kube_minions", "kube_minion_ip"]}}, "kube_minions": {"description": "This is a list of the \"public\" addresses of all the Kubernetes minions. 
Use these addresses to, e.g., log into the minions.\n", "value": {"get_attr": ["kube_minions", "kube_minion_external_ip"]}}, "registry_address": {"description": "This is the url of docker registry server where you can store docker images.", "value": {"str_replace": {"params": {"port": {"get_param": "registry_port"}}, "template": "localhost:port"}}}}, "heat_template_version": "2013-05-23", "description": "This template will boot a Kubernetes cluster with one or more minions (as specified by the number_of_minions parameter, which defaults to 1).\n", "parameters": {"fixed_network_cidr": {"default": "10.0.0.0/24", "type": "string", "description": "network range for fixed ip network"}, "registry_auth_url": {"default": "auth_url", "type": "string", "description": "auth_url for keystone"}, "magnum_url": {"type": "string", "description": "endpoint to retrieve TLS certs from"}, "number_of_masters": {"default": 1, "type": "number", "description": "how many kubernetes masters to spawn"}, "tenant_name": {"type": "string", "description": "tenant name\n"}, "bay_uuid": {"type": "string", "description": "identifier for the bay this template is generating"}, "registry_region": {"default": "region", "type": "string", "description": "region of swift service"}, "minion_flavor": {"default": "m1.small", "type": "string", "description": "flavor to use when booting the server"}, "portal_network_cidr": {"default": "10.254.0.0/16", "type": "string", "description": "address range used by kubernetes for service portals\n"}, "auth_url": {"type": "string", "description": "url for kubernetes to authenticate before sending request to neutron must be v2 since kubernetes backend only suppor v2 at this point\n"}, "wait_condition_timeout": {"default": 6000, "type": "number", "description": "timeout for the Wait Conditions\n"}, "kubernetes_port": {"default": 6443, "type": "number", "description": "The port which are used by kube-apiserver to provide Kubernetes service.\n"}, "external_network": 
{"default": "public", "type": "string", "description": "uuid/name of a network to use for floating ip addresses"}, "registry_port": {"default": 5000, "type": "number", "description": "port of registry service"}, "registry_password": {"default": "password", "hidden": true, "type": "string", "description": "password used by docker registry"}, "registry_domain": {"default": "domain", "type": "string", "description": "domain used by docker registry"}, "master_flavor": {"default": "m1.small", "type": "string", "description": "flavor to use when booting the server"}, "minions_to_remove": {"default": [], "type": "comma_delimited_list", "description": "List of minions to be removed when doing an update. Individual minion may be referenced several ways: (1) The resource name (e.g. ['1', '3']), (2) The private IP address ['10.0.0.4', '10.0.0.6']. Note: the list should be empty when doing an create.\n"}, "https_proxy": {"default": "", "type": "string", "description": "https proxy address for docker"}, "tls_disabled": {"default": false, "type": "boolean", "description": "whether or not to disable TLS"}, "ssh_key_name": {"type": "string", "description": "name of ssh key to be provisioned on our server"}, "username": {"type": "string", "description": "user account\n"}, "http_proxy": {"default": "", "type": "string", "description": "http proxy address for docker"}, "docker_volume_size": {"default": 25, "type": "number", "description": "size of a cinder volume to allocate to docker for container/image storage\n"}, "registry_container": {"default": "container", "type": "string", "description": "name of swift container which docker registry stores images in\n"}, "registry_trust_id": {"default": "trust_id", "hidden": true, "type": "string", "description": "trust_id used by docker registry"}, "registry_enabled": {"default": false, "type": "boolean", "description": "Indicates whether the docker registry is enabled.\n"}, "kube_allow_priv": {"default": "true", "type": "string", 
"description": "whether or not kubernetes should permit privileged containers.\n", "constraints": [{"allowed_values": ["true", "false"]}]}, "password": {"default": "ChangeMe", "type": "string", "description": "user password, not set in current implementation, only used to fill in for Kubernetes config file\n"}, "loadbalancing_protocol": {"default": "TCP", "type": "string", "description": "The protocol which is used for load balancing. If you want to change tls_disabled option to 'True', please change this to \"HTTP\".\n", "constraints": [{"allowed_values": ["TCP", "HTTP"]}]}, "flannel_use_vxlan": {"default": "false", "type": "string", "description": "if true use the vxlan backend, otherwise use the default udp backend\n", "constraints": [{"allowed_values": ["true", "false"]}]}, "registry_username": {"default": "username", "type": "string", "description": "username used by docker registry"}, "flannel_network_subnetlen": {"default": 24, "type": "string", "description": "size of subnet assigned to each minion"}, "registry_chunksize": {"default": 5242880, "type": "number", "description": "size fo the data segments for the swift dynamic large objects\n"}, "user_token": {"type": "string", "description": "token used for communicating back to Magnum for TLS certs"}, "network_driver": {"default": "flannel", "type": "string", "description": "network driver to use for instantiating container networks"}, "no_proxy": {"default": "", "type": "string", "description": "no proxies for docker"}, "number_of_minions": {"default": 1, "type": "number", "description": "how many kubernetes minions to spawn"}, "registry_insecure": {"default": true, "type": "boolean", "description": "indicates whether to skip TLS verification between registry and backend storage\n"}, "flannel_network_cidr": {"default": "10.100.0.0/16", "type": "string", "description": "network range for flannel overlay network"}, "discovery_url": {"type": "string", "description": "Discovery URL used for bootstrapping the 
etcd cluster.\n"}, "dns_nameserver": {"default": "8.8.8.8", "type": "string", "description": "address of a dns nameserver reachable in your environment"}, "server_image": {"type": "string", "description": "glance image used to boot the server"}}, "resources": {"api_monitor": {"type": "OS::Neutron::HealthMonitor", "properties": {"delay": 5, "max_retries": 5, "type": "TCP", "timeout": 5}}, "extrouter_inside": {"type": "OS::Neutron::RouterInterface", "properties": {"router_id": {"get_resource": "extrouter"}, "subnet": {"get_resource": "fixed_subnet"}}}, "kube_masters": {"depends_on": ["extrouter_inside"], "type": "OS::Heat::ResourceGroup", "properties": {"count": {"get_param": "number_of_masters"}, "resource_def": {"type": "file:///opt/stack/magnum/magnum/templates/kubernetes/kubemaster.yaml", "properties": {"magnum_url": {"get_param": "magnum_url"}, "tenant_name": {"get_param": "tenant_name"}, "bay_uuid": {"get_param": "bay_uuid"}, "http_proxy": {"get_param": "http_proxy"}, "api_pool_id": {"get_resource": "api_pool"}, "user_token": {"get_param": "user_token"}, "auth_url": {"get_param": "auth_url"}, "wait_condition_timeout": {"get_param": "wait_condition_timeout"}, "kubernetes_port": {"get_param": "kubernetes_port"}, "external_network": {"get_param": "external_network"}, "fixed_subnet": {"get_resource": "fixed_subnet"}, "api_public_address": {"get_attr": ["api_pool_floating", "floating_ip_address"]}, "api_private_address": {"get_attr": ["api_pool", "vip", "address"]}, "master_flavor": {"get_param": "master_flavor"}, "https_proxy": {"get_param": "https_proxy"}, "tls_disabled": {"get_param": "tls_disabled"}, "username": {"get_param": "username"}, "docker_volume_size": {"get_param": "docker_volume_size"}, "secgroup_base_id": {"get_resource": "secgroup_base"}, "kube_allow_priv": {"get_param": "kube_allow_priv"}, "secgroup_kube_master_id": {"get_resource": "secgroup_kube_master"}, "password": {"get_param": "password"}, "flannel_use_vxlan": {"get_param": 
"flannel_use_vxlan"}, "portal_network_cidr": {"get_param": "portal_network_cidr"}, "etcd_pool_id": {"get_resource": "etcd_pool"}, "network_driver": {"get_param": "network_driver"}, "fixed_network": {"get_resource": "fixed_network"}, "no_proxy": {"get_param": "no_proxy"}, "ssh_key_name": {"get_param": "ssh_key_name"}, "flannel_network_subnetlen": {"get_param": "flannel_network_subnetlen"}, "flannel_network_cidr": {"get_param": "flannel_network_cidr"}, "discovery_url": {"get_param": "discovery_url"}, "server_image": {"get_param": "server_image"}}}}}, "etcd_monitor": {"type": "OS::Neutron::HealthMonitor", "properties": {"delay": 5, "max_retries": 5, "type": "TCP", "timeout": 5}}, "secgroup_kube_master": {"type": "OS::Neutron::SecurityGroup", "properties": {"rules": [{"protocol": "tcp", "port_range_max": 7080, "port_range_min": 7080}, {"protocol": "tcp", "port_range_max": 8080, "port_range_min": 8080}, {"protocol": "tcp", "port_range_max": 2379, "port_range_min": 2379}, {"protocol": "tcp", "port_range_max": 2380, "port_range_min": 2380}, {"protocol": "tcp", "port_range_max": 6443, "port_range_min": 6443}, {"protocol": "tcp", "port_range_max": 32767, "port_range_min": 30000}]}}, "secgroup_kube_minion": {"type": "OS::Neutron::SecurityGroup", "properties": {"rules": [{"protocol": "icmp"}, {"protocol": "tcp"}, {"protocol": "udp"}]}}, "etcd_pool": {"type": "OS::Neutron::Pool", "properties": {"subnet": {"get_resource": "fixed_steway_info": {"network": {"get_param": "external_network"}}}}, "fixed_network": {"type": "OS::Neutron::Net", "properties": {"name": "private"}}, "api_pool": {"type": "OS::Neutron::Pool", "properties": {"subnet": {"get_resource": "fixed_subnet"}, "vip": {"protocol_port": {"get_param": "kubernetes_port"}}, "lb_method": "ROUND_ROBIN", "protocol": {"get_param": "loadbalancing_protocol"}, "monitors": [{"get_resource": "api_monitor"}]}}, "kube_minions": {"depends_on": ["extrouter_inside", "kube_masters"], "type": "OS::Heat::ResourceGroup", "properties": 
{"count": {"get_param": "number_of_minions"}, "resource_def": {"type": "file:///opt/stack/magnum/magnum/templates/kubernetes/kubeminion.yaml", "properties": {"registry_auth_url": {"get_param": "registry_auth_url"}, "magnum_url": {"get_param": "magnum_url"}, "wait_condition_timeout": {"get_param": "wait_condition_timeout"}, "bay_uuid": {"get_param": "bay_uuid"}, "http_proxy": {"get_param": "http_proxy"}, "minion_flavor": {"get_param": "minion_flavor"}, "user_token": {"get_param": "user_token"}, "registry_container": {"get_param": "registry_container"}, "kubernetes_port": {"get_param": "kubernetes_port"}, "registry_password": {"get_param": "registry_password"}, "registry_port": {"get_param": "registry_port"}, "external_network": {"get_param": "external_network"}, "fixed_subnet": {"get_resource": "fixed_subnet"}, "secgroup_kube_minion_id": {"get_resource": "secgroup_kube_minion"}, "registry_domain": {"get_param": "registry_domain"}, "no_proxy": {"get_param": "no_proxy"}, "https_proxy": {"get_param": "https_proxy"}, "tls_disabled": {"get_param": "tls_disabled"}, "registry_region": {"get_param": "registry_region"}, "docker_volume_size": {"get_param": "docker_volume_size"}, "registry_trust_id": {"get_param": "registry_trust_id"}, "registry_enabled": {"get_param": "registry_enabled"}, "kube_allow_priv": {"get_param": "kube_allow_priv"}, "etcd_server_ip": {"get_attr": ["etcd_pool", "vip", "address"]}, "kube_master_ip": {"get_attr": ["api_pool", "vip", "address"]}, "registry_username": {"get_param": "registry_username"}, "registry_chunksize": {"get_param": "registry_chunksize"}, "network_driver": {"get_param": "network_driver"}, "fixed_network": {"get_resource": "fixed_network"}, "ssh_key_name": {"get_param": "ssh_key_name"}, "registry_insecure": {"get_param": "registry_insecure"}, "server_image": {"get_param": "server_image"}}}, "removal_policies": [{"resource_list": {"get_param": "minions_to_remove"}}]}}, "secgroup_base": {"type": "OS::Neutron::SecurityGroup", 
"properties": {"rules": [{"protocol": "icmp"}, {"protocol": "tcp", "port_range_max": 22, "port_range_min": 22}]}}, "api_pool_floating": {"depends_on": ["extrouter_inside"], "type": "OS::Neutron::FloatingIP", "properties": {"floating_network": {"get_param": "external_network"}, "port_id": {"get_attr": ["api_pool", "vip", "port_id"]}}}, "fixed_subnet": {"type": "OS::Neutron::Subnet", "properties": {"cidr": {"get_param": "fixed_network_cidr"}, "dns_nameservers": [{"get_param": "dns_nameserver"}], "network": {"get_resource": "fixed_network"}}}}} from (pid=7189) to_json /opt/stack/heat/heat/common/serializers.py:40

2016-01-26 12:50:52.585 DEBUG eventlet.wsgi.server [req-2949d7e3-0eaf-4acb-81d5-3e529ae67baa None demo] 106.109.131.169 - - [26/Jan/2016 12:50:52] "GET /v1/141dbc1cd0154ae397a9b00371f99eaf/stacks/None-2bw32rqna7fg/b89e92e7-235e-4477-bc8d-ab0cbe185f8b/template HTTP/1.1" 200 13943 0.061665 from (pid=7189) write /opt/stack/heat/heat/common/wsgi.py:265

2016-01-26 12:50:52.649 DEBUG eventlet.wsgi.server [-] (7189) accepted ('106.109.131.169', 39615) from (pid=7189) write /opt/stack/heat/heat/common/wsgi.py:265

2016-01-26 12:50:52.650 DEBUG heat.api.middleware.version_negotiation [-] Processing request: GET /v1/141dbc1cd0154ae397a9b00371f99eaf/stacks/None-2bw32rqna7fg/b89e92e7-235e-4477-bc8d-ab0cbe185f8b/events Accept: application/json from (pid=7189) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:50

2016-01-26 12:50:52.650 DEBUG heat.api.middleware.version_negotiation [-] Matched versioned URI. Version: 1.0 from (pid=7189) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:65

2016-01-26 12:50:52.673 DEBUG oslo_policy._cache_handler [req-d94a86c3-f4a8-43c6-8195-6ad3e6757c1c None demo] Reloading cached file /etc/heat/policy.json from (pid=7189) read_cached_file /usr/lib/python2.7/site-packages/oslo_policy/_cache_handler.py:38

2016-01-26 12:50:52.675 DEBUG oslo_policy.policy [req-d94a86c3-f4a8-43c6-8195-6ad3e6757c1c None demo] Reloaded policy file: /etc/heat/policy.json from (pid=7189) _load_policy_file /usr/lib/python2.7/site-packages/oslo_policy/policy.py:493

--- cut ---

 

And this is from the Cinder API:

--- cut ---

2016-01-26 12:21:44.002 DEBUG eventlet.wsgi.server [-] (7022) accepted ('106.109.131.169', 34800) from (pid=7022) server /usr/lib/python2.7/site-packages/eventlet/wsgi.py:826

2016-01-26 12:21:44.066 INFO cinder.api.openstack.wsgi [req-e048acc2-e27f-49e4-9ace-81d8a23e7f8c 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] GET http://106.109.131.169:8776/v2/141dbc1cd0154ae397a9b00371f99eaf/volumes/f44cafe8-d989-48f0-8c5b-b614cd4ed8eb

2016-01-26 12:21:44.066 DEBUG cinder.api.openstack.wsgi [req-e048acc2-e27f-49e4-9ace-81d8a23e7f8c 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] Empty body provided in request from (pid=7022) get_body /opt/stack/cinder/cinder/api/openstack/wsgi.py:867

2016-01-26 12:21:44.169 DEBUG object [req-e048acc2-e27f-49e4-9ace-81d8a23e7f8c 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] Cinder object Volume has no attribute named: type from (pid=7022) get /opt/stack/cinder/cinder/objects/base.py:260

2016-01-26 12:21:44.169 INFO cinder.volume.api [req-e048acc2-e27f-49e4-9ace-81d8a23e7f8c 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] Volume info retrieved successfully.

2016-01-26 12:21:44.183 INFO cinder.api.openstack.wsgi [req-e048acc2-e27f-49e4-9ace-81d8a23e7f8c 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] http://106.109.131.169:8776/v2/141dbc1cd0154ae397a9b00371f99eaf/volumes/f44cafe8-d989-48f0-8c5b-b614cd4ed8eb returned with HTTP 200

2016-01-26 12:21:44.184 INFO eventlet.wsgi.server [req-e048acc2-e27f-49e4-9ace-81d8a23e7f8c 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] 106.109.131.169 "GET /v2/141dbc1cd0154ae397a9b00371f99eaf/volumes/f44cafe8-d989-48f0-8c5b-b614cd4ed8eb HTTP/1.1" status: 200  len: 1316 time: 0.1816440

2016-01-26 12:21:45.188 DEBUG eventlet.wsgi.server [-] (7022) accepted ('106.109.131.169', 34802) from (pid=7022) server /usr/lib/python2.7/site-packages/eventlet/wsgi.py:826

2016-01-26 12:21:48.922 INFO cinder.api.openstack.wsgi [req-309fef17-0841-4fc0-ad72-36b6b2ed36d8 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] GET http://106.109.131.169:8776/v2/141dbc1cd0154ae397a9b00371f99eaf/volumes/f44cafe8-d989-48f0-8c5b-b614cd4ed8eb

2016-01-26 12:21:48.922 DEBUG cinder.api.openstack.wsgi [req-309fef17-0841-4fc0-ad72-36b6b2ed36d8 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] Empty body provided in request from (pid=7022) get_body /opt/stack/cinder/cinder/api/openstack/wsgi.py:867

2016-01-26 12:21:49.124 DEBUG object [req-309fef17-0841-4fc0-ad72-36b6b2ed36d8 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] Cinder object Volume has no attribute named: type from (pid=7022) get /opt/stack/cinder/cinder/objects/base.py:260

2016-01-26 12:21:49.124 INFO cinder.volume.api [req-309fef17-0841-4fc0-ad72-36b6b2ed36d8 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] Volume info retrieved successfully.

2016-01-26 12:21:49.143 INFO cinder.api.openstack.wsgi [req-309fef17-0841-4fc0-ad72-36b6b2ed36d8 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] http://106.109.131.169:8776/v2/141dbc1cd0154ae397a9b00371f99eaf/volumes/f44cafe8-d989-48f0-8c5b-b614cd4ed8eb returned with HTTP 200

2016-01-26 12:21:49.144 INFO eventlet.wsgi.server [req-309fef17-0841-4fc0-ad72-36b6b2ed36d8 7d92d42ebf5c4c379b2e6f092c7863ba 141dbc1cd0154ae397a9b00371f99eaf] 106.109.131.169 "GET /v2/141dbc1cd0154ae397a9b00371f99eaf/volumes/f44cafe8-d989-48f0-8c5b-b614cd4ed8eb HTTP/1.1" status: 200  len: 1316 time: 3.9552031

--- cut ---

 

That’s all the activity.
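The stuck stack can also be inspected directly with the heat CLI. A sketch (commands from the python-heatclient of this era; the stack name is taken from the log above):

```shell
# List resources, recursing into the nested stacks created by the
# kube_masters / kube_minions ResourceGroups, to find which resource
# is still CREATE_IN_PROGRESS
heat resource-list -n 5 None-2bw32rqna7fg

# The event history shows the last resource that made progress
heat event-list None-2bw32rqna7fg
```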

 

Kind regards,

Pavel Fedin

Senior Engineer

Samsung Electronics Research center Russia

 

From: Jay Lau [mailto:jay.lau.513 at gmail.com] 
Sent: 26 January 2016 12:08
To: Pavel Fedin
Cc: openstack at lists.openstack.org
Subject: Re: [Openstack] Magnum bay takes forever to create

 

I think you can check the heat log first to see if there is anything wrong with the stack creation, then check the logs of the related components, such as cinder, neutron, etc., to see what is wrong.
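On a devstack host, a rough sketch of that workflow (log file names and locations depend on how devstack was configured to log; these are common defaults):

```shell
# The heat engine log is usually the most informative for stuck stacks
grep -iE 'error|trace' /opt/stack/logs/h-eng.log

# Then the components the stack depends on
grep -iE 'error|trace' /opt/stack/logs/c-vol.log   # cinder volume
grep -iE 'error|trace' /opt/stack/logs/q-svc.log   # neutron server
grep -iE 'error|trace' /opt/stack/logs/n-cpu.log   # nova compute
```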

 

On Tue, Jan 26, 2016 at 3:53 PM, Pavel Fedin <p.fedin at samsung.com <mailto:p.fedin at samsung.com> > wrote:

 Hello everybody!

 I am quite new to OpenStack, and need to dive into it because of the
goals of our project. So far, I have installed devstack and tried to create a
Magnum bay, and I am having a hard time doing it. I successfully defeated the
"missing losetup" problem, and now I have the next one. The creation process
never gets past the "kube_masters create in progress" point. I see that a
Cinder volume is initialized and in the "Available" state, but it does not
seem to be attached to anything (as far as I currently understand things, it
should be attached to the VM instance holding my bay). There are no error
messages in the logs. How can I debug this, and does anybody know what the
reason could be?
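One way to see whether the instance itself is stuck (a sketch; the instance name below is a placeholder for whatever `nova list` shows for the kube master):

```shell
# Check whether the kube master instance actually booted
nova list

# The kubemaster template ends in a wait condition; if cloud-init inside
# the guest never signals it, the stack sits in CREATE_IN_PROGRESS until
# the wait_condition_timeout (6000 s by default) expires
nova console-log <kube-master-instance> | tail -50
```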

Kind regards,
Pavel Fedin
Senior Engineer
Samsung Electronics Research center Russia



_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack at lists.openstack.org <mailto:openstack at lists.openstack.org> 
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




-- 

Thanks,

Jay Lau (Guangya Liu)
