[Openstack] [Openstack-Ansible] Deploy errors on small setup
fv at spots.school
Mon Oct 23 05:44:25 UTC 2017
Hello!
I am trying to set up an OpenStack cluster but I am not having much
success. I have been fighting it for several weeks, hence my decision
to join this list.
I have 4 hosts all running CentOS 7:
infra1 (Celeron J1900 CPU [4 cores, 2 GHz], 8 GB RAM, 120 GB SSD)
compute1 (Core i7 CPU [4 cores, 4 GHz], 16 GB RAM, 120 GB SSD)
log1 (AMD Athlon CPU [2 cores, 2 GHz], 3 GB RAM, 120 GB HDD)
storage1 (Xeon E3 CPU [4 cores, 2 GHz], 8 GB RAM, 8 TB RAID10)
Considering the none-too-powerful specs of infra1 and log1, I have
set the following services to run on metal (the env.d override
pattern I used is sketched after the list):
aodh_container
ceilometer_central_container
cinder_api_container
cinder_scheduler_container
galera_container
glance_container
gnocchi_container
heat_apis_container
heat_engine_container
horizon_container
keystone_container
memcached_container
neutron_agents_container
neutron_server_container
nova_api_metadata_container
nova_api_os_compute_container
nova_api_placement_container
nova_conductor_container
nova_console_container
nova_scheduler_container
rabbit_mq_container
repo_container
rsyslog_container
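For reference, the overrides themselves live in /etc/openstack_deploy/env.d/,
one small file per service, using the documented container_skel / is_metal
pattern. A minimal sketch of what I mean, with aodh as the example (the file
name aodh.yml is just my choice):

# /etc/openstack_deploy/env.d/aodh.yml
container_skel:
  aodh_container:
    properties:
      is_metal: true
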
When I run setup-hosts.yml I get errors on infra1. The specific
errors vary from run to run, but it generally fails during container
creation.
8<---8<---8<---8<---
TASK [lxc_container_create : LXC autodev setup]
********************************
Thursday 19 October 2017 17:00:28 -0700 (0:01:36.565) 0:57:32.820
******
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_aodh_container-3ef9fdf1]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_utility_container-7578d165]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_horizon_container-4056733b]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
ok: [infra1_nova_scheduler_container-752fb34b -> 172.29.236.11]
ok: [infra1_keystone_container-23bb4cba -> 172.29.236.11]
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_glance_container-299bd597]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
changed: [infra1_neutron_agents_container-e319526f -> 172.29.236.11]
changed: [infra1_cinder_scheduler_container-84442b11 -> 172.29.236.11]
changed: [infra1_neutron_server_container-d19ab320 -> 172.29.236.11]
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_repo_container-9b73f4cd]: FAILED! => {"failed": true,
"msg": "Unexpected failure during module execution.", "stdout": ""}
ok: [infra1_cinder_api_container-05fbf13a -> 172.29.236.11]
ok: [infra1_nova_api_os_compute_container-99a9a1e0 -> 172.29.236.11]
ok: [infra1_nova_api_metadata_container-0a10aa4a -> 172.29.236.11]
ok: [infra1_galera_container-a3be12a1 -> 172.29.236.11]
ok: [infra1_nova_conductor_container-d8c2040f -> 172.29.236.11]
ok: [infra1_nova_console_container-e4a8d3ae -> 172.29.236.11]
ok: [infra1_gnocchi_container-e83732f5 -> 172.29.236.11]
ok: [infra1_rabbit_mq_container-4c8a4541 -> 172.29.236.11]
ok: [infra1_ceilometer_central_container-fe8f973b -> 172.29.236.11]
ok: [infra1_memcached_container-895a7ccf -> 172.29.236.11]
ok: [infra1_nova_api_placement_container-ec10eadb -> 172.29.236.11]
ok: [infra1_heat_apis_container-7579f33e -> 172.29.236.11]
ok: [infra1_heat_engine_container-2a26e880 -> 172.29.236.11]
8<---8<---8<---8<---
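Given the repeated "Cannot allocate memory" failures, I am going to
double-check free RAM and swap on infra1 before the next run. A rough
sketch of the check, using only gathered Ansible facts (this little play
is mine, not part of OSA):

# check_memory.yml (illustrative): report RAM and swap on infra1
- hosts: infra1
  gather_facts: true
  tasks:
    - name: Show free RAM and swap on the host
      debug:
        msg: >-
          RAM free {{ ansible_memfree_mb }} of {{ ansible_memtotal_mb }} MB,
          swap free {{ ansible_swapfree_mb }} of {{ ansible_swaptotal_mb }} MB
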
In case it is useful, here is my user config:
---
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22

used_ips:
  - 172.29.236.1
  - "172.29.236.100,172.29.236.200"
  - "172.29.240.100,172.29.240.200"
  - "172.29.244.100,172.29.244.200"

global_overrides:
  internal_lb_vip_address: 172.29.236.9
  #
  # The below domain name must resolve to an IP address
  # in the CIDR specified in haproxy_keepalived_external_vip_cidr.
  # If using different protocols (https/http) for the public/internal
  # endpoints the two addresses must be different.
  #
  external_lb_vip_address: 172.29.236.10
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "br-vlan"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute

###
### Infrastructure
###

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  infra1:
    ip: 172.29.236.11

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  infra1:
    ip: 172.29.236.11

# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts:
  infra1:
    ip: 172.29.236.11

# rsyslog server
log_hosts:
  log1:
    ip: 172.29.236.14

###
### OpenStack
###

# keystone
identity_hosts:
  infra1:
    ip: 172.29.236.11

# cinder api services
storage-infra_hosts:
  infra1:
    ip: 172.29.236.11

# glance
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
image_hosts:
  infra1:
    ip: 172.29.236.11
    container_vars:
      limit_container_types: glance
      glance_nfs_client:
        - server: "172.29.244.15"
          remote_path: "/images"
          local_path: "/var/lib/glance/images"
          type: "nfs"
          options: "_netdev,auto"

# nova api, conductor, etc services
compute-infra_hosts:
  infra1:
    ip: 172.29.236.11

# heat
orchestration_hosts:
  infra1:
    ip: 172.29.236.11

# horizon
dashboard_hosts:
  infra1:
    ip: 172.29.236.11

# neutron server, agents (L3, etc)
network_hosts:
  infra1:
    ip: 172.29.236.11

# ceilometer (telemetry data collection)
metering-infra_hosts:
  infra1:
    ip: 172.29.236.11

# aodh (telemetry alarm service)
metering-alarm_hosts:
  infra1:
    ip: 172.29.236.11

# gnocchi (telemetry metrics storage)
metrics_hosts:
  infra1:
    ip: 172.29.236.11

# nova hypervisors
compute_hosts:
  compute1:
    ip: 172.29.236.12

# ceilometer compute agent (telemetry data collection)
metering-compute_hosts:
  compute1:
    ip: 172.29.236.12
---
Thanks for any ideas!
FV