[Openstack] [Openstack-Ansible] Unable to install Openstack Queens using Ansible

Satish Patel satish.txt at gmail.com
Tue Sep 18 23:16:03 UTC 2018


You have 12GB on the controller node where mysql is going to run. I would say
you need to tweak some settings in
/etc/openstack_deploy/user_variables.yml; the default behaviour is to grab
as many resources as it can, so play with those settings to
limit memory use.

Something like this (and there are more you can set). The following is
only an example; don't try it in production.

## Galera settings
galera_monitoring_allowed_source: >-
  192.168.100.246 192.168.100.239 192.168.100.88 192.168.100.3
  192.168.100.2 192.168.100.1 1.1.1.1 2.2.2.2 127.0.0.1
galera_innodb_buffer_pool_size: 16M
galera_innodb_log_buffer_size: 4M
galera_wsrep_provider_options:
 - { option: "gcache.size", value: "4M" }
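For a rough sense of what those Galera values add up to, here is a
back-of-envelope sketch only; it ignores per-connection buffers and
mysqld's other allocations, so treat the total as a floor, not a limit:

```shell
# Sum the three explicit Galera allocations from the settings above (MB).
# This is NOT a complete memory model for mysqld -- just the tunables shown.
buffer_pool_mb=16   # galera_innodb_buffer_pool_size
log_buffer_mb=4     # galera_innodb_log_buffer_size
gcache_mb=4         # gcache.size in galera_wsrep_provider_options
echo $((buffer_pool_mb + log_buffer_mb + gcache_mb))   # -> 24
```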

## Neutron settings
neutron_metadata_checksum_fix: True

### Set workers for all services to optimise memory usage

## Repo
repo_nginx_threads: 2

## Keystone
keystone_httpd_mpm_start_servers: 2
keystone_httpd_mpm_min_spare_threads: 1
keystone_httpd_mpm_max_spare_threads: 2
keystone_httpd_mpm_thread_limit: 2
keystone_httpd_mpm_thread_child: 1
keystone_wsgi_threads: 1
keystone_wsgi_processes_max: 2

## Barbican
barbican_wsgi_processes: 2
barbican_wsgi_threads: 1

## Glance
glance_api_threads_max: 2
glance_api_threads: 1
glance_api_workers: 1
glance_registry_workers: 1

## Nova
nova_wsgi_threads: 1
nova_wsgi_processes_max: 2
nova_wsgi_processes: 2
nova_wsgi_buffer_size: 16384
nova_api_threads_max: 2
nova_api_threads: 1
nova_osapi_compute_workers: 1
nova_conductor_workers: 1
nova_metadata_workers: 1
## tux - new console (spice-html5 has been removed)
nova_console_type: novnc
## Tux - Live migration
nova_libvirtd_listen_tls: 0
nova_libvirtd_listen_tcp: 1
nova_libvirtd_auth_tcp: "none"

## Neutron
neutron_rpc_workers: 1
neutron_metadata_workers: 1
neutron_api_workers: 1
neutron_api_threads_max: 2
neutron_api_threads: 2
neutron_num_sync_threads: 1
neutron_linuxbridge_agent_ini_overrides:
  linux_bridge:
    physical_interface_mappings: vlan:br-vlan

## Heat
heat_api_workers: 1
heat_api_threads_max: 2
heat_api_threads: 1
heat_wsgi_threads: 1
heat_wsgi_processes_max: 2
heat_wsgi_processes: 1
heat_wsgi_buffer_size: 16384

## Horizon
horizon_wsgi_processes: 1
horizon_wsgi_threads: 1
horizon_wsgi_threads_max: 2

## Ceilometer
ceilometer_notification_workers_max: 2
ceilometer_notification_workers: 1

## AODH
aodh_wsgi_threads: 1
aodh_wsgi_processes_max: 2
aodh_wsgi_processes: 1

## Gnocchi
gnocchi_wsgi_threads: 1
gnocchi_wsgi_processes_max: 2
gnocchi_wsgi_processes: 1

## Swift
swift_account_server_replicator_workers: 1
swift_server_replicator_workers: 1
swift_object_replicator_workers: 1
swift_account_server_workers: 1
swift_container_server_workers: 1
swift_object_server_workers: 1
swift_proxy_server_workers_max: 2
swift_proxy_server_workers_not_capped: 1
swift_proxy_server_workers_capped: 1
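To confirm it really is the OOM killer taking mysqld down before retuning
anything, grep the kernel log for its kill message. A minimal sketch, using
a hypothetical sample log line rather than live `dmesg` output:

```shell
# Hypothetical sample of the line the kernel writes when it kills a process;
# on a real box, pipe `dmesg` output through the grep instead.
log_line="Out of memory: Kill process 1234 (mysqld) score 900 or sacrifice child"
echo "$log_line" | grep -oE 'Kill process [0-9]+ \([^)]+\)'
# -> Kill process 1234 (mysqld)
```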
On Tue, Sep 18, 2018 at 9:23 AM Mohammed Naser <mnaser at vexxhost.com> wrote:
>
> Hi,
>
> 4GB of memory is not enough for a deployment unfortunately.
>
> You’ll have to bump it up.
>
> Thanks
> Mohammed
>
> Sent from my iPhone
>
> > On Sep 18, 2018, at 7:04 AM, Budai Laszlo <laszlo.budai at gmail.com> wrote:
> >
> > Hi,
> >
> > run dmesg on your deployment host. It should print which process has been evicted by the OOM killer.
> > We had similar issues with our deployment host. We had to increase its memory to 9G to have openstack-ansible working properly.
> > You should also monitor the memory usage of your processes on the controller/deployment host.
> >
> > good luck,
> > Laszlo
> >
> >> On 18.09.2018 13:43, Anirudh Gupta wrote:
> >> Hi Team,
> >> I am installing Open Stack Queens using the Openstack Ansible and facing some issues
> >> *System Configuration*
> >> *Controller/Deployment Host*
> >> RAM - 12 GB
> >> Hard disk - 100 GB
> >> Linux - Ubuntu 16.04
> >> Kernel Version - 4.4.0-135-generic
> >> *Compute*
> >> RAM - 4 GB
> >> Hard disk - 100 GB
> >> Linux - Ubuntu 16.04
> >> Kernel Version - 4.4.0-135-generic
> >> *Issue Observed:*
> >> When we run the below playbook
> >> openstack-ansible setup-openstack.yml
> >> *Error Observed:*
> >> After running for some duration, it throws the error of "Out of Memory Killing mysqld"
> >> In the "top" command, we see only haproxy processes and the system gets so slow that we are not even able to login into the system.
> >> Can you please help me resolve this issue?
> >> Regards
> >> Anirudh Gupta
> >> _______________________________________________
> >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >> Post to     : openstack at lists.openstack.org
> >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >
> >
>


