[Openstack-operators] Openstack-Keystone error

Anwar Durrani durrani.anwar at gmail.com
Thu Jan 15 10:17:26 UTC 2015


I did the following steps earlier:

 Chapter 2. Basic environment

*Contents*

Before you begin
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-prerequisites>
Security
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-security>
Networking
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-networking>
Network Time Protocol (NTP)
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-ntp>
OpenStack packages
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-packages>
Database
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-database>
Messaging server
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-messaging-server>

This chapter explains how to configure each node in the example
architectures
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_overview.html#architecture_example-architectures>,
including the two-node architecture with legacy networking
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_overview.html#example-architecture-with-legacy-networking-hw>
and the three-node architecture with OpenStack Networking (neutron)
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_overview.html#example-architecture-with-neutron-networking-hw>.
Note: Although most environments include Identity, Image Service, Compute,
at least one networking service, and the dashboard, the Object Storage
service can operate independently. If your use case only involves Object
Storage, you can skip to Chapter 9, *Add Object Storage*
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_swift.html>
after configuring the appropriate nodes for it. However, the dashboard
requires at least the Image Service and Compute.
Note: You must use an account with administrative privileges to configure
each node. Either run the commands as the root user or configure the sudo
utility.
Note: The *systemctl enable* call on openSUSE outputs a warning message
when the service uses SysV Init scripts instead of native systemd files.
This warning can be ignored.
 Before you begin

For best performance, we recommend that your environment meets or exceeds
the hardware requirements in Figure 1.2, “Minimal architecture example with
OpenStack Networking (neutron)—Hardware requirements”
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_overview.html#example-architecture-with-neutron-networking-hw>
or Figure 1.5, “Minimal architecture example with legacy networking
(nova-network)—Hardware requirements”
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_overview.html#example-architecture-with-legacy-networking-hw>.
However, OpenStack does not require a significant amount of resources, and
the following minimum requirements should support a proof-of-concept
environment with core services and several CirrOS instances:

   - Controller Node: 1 processor, 2 GB memory, and 5 GB storage
   - Network Node: 1 processor, 512 MB memory, and 5 GB storage
   - Compute Node: 1 processor, 2 GB memory, and 10 GB storage

To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, we strongly
recommend that you install a 64-bit version of your distribution on at
least the compute node. If you install a 32-bit version of your
distribution on the compute node, attempting to start an instance using a
64-bit image will fail.
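
You can check which architecture a node is running with *uname*; a 64-bit
installation reports x86_64:

   $ uname -m
   x86_64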
Note: A single disk partition on each node works for most basic
installations. However, you should consider Logical Volume Manager (LVM)
for installations with optional services such as Block Storage.

Many users build their test environments on virtual machines (VMs). The
primary benefits of VMs include the following:

   - One physical server can support multiple nodes, each with almost any
     number of network interfaces.
   - Ability to take periodic "snapshots" throughout the installation
     process and "roll back" to a working configuration in the event of a
     problem.

However, VMs will reduce performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration of
nested VMs.
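
A quick way to check for such support on Linux is to count the CPU
virtualization flags; a result of 0 means instances will run under slow
software emulation rather than hardware-assisted KVM:

   $ egrep -c '(vmx|svm)' /proc/cpuinfo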
Note: If you choose to install on VMs, make sure your hypervisor permits
promiscuous mode and disables MAC address filtering on the external
network.

For more information about system requirements, see the OpenStack
Operations Guide <http://docs.openstack.org/ops/>.
 Security

OpenStack services support various security methods including password,
policy, and encryption. Additionally, supporting services including the
database server and message broker support at least password security.

To ease the installation process, this guide only covers password security
where applicable. You can create secure passwords manually, generate them
using a tool such as pwgen <http://sourceforge.net/projects/pwgen/>, or run
the following command:

$ openssl rand -hex 10
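
Each run prints 10 random bytes as 20 hexadecimal characters; the value
below is only an illustration of the output format, not a password to
reuse:

   3d7581c5e1d0a8074a63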

For OpenStack services, this guide uses *SERVICE_PASS* to reference service
account passwords and *SERVICE_DBPASS* to reference database passwords.

The following table provides a list of services that require passwords and
their associated references in the guide:
Table 2.1. Passwords

Password name                          Description
Database password (no variable used)   Root password for the database
RABBIT_PASS                            Password of user guest of RabbitMQ
KEYSTONE_DBPASS                        Database password of Identity service
DEMO_PASS                              Password of user demo
ADMIN_PASS                             Password of user admin
GLANCE_DBPASS                          Database password for Image Service
GLANCE_PASS                            Password of Image Service user glance
NOVA_DBPASS                            Database password for Compute service
NOVA_PASS                              Password of Compute service user nova
DASH_DBPASS                            Database password for the dashboard
CINDER_DBPASS                          Database password for the Block Storage service
CINDER_PASS                            Password of Block Storage service user cinder
NEUTRON_DBPASS                         Database password for the Networking service
NEUTRON_PASS                           Password of Networking service user neutron
HEAT_DBPASS                            Database password for the Orchestration service
HEAT_PASS                              Password of Orchestration service user heat
CEILOMETER_DBPASS                      Database password for the Telemetry service
CEILOMETER_PASS                        Password of Telemetry service user ceilometer
TROVE_DBPASS                           Database password of Database service
TROVE_PASS                             Password of Database Service user trove

OpenStack and supporting services require administrative privileges during
installation and operation. In some cases, services perform modifications
to the host that can interfere with deployment automation tools such as
Ansible, Chef, and Puppet. For example, some OpenStack services add a root
wrapper to sudo that can interfere with security policies. See the Cloud
Administrator Guide
<http://docs.openstack.org/admin-guide-cloud/content/root-wrap-reference.html>
for more information. Also, the Networking service assumes default values for
kernel network parameters and modifies firewall rules. To avoid most issues
during your initial installation, we recommend using a stock deployment of
a supported distribution on your hosts. However, if you choose to automate
deployment of your hosts, review the configuration and policies applied to
them before proceeding further.
 Networking

OpenStack Networking (neutron)
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-networking-neutron>
Legacy networking (nova-network)
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-networking-nova>

After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your distribution,
see the documentation
<https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s1-networkscripts-interfaces.html>.

RHEL and CentOS enable a restrictive firewall by default. During the
installation process, certain steps will fail unless you alter or disable
the firewall. For more information about securing your environment, refer
to the OpenStack Security Guide <http://docs.openstack.org/sec/>.
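
For a throwaway proof-of-concept environment, one blunt option (not
advisable for anything production-facing) is to stop and disable firewalld
entirely rather than opening individual service ports:

   # systemctl stop firewalld
   # systemctl disable firewalld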

Proceed to network configuration for the example OpenStack Networking
(neutron)
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-networking-neutron>
or legacy networking (nova-network)
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-networking-nova>
architecture.
Note: All nodes require Internet access to install OpenStack packages and
perform maintenance tasks such as periodic updates. In most cases, nodes
should obtain Internet access through the management network interface. For
simplicity, the network diagrams in this guide only show Internet access
for OpenStack network services.
 OpenStack Networking (neutron)

Controller node
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-neutron-networking-controller-node>
Network node
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-neutron-networking-network-node>
Compute node
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-neutron-networking-compute-node>
Verify connectivity
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-neutron-networking-verify>

The example architecture with OpenStack Networking (neutron) requires one
controller node, one network node, and at least one compute node. The
controller node contains one network interface on the management network.
The network node contains one network interface on the management network,
one on the instance tunnels network, and one on the external network. The
compute node contains one network interface on the management network and
one on the instance tunnels network.
Note: Network interface names vary by distribution. Traditionally,
interfaces use "eth" followed by a sequential number. To cover all
variations, this guide simply refers to the first interface as the
interface with the lowest number, the second interface as the interface
with the middle number, and the third interface as the interface with the
highest number.


*Figure 2.1. Minimal architecture example with OpenStack Networking
(neutron)—Network layout*

Unless you intend to use the exact configuration provided in this example
architecture, you must modify the networks in this procedure to match your
environment. Also, each node must resolve the other nodes by name in
addition to IP address. For example, the *controller* name must resolve to
10.0.0.11, the IP address of the management interface on the controller
node.
Warning: Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
 Controller node


*To configure networking:*

   1. Configure the first interface as the management interface (an
      illustrative ifcfg sketch follows this list):

      IP address: 10.0.0.11
      Network mask: 255.255.255.0 (or /24)
      Default gateway: 10.0.0.1

   2. Reboot the system to activate the changes.
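
As a concrete illustration, on RHEL or CentOS the resulting interface file
might look like the sketch below; the interface name eth0 is an assumption
here, and real files typically also carry host-specific HWADDR and UUID
keys that you should leave in place:

   # /etc/sysconfig/network-scripts/ifcfg-eth0 (interface name assumed)
   DEVICE=eth0
   TYPE=Ethernet
   ONBOOT=yes
   BOOTPROTO=none
   IPADDR=10.0.0.11
   NETMASK=255.255.255.0
   GATEWAY=10.0.0.1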



*To configure name resolution:*

   1. Set the hostname of the node to *controller*.
   2. Edit the /etc/hosts file to contain the following (a verification
      sketch follows this list):

      # controller
      10.0.0.11       controller

      # network
      10.0.0.21       network

      # compute1
      10.0.0.31       compute1
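
On a systemd distribution such as CentOS 7, one way to set the hostname is
with hostnamectl, after which getent should resolve the entries above:

   # hostnamectl set-hostname controller
   $ getent hosts controller
   10.0.0.11       controller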


 Network node


*To configure networking:*

   1. Configure the first interface as the management interface:

      IP address: 10.0.0.21
      Network mask: 255.255.255.0 (or /24)
      Default gateway: 10.0.0.1

   2. Configure the second interface as the instance tunnels interface:

      IP address: 10.0.1.21
      Network mask: 255.255.255.0 (or /24)

   3. The external interface uses a special configuration without an IP
      address assigned to it. Configure the third interface as the external
      interface:

      Replace *INTERFACE_NAME* with the actual interface name. For example,
      *eth2* or *ens256*.

      Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to
      contain the following:

      Do not change the HWADDR and UUID keys.

      DEVICE=INTERFACE_NAME
      TYPE=Ethernet
      ONBOOT="yes"
      BOOTPROTO="none"

   4. Reboot the system to activate the changes.



*To configure name resolution:*

   1. Set the hostname of the node to *network*.
   2. Edit the /etc/hosts file to contain the following:

      # network
      10.0.0.21       network

      # controller
      10.0.0.11       controller

      # compute1
      10.0.0.31       compute1

 Compute node


*To configure networking:*

   1. Configure the first interface as the management interface:

      IP address: 10.0.0.31
      Network mask: 255.255.255.0 (or /24)
      Default gateway: 10.0.0.1

      Note: Additional compute nodes should use 10.0.0.32, 10.0.0.33, and
      so on.

   2. Configure the second interface as the instance tunnels interface:

      IP address: 10.0.1.31
      Network mask: 255.255.255.0 (or /24)

      Note: Additional compute nodes should use 10.0.1.32, 10.0.1.33, and
      so on.

   3. Reboot the system to activate the changes.


*To configure name resolution:*

   1. Set the hostname of the node to *compute1*.
   2. Edit the /etc/hosts file to contain the following:

      # compute1
      10.0.0.31       compute1

      # controller
      10.0.0.11       controller

      # network
      10.0.0.21       network

 Verify connectivity

We recommend that you verify network connectivity to the Internet and among
the nodes before proceeding further.

   1. From the *controller* node, *ping* a site on the Internet:

   # ping -c 4 openstack.org
   PING openstack.org (174.143.194.225) 56(84) bytes of data.
   64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
   64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

   --- openstack.org ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3022ms
   rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   2. From the *controller* node, *ping* the management interface on the
      *network* node:

   # ping -c 4 network
   PING network (10.0.0.21) 56(84) bytes of data.
   64 bytes from network (10.0.0.21): icmp_seq=1 ttl=64 time=0.263 ms
   64 bytes from network (10.0.0.21): icmp_seq=2 ttl=64 time=0.202 ms
   64 bytes from network (10.0.0.21): icmp_seq=3 ttl=64 time=0.203 ms
   64 bytes from network (10.0.0.21): icmp_seq=4 ttl=64 time=0.202 ms

   --- network ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3000ms
   rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   3. From the *controller* node, *ping* the management interface on the
      *compute* node:

   # ping -c 4 compute1
   PING compute1 (10.0.0.31) 56(84) bytes of data.
   64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
   64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
   64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
   64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms

   --- compute1 ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3000ms
   rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   4. From the *network* node, *ping* a site on the Internet:

   # ping -c 4 openstack.org
   PING openstack.org (174.143.194.225) 56(84) bytes of data.
   64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
   64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

   --- openstack.org ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3022ms
   rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   5. From the *network* node, *ping* the management interface on the
      *controller* node:

   # ping -c 4 controller
   PING controller (10.0.0.11) 56(84) bytes of data.
   64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
   64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
   64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
   64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

   --- controller ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3000ms
   rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   6. From the *network* node, *ping* the instance tunnels interface on the
      *compute* node:

   # ping -c 4 10.0.1.31
   PING 10.0.1.31 (10.0.1.31) 56(84) bytes of data.
   64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=1 ttl=64 time=0.263 ms
   64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=2 ttl=64 time=0.202 ms
   64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=3 ttl=64 time=0.203 ms
   64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=4 ttl=64 time=0.202 ms

   --- 10.0.1.31 ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3000ms
   rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   7. From the *compute* node, *ping* a site on the Internet:

   # ping -c 4 openstack.org
   PING openstack.org (174.143.194.225) 56(84) bytes of data.
   64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
   64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

   --- openstack.org ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3022ms
   rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   8. From the *compute* node, *ping* the management interface on the
      *controller* node:

   # ping -c 4 controller
   PING controller (10.0.0.11) 56(84) bytes of data.
   64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
   64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
   64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
   64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

   --- controller ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3000ms
   rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   9. From the *compute* node, *ping* the instance tunnels interface on the
      *network* node:

   # ping -c 4 10.0.1.21
   PING 10.0.1.21 (10.0.1.21) 56(84) bytes of data.
   64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=1 ttl=64 time=0.263 ms
   64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=2 ttl=64 time=0.202 ms
   64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=3 ttl=64 time=0.203 ms
   64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=4 ttl=64 time=0.202 ms

   --- 10.0.1.21 ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3000ms
   rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms


 Legacy networking (nova-network)

Controller node
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-networking-nova-controller-node>
Compute node
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-networking-node-compute-node>
Verify connectivity
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-networking-nova-verify>

The example architecture with legacy networking (nova-network) requires a
controller node and at least one compute node. The controller node contains
one network interface on the management network. The compute node contains
one network interface on the management network and one on the external
network.
Note: Network interface names vary by distribution. Traditionally,
interfaces use "eth" followed by a sequential number. To cover all
variations, this guide simply refers to the first interface as the
interface with the lowest number and the second interface as the interface
with the highest number.


*Figure 2.2. Minimal architecture example with legacy networking
(nova-network)—Network layout*

Unless you intend to use the exact configuration provided in this example
architecture, you must modify the networks in this procedure to match your
environment. Also, each node must resolve the other nodes by name in
addition to IP address. For example, the *controller* name must resolve to
10.0.0.11, the IP address of the management interface on the controller
node.
Warning: Reconfiguring network interfaces will interrupt network
connectivity. We recommend using a local terminal session for these
procedures.
 Controller node


*To configure networking:*

   1. Configure the first interface as the management interface:

      IP address: 10.0.0.11
      Network mask: 255.255.255.0 (or /24)
      Default gateway: 10.0.0.1

   2. Reboot the system to activate the changes.



*To configure name resolution:*

   1. Set the hostname of the node to *controller*.
   2. Edit the /etc/hosts file to contain the following:

      # controller
      10.0.0.11       controller

      # compute1
      10.0.0.31       compute1


 Compute node


*To configure networking:*

   1. Configure the first interface as the management interface:

      IP address: 10.0.0.31
      Network mask: 255.255.255.0 (or /24)
      Default gateway: 10.0.0.1

      Note: Additional compute nodes should use 10.0.0.32, 10.0.0.33, and
      so on.

   2. The external interface uses a special configuration without an IP
      address assigned to it. Configure the second interface as the
      external interface:

      Replace *INTERFACE_NAME* with the actual interface name. For example,
      *eth1* or *ens224*.

      Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to
      contain the following:

      Do not change the HWADDR and UUID keys.

      DEVICE=INTERFACE_NAME
      TYPE=Ethernet
      ONBOOT="yes"
      BOOTPROTO="none"

   3. Reboot the system to activate the changes.



*To configure name resolution:*

   1. Set the hostname of the node to *compute1*.
   2. Edit the /etc/hosts file to contain the following:

      # compute1
      10.0.0.31       compute1

      # controller
      10.0.0.11       controller


 Verify connectivity

We recommend that you verify network connectivity to the Internet and among
the nodes before proceeding further.

   1. From the *controller* node, *ping* a site on the Internet:

   # ping -c 4 openstack.org
   PING openstack.org (174.143.194.225) 56(84) bytes of data.
   64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
   64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

   --- openstack.org ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3022ms
   rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   2. From the *controller* node, *ping* the management interface on the
      *compute* node:

   # ping -c 4 compute1
   PING compute1 (10.0.0.31) 56(84) bytes of data.
   64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
   64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
   64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
   64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms

   --- compute1 ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3000ms
   rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   3. From the *compute* node, *ping* a site on the Internet:

   # ping -c 4 openstack.org
   PING openstack.org (174.143.194.225) 56(84) bytes of data.
   64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
   64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
   64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

   --- openstack.org ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3022ms
   rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   4. From the *compute* node, *ping* the management interface on the
      *controller* node:

   # ping -c 4 controller
   PING controller (10.0.0.11) 56(84) bytes of data.
   64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
   64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
   64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
   64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

   --- controller ping statistics ---
   4 packets transmitted, 4 received, 0% packet loss, time 3000ms
   rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms


 Network Time Protocol (NTP)

Controller node
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-ntp-controller-node>
Other nodes
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-ntp-other-nodes>
Verify operation
<http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html#basics-ntp-verify>

You must install NTP to properly synchronize services among nodes. We
recommend that you configure the controller node to reference more accurate
(lower stratum) servers and other nodes to reference the controller node.
 Controller node


*To install the NTP service*

   # yum install ntp




*To configure the NTP service*

By default, the controller node synchronizes the time via a pool of public
servers. However, you can optionally edit the /etc/ntp.conf file to
configure alternative servers such as those provided by your organization.

   1. Edit the /etc/ntp.conf file and add, change, or remove the following
      keys as necessary for your environment:

      server NTP_SERVER iburst
      restrict -4 default kod notrap nomodify
      restrict -6 default kod notrap nomodify

      Replace *NTP_SERVER* with the hostname or IP address of a suitable
      more accurate (lower stratum) NTP server. The configuration supports
      multiple server keys (an illustrative sketch follows this list).

      Note: For the restrict keys, you essentially remove the nopeer and
      noquery options.

   2. Start the NTP service and configure it to start when the system boots:

   # systemctl enable ntpd.service
   # systemctl start ntpd.service
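
As an illustrative sketch of the result (the CentOS public pool hostnames
below are just one reasonable choice of NTP_SERVER, not a requirement of
the guide), the edited /etc/ntp.conf might contain:

   server 0.centos.pool.ntp.org iburst
   server 1.centos.pool.ntp.org iburst
   server 2.centos.pool.ntp.org iburst
   restrict -4 default kod notrap nomodify
   restrict -6 default kod notrap nomodify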


 Other nodes


*To install the NTP service*

   # yum install ntp




*To configure the NTP service*

Configure the network and compute nodes to reference the controller node.

   1. Edit the /etc/ntp.conf file:

      Comment out or remove all but one server key and change it to
      reference the controller node.

      server controller iburst

   2. Start the NTP service and configure it to start when the system boots:

   # systemctl enable ntpd.service
   # systemctl start ntpd.service


 Verify operation

We recommend that you verify NTP synchronization before proceeding further.
Some nodes, particularly those that reference the controller node, can take
several minutes to synchronize.

   1. Run this command on the *controller* node:

   # ntpq -c peers
        remote           refid      st t when poll reach   delay   offset  jitter
   ==============================================================================
   *ntp-server1     192.0.2.11       2 u  169 1024  377    1.901   -0.611   5.483
   +ntp-server2     192.0.2.12       2 u  887 1024  377    0.922   -0.246   2.864

   Contents in the *remote* column should indicate the hostname or IP
   address of one or more NTP servers.
   Note: Contents in the *refid* column typically reference IP addresses of
   upstream servers.
   2. Run this command on the *controller* node:

   # ntpq -c assoc
   ind assid status  conf reach auth condition  last_event cnt
   ===========================================================
     1 20487  961a   yes   yes  none  sys.peer    sys_peer  1
     2 20488  941a   yes   yes  none candidate    sys_peer  1

   Contents in the *condition* column should indicate sys.peer for at least
   one server.
   3. Run this command on *all other* nodes:

   # ntpq -c peers
        remote           refid      st t when poll reach   delay   offset  jitter
   ==============================================================================
   *controller      192.0.2.21       3 u   47   64   37    0.308   -0.251   0.079

   Contents in the *remote* column should indicate the hostname of the
   controller node.
   Note: Contents in the *refid* column typically reference IP addresses of
   upstream servers.
   4. Run this command on *all other* nodes:

   # ntpq -c assoc
   ind assid status  conf reach auth condition  last_event cnt
   ===========================================================
     1 21181  963a   yes   yes  none  sys.peer    sys_peer  3

   Contents in the *condition* column should indicate sys.peer.

 OpenStack packages

Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform these
procedures on all nodes.
Note: Disable or remove any automatic update services because they can
impact your OpenStack environment.


*To configure prerequisites*

   1. Install the yum-plugin-priorities package to enable assignment of
      relative priorities within repositories:

   # yum install yum-plugin-priorities

   2. Install the epel-release package to enable the EPEL
      <http://download.fedoraproject.org/pub/epel/7/x86_64/repoview/epel-release.html>
      repository:

   # yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

      Note: Fedora does not require this package.



*To enable the OpenStack repository*

   Install the rdo-release-juno package to enable the RDO repository:

   # yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
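
Before continuing, it can be worth a quick sanity check that both new
repositories are now active; the exact repository ids vary by release, but
something along these lines should list them:

   # yum repolist enabled | grep -i -e epel -e openstack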




*To finalize installation*

   1. Upgrade the packages on your system:

   # yum upgrade

      Note: If the upgrade process includes a new kernel, reboot your
      system to activate it.

   2. RHEL and CentOS enable SELinux by default. Install the
      openstack-selinux package to automatically manage security policies
      for OpenStack services:

   # yum install openstack-selinux

      Note: Fedora does not require this package.

      Note: The installation process for this package can take a while.

 Database

Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
guide use MariaDB or MySQL depending on the distribution. OpenStack
services also support other SQL databases including PostgreSQL
<http://www.postgresql.org/>.


*To install and configure the database server*

   1. Install the packages:

      Note: The Python MySQL library is compatible with MariaDB.

   # yum install mariadb mariadb-server MySQL-python

   2. Edit the /etc/my.cnf file and complete the following actions:

      1. In the [mysqld] section, set the bind-address key to the
         management IP address of the controller node to enable access by
         other nodes via the management network:

         [mysqld]
         ...
         bind-address = 10.0.0.11

      2. In the [mysqld] section, set the following keys to enable useful
         options and the UTF-8 character set:

         [mysqld]
         ...
         default-storage-engine = innodb
         innodb_file_per_table
         collation-server = utf8_general_ci
         init-connect = 'SET NAMES utf8'
         character-set-server = utf8



*To finalize installation*

   1. Start the database service and configure it to start when the system
      boots:

   # systemctl enable mariadb.service
   # systemctl start mariadb.service

   2. Secure the database service, including choosing a suitable password
      for the root account:

   # mysql_secure_installation
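
Later chapters create one database per service on this server. As a hedged
illustration of what that looks like (the keystone user and the
KEYSTONE_DBPASS placeholder come from Table 2.1; substitute your own
password), the Identity service database is created along these lines:

   $ mysql -u root -p
   mysql> CREATE DATABASE keystone;
   mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
   mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';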


 Messaging server

OpenStack uses a message broker to coordinate operations and status
information among services. The message broker service typically runs on
the controller node. OpenStack supports several message brokers including
RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package
OpenStack support a particular message broker. This guide covers the
RabbitMQ message broker, which is supported by each distribution. If you
prefer to implement a different message broker, consult the documentation
associated with it.

   - RabbitMQ <http://www.rabbitmq.com/>
   - Qpid <http://qpid.apache.org/>
   - ZeroMQ <http://zeromq.org/>



*To install the RabbitMQ message broker service*

   # yum install rabbitmq-server




*To configure the message broker service*

   1. Start the message broker service and configure it to start when the
      system boots:

   # systemctl enable rabbitmq-server.service
   # systemctl start rabbitmq-server.service

   2. The message broker creates a default account that uses guest for the
      username and password. To simplify installation of your test
      environment, we recommend that you use this account, but change the
      password for it.

      Run the following command:

      Replace *RABBIT_PASS* with a suitable password.

   # rabbitmqctl change_password guest RABBIT_PASS
   Changing password for user "guest" ...
   ...done.
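
If the change_password command hangs or errors out, the broker itself may
not be running; a minimal check with the standard rabbitmqctl tooling is:

   # rabbitmqctl status

which prints node status when the broker is up, and "node down" diagnostics
similar to the log below when it is not.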


------------------------------x------------------x----------------x------------------------x
I have followed the instructions above. When I try to start the
rabbitmq-server.service service:

systemctl start rabbitmq-server.service

and then check its status via *systemctl status rabbitmq-server.service*,
I get the error below:

rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled)
   Active: failed (Result: exit-code) since Thu 2015-01-15 02:10:13 PST;
54s ago
  Process: 54662 ExecStop=/usr/lib/rabbitmq/bin/rabbitmqctl stop
(code=exited, status=2)
  Process: 54632 ExecStart=/usr/lib/rabbitmq/bin/rabbitmq-server
(code=exited, status=1/FAILURE)
 Main PID: 54632 (code=exited, status=1/FAILURE)

Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: attempted to
contact: [rabbit at controller]
Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: rabbit at controller
:
Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: * connected to
epmd (port 4369) on controller
Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: * epmd reports:
node 'rabbit' not running at all
Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: no other nodes
on controller
Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: * suggestion:
start the node
Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: current node
details:
Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: - node name:
rabbitmqctl54662 at controller
Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: - home dir:
/var/lib/rabbitmq
Jan 15 02:10:13 controller.example.com rabbitmqctl[54662]: - cookie hash:
+v6hxGqtuK6mBTgxoOt3hA==
Jan 15 02:10:13 controller.example.com systemd[1]: rabbitmq-server.service:
control process exited, code=exited status=2
Jan 15 02:10:13 controller.example.com systemd[1]: Failed to start RabbitMQ
broker.
Jan 15 02:10:13 controller.example.com systemd[1]: Unit
rabbitmq-server.service entered failed state.
[root at localhost ~]#

What do you mean by migration? Does it mean repeating the steps again?



On Thu, Jan 15, 2015 at 3:28 PM, Alex Leonhardt <aleonhardt.py at gmail.com>
wrote:

> this is probably the issue then:
>
> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi ProgrammingError:
> (ProgrammingError) (1146, "Table 'keystone.token' doesn't exist") 'SELECT
> token.idAS token_id, token.expires AS token_expires, token.extra AS
> token_extra, token.valid AS token_valid, token.user_id AS token_user_id,
> token.trust_id AS token_trust_id \nFROM token \nWHERE token.id = %s'
> ('2c0dc0032d675623f37a',)
>
>
> you may need to run those db migrate scripts for keystone first ....
>
> alex
>
>
> On Thu Jan 15 2015 at 09:09:57 Anwar Durrani <durrani.anwar at gmail.com>
> wrote:
>
>> Hi Alex, below is error in log file
>>
>> 2015-01-15 01:08:34.128 50243 ERROR keystone.common.wsgi [-]
>> (ProgrammingError) (1146, "Table 'keystone.token' doesn't exist") 'SELECT
>> token.id AS token_id, token.expires AS token_expires, token.extra AS
>> token_extra, token.valid AS token_valid, token.user_id AS token_user_id,
>> token.trust_id AS token_trust_id \nFROM token \nWHERE token.id = %s'
>> ('2c0dc0032d675623f37a',)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi Traceback (most
>> recent call last):
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 430, in
>> __call__
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     response =
>> self.process_request(request)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/keystone/middleware/core.py", line 279,
>> in process_request
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     auth_context
>> = self._build_auth_context(request)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/keystone/middleware/core.py", line 259,
>> in _build_auth_context
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> token_data=self.token_provider_api.validate_token(token_id))
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/keystone/token/provider.py", line 225, in
>> validate_token
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     token =
>> self._validate_token(unique_id)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1013, in
>> decorate
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> should_cache_fn)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 640, in
>> get_or_create
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> async_creator) as value:
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 158, in
>> __enter__
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> self._enter()
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 98, in
>> _enter
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     generated =
>> self._enter_create(createdtime)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 149, in
>> _enter_create
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     created =
>> self.creator()
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 612, in
>> gen_value
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> created_value = creator()
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1009, in
>> creator
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> fn(*arg, **kw)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/keystone/token/provider.py", line 318, in
>> _validate_token
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     token_ref =
>> self._persistence.get_token(token_id)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/keystone/token/persistence/core.py", line
>> 76, in get_token
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     token_ref =
>> self._get_token(unique_id)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1013, in
>> decorate
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> should_cache_fn)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 640, in
>> get_or_create
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> async_creator) as value:
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 158, in
>> __enter__
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> self._enter()
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 98, in
>> _enter
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     generated =
>> self._enter_create(createdtime)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 149, in
>> _enter_create
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     created =
>> self.creator()
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 612, in
>> gen_value
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> created_value = creator()
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1009, in
>> creator
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> fn(*arg, **kw)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/keystone/token/persistence/core.py", line
>> 88, in _get_token
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> self.driver.get_token(token_id)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib/python2.7/site-packages/keystone/token/persistence/backends/sql.py",
>> line 92, in get_token
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     token_ref =
>> session.query(TokenModel).get(token_id)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 820, in
>> get
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> loading.load_on_ident(self, key)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 231,
>> in load_on_ident
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> q.one()
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2369, in
>> one
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     ret =
>> list(self)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2412, in
>> __iter__
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> self._execute_and_instances(context)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2427, in
>> _execute_and_instances
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     result =
>> conn.execute(querycontext.statement, self._params)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729,
>> in execute
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> meth(self, multiparams, params)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 321,
>> in _execute_on_connection
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     return
>> connection._execute_clauseelement(self, multiparams, params)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826,
>> in _execute_clauseelement
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> compiled_sql, distilled_params
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958,
>> in _execute_context
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     context)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1156,
>> in _handle_dbapi_exception
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> util.raise_from_cause(newraise, exc_info)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199,
>> in raise_from_cause
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> reraise(type(exception), exception, tb=exc_tb)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951,
>> in _execute_context
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     context)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line
>> 436, in do_execute
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> cursor.execute(statement, parameters)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in
>> execute
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> self.errorhandler(self, exc, value)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi   File
>> "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in
>> defaulterrorhandler
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi     raise
>> errorclass, errorvalue
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> ProgrammingError: (ProgrammingError) (1146, "Table 'keystone.token' doesn't
>> exist") 'SELECT token.id AS token_id, token.expires AS token_expires,
>> token.extra AS token_extra, token.valid AS token_valid, token.user_id AS
>> token_user_id, token.trust_id AS token_trust_id \nFROM token \nWHERE
>> token.id = %s' ('2c0dc0032d675623f37a',)
>> 2015-01-15 01:08:34.128 50243 TRACE keystone.common.wsgi
>> 2015-01-15 01:08:34.131 50243 INFO eventlet.wsgi.server [-] 192.168.0.200
>> - - [15/Jan/2015 01:08:34] "POST /v2.0/tenants HTTP/1.1" 500 291 0.020139
>>
>>
>> On Thu, Jan 15, 2015 at 2:04 PM, Alex Leonhardt <aleonhardt.py at gmail.com>
>> wrote:
>>
>>> I don't think anyone should try to install OS manually :) .. But check
>>> the keystone logs for what caused the 500? Maybe the admin tenant/project
>>> already exists?
>>>
>>> On Thu, 15 Jan 2015 08:29 Anwar Durrani <durrani.anwar at gmail.com> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> i am getting below error while running below command
>>>>
>>>> [root at localhost ~]# keystone tenant-create --name admin --description
>>>> "Admin Tenant"
>>>> An unexpected error prevented the server from fulfilling your request.
>>>> (HTTP 500)
>>>> [root at localhost ~]#
>>>>
>>>> Prior to run this command i have done following :
>>>>
>>>>
>>>> * Create tenants, users, and roles*
>>>>
>>>> After you install the Identity service, create tenants (projects),
>>>> users, and roles for your environment. You must use the temporary
>>>> administration token that you created in the section called “Install
>>>> and configure”
>>>> <http://docs.openstack.org/juno/install-guide/install/yum/content/keystone-install.html>
>>>> and manually configure the location (endpoint) of the Identity service
>>>> before you run *keystone* commands.
>>>>
>>>> You can pass the value of the administration token to the *keystone*
>>>> command with the --os-token option or set the temporary
>>>> OS_SERVICE_TOKEN environment variable. Similarly, you can pass the
>>>> location of the Identity service to the *keystone* command with the
>>>> --os-endpoint option or set the temporary OS_SERVICE_ENDPOINT
>>>> environment variable. This guide uses environment variables to reduce
>>>> command length.
>>>>
>>>> For more information, see the Operations Guide - Managing Project and
>>>> Users
>>>> <http://docs.openstack.org/openstack-ops/content/projects_users.html>.
>>>>
>>>>
>>>>
>>>> *To configure prerequisites*
>>>>
>>>>    1. Configure the administration token:
>>>>    $ export OS_SERVICE_TOKEN=1dd717043ad277e29edb
>>>>    $ export OS_SERVICE_TOKEN=294a4c8a8a475f9b9836
>>>>    2. Configure the endpoint:
>>>>    $ export OS_SERVICE_ENDPOINT=http://*controller*:35357/v2.0
>>>>
>>>>
>>>> ​Please advise, how to fix this issue ?
>>>>
>>>> Thanks​
>>>>
>>>> --
>>>> Thanks & regards,
>>>> Anwar M. Durrani
>>>> +91-8605010721
>>>> <http://in.linkedin.com/pub/anwar-durrani/20/b55/60b>
>>>>
>>>>
>>>>  _______________________________________________
>>>> OpenStack-operators mailing list
>>>> OpenStack-operators at lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>>
>>>
>>
>>
>> --
>> Thanks & regards,
>> Anwar M. Durrani
>> +91-8605010721
>> <http://in.linkedin.com/pub/anwar-durrani/20/b55/60b>
>>
>>
>>


-- 
Thanks & regards,
Anwar M. Durrani
+91-8605010721
<http://in.linkedin.com/pub/anwar-durrani/20/b55/60b>


More information about the OpenStack-operators mailing list