[Openstack] ImportError

somshekar kadam som_kadam at yahoo.co.in
Thu Feb 26 11:56:06 UTC 2015


Hello All, 

When I run ./stack.sh on the compute node
I am facing the error below; any suggestions on how to overcome it would be appreciated.
Excerpts from both stack.sh.log and c-vol.log are given below.

stack.sh.log
--
2015-02-26 06:59:32.382 | + screen -S stack -p c-vol -X stuff '/usr/local/bin/cinder-volume --config-file /etc/cinder/cinder.conf & echo $! >/opt/stack/status/stack/c-vol.pid; fg || echo "c-vol failed to start" | tee "/opt/stack/status/stack/c-vol.failure"'
2015-02-26 06:59:32.388 | + is_service_enabled c-api
2015-02-26 06:59:32.394 | + return 0
2015-02-26 06:59:32.394 | + is_service_enabled tls-proxy
2015-02-26 06:59:32.397 | + return 1
2015-02-26 06:59:32.398 | + create_volume_types
2015-02-26 06:59:32.398 | + is_service_enabled c-api
2015-02-26 06:59:32.400 | + return 0
2015-02-26 06:59:32.401 | + [[ -n lvm:lvmdriver-1 ]]
2015-02-26 06:59:32.401 | + local be be_name be_type
2015-02-26 06:59:32.401 | + for be in '${CINDER_ENABLED_BACKENDS//,/ }'
2015-02-26 06:59:32.401 | + be_type=lvm
2015-02-26 06:59:32.401 | + be_name=lvmdriver-1
2015-02-26 06:59:32.401 | + cinder type-create lvmdriver-1
2015-02-26 06:59:33.162 | ERROR: Conflict (HTTP 409) (Request-ID: req-315b5f68-5dcc-404c-9e8f-91a279573b27)
2015-02-26 06:59:33.192 | ++ err_trap
2015-02-26 06:59:33.192 | ++ local r=1
2015-02-26 06:59:33.193 | stack.sh failed: full log in /opt/stack1/logs/stack.sh.log.2015-02-26-122225
2015-02-26 06:59:33.194 | Error on exit
-------------------------------


c-vol.log

---
fg || echo "c-vol failed to start" | tee "/opt/stack/status/stack/c-vol.failure"
[1] 20073
/usr/local/bin/cinder-volume --config-file /etc/cinder/cinder.conf
Traceback (most recent call last):
  File "/usr/local/bin/cinder-volume", line 9, in <module>
    load_entry_point('cinder==2015.1.dev143', 'console_scripts', 'cinder-volume')()
  File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 521, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2632, in load_entry_point
    return ep.load()
  File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2312, in load
    return self.resolve()
  File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2318, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/opt/stack/cinder/cinder/cmd/volume.py", line 36, in <module>
    from oslo_config import cfg
  File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 334, in <module>
    from oslo.config import types
ImportError: cannot import name types
c-vol failed to start
----------------



Regards,
Somshekar C Kadam
9036660538

     On Wednesday, 25 February 2015 11:10 PM, Raghavendra Lad <lad.raghavendra at gmail.com> wrote:
   

Hi,

Please let me know if we can go ahead and create cells.
I have the API (parent) cell plus childcell1 and childcell2, and I followed the OpenStack documentation.

I have the rabbit settings in place, and the nova-manage commands work fine on the parent. On the child cells I get an
operational error when inserting cells. Any thoughts or help would be appreciated.
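
For reference, the cell-registration commands I am running are roughly of the following form, taken from the cells
documentation (the rabbit hostnames and credentials here are placeholders, not my real values):

# On the API (parent) cell, register each child cell and its RabbitMQ broker:
nova-manage cell create --name childcell1 --cell_type child \
    --username guest --password guest --hostname rabbit-childcell1 \
    --port 5672 --virtual_host / --woffset 1.0 --wscale 1.0

# On each child cell, register the parent (API) cell's broker:
nova-manage cell create --name api --cell_type parent \
    --username guest --password guest --hostname rabbit-api \
    --port 5672 --virtual_host / --woffset 1.0 --wscale 1.0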

The regions write up is very helpful.

Regards,
Raghavendra Lad
 

On Sun, Nov 30, 2014 at 3:57 AM, Joe Topjian <joe at topjian.net> wrote:

Hello,
Regions can be a little confusing because of their ambiguity.
Regions are really nothing more than a tag you give an endpoint in the Identity catalog. How you use that tag determines how regions behave in your environment. Here are a few scenarios:
(IMO, using the Keystone templated catalog makes things easier to understand)
Let's say you have one Identity service hosted on one server. In the catalog, you create two Identity entries that are identical (same endpoints), except one is for RegionOne and the other is for RegionTwo. You've now just created two Regions in OpenStack and these regions will share the same user database.
You can then proceed to add your other services to the catalog and tag them with whichever region they should be available in. Any service that is added twice with the same endpoints but a different region is effectively one service shared between both regions. Take the Image (glance) service, for example: if you specify it twice, just as you did with the Identity service, then your image catalog will be shared between the regions.
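
To make that concrete, here is a rough sketch of what I mean using the templated catalog (the file path and the
hostnames below are made-up examples, and I'm only showing the identity and image entries). Each service appears
once per region with identical URLs, which is what makes it shared:

# Sketch only: identity and image entries for two regions that share the same endpoints.
cat >> /etc/keystone/default_catalog.templates <<'EOF'
catalog.RegionOne.identity.publicURL = http://keystone.example.com:5000/v2.0
catalog.RegionOne.identity.adminURL = http://keystone.example.com:35357/v2.0
catalog.RegionOne.identity.internalURL = http://keystone.example.com:5000/v2.0
catalog.RegionOne.identity.name = Identity Service

catalog.RegionTwo.identity.publicURL = http://keystone.example.com:5000/v2.0
catalog.RegionTwo.identity.adminURL = http://keystone.example.com:35357/v2.0
catalog.RegionTwo.identity.internalURL = http://keystone.example.com:5000/v2.0
catalog.RegionTwo.identity.name = Identity Service

catalog.RegionOne.image.publicURL = http://glance.example.com:9292
catalog.RegionTwo.image.publicURL = http://glance.example.com:9292
EOF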
You could very well add all services with the same endpoints twice, but in reality you now have just one region. So to effectively use regions, you need to balance what's shared and what's not depending on your use-case.
As an end-user, to specify the region you want to use, either set the OS_REGION_NAME environment variable for the command line tools, or choose the region in Horizon (more on this later).
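
For example (assuming the usual OS_USERNAME / OS_PASSWORD / OS_TENANT_NAME variables are already exported):

export OS_REGION_NAME=RegionTwo
nova list    # the client now picks the compute endpoint tagged RegionTwo out of the catalog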
Now here's another example:
Let's say you have two Identity services hosted on two different servers. One for each region. These two servers share the same Keystone MySQL database. In the catalog of each server, you specify only one set of services: the services for that region. Because the database is shared, you're still effectively sharing the same users in each region, but because the endpoints are different for the Identity service, end-users will need to specify both OS_REGION_NAME and OS_AUTH_URL on the command line.
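
In that layout, switching regions from the command line looks something like this (hostnames made up):

# Region one
export OS_AUTH_URL=http://keystone-r1.example.com:5000/v2.0
export OS_REGION_NAME=RegionOne

# Region two
export OS_AUTH_URL=http://keystone-r2.example.com:5000/v2.0
export OS_REGION_NAME=RegionTwo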
There's a third example that is a hybrid of the two above examples: multiple Identity servers that each host all catalog entries for all regions. I personally have not tested this scenario yet, so I can't comment on it.
The reason you would consider multiple Identity servers is high availability. If you are using regions as a way to divide your cloud into distinct physical areas (different data centres, perhaps) and the data centre hosting the central Identity server goes offline, then users cannot log into any region.
On the topic of sharing users, tenants, and what's easiest:
You could create a separate "services" tenant for each region (services_regionone, services_regiontwo), but I believe Jay was saying that it's easiest to just use the same "services" tenant across all regions. I concur. In nova.conf, glance-*.conf, cinder.conf, etc, you do not need to add any region info to the keystone auth settings. I believe a region_name setting exists, but I've never had to use it.
Now for Horizon:
Historically, Horizon did not work well when the catalog contained entries for multiple regions (another reason to use the separate server scenario described above). Fortunately that's a thing of the past.
Presently, if your catalog contains entries for all of your regions, you do not need to do any special configuration of Horizon. A user logs in, and once logged in, they can choose what region to work in. Even better, they will be transferred to that region without having to re-authenticate. This is a wonderful user experience.
If your regions are contained in multiple catalogs (perhaps because you have been using regions before Horizon fixed this :), you must specify each region in the AVAILABLE_REGIONS setting of local_settings.py. When a user first visits Horizon, they will enter their username, password, and choose a region. When they log in and want to switch regions, they choose the region from a drop-down, but will then be asked to re-authenticate.
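
For reference, that setting looks roughly like this (the local_settings.py path varies by distro, and the auth URLs
here are made up):

cat >> /etc/openstack-dashboard/local_settings.py <<'EOF'
AVAILABLE_REGIONS = [
    ('http://keystone-r1.example.com:5000/v2.0', 'RegionOne'),
    ('http://keystone-r2.example.com:5000/v2.0', 'RegionTwo'),
]
EOF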
This is the extent of my knowledge of Regions. I hope it helped clarify some areas. And if anyone has anything additional to add or correct, I'd love to hear it, as I use regions extensively.
Thanks,
Joe


On Fri, Nov 28, 2014 at 10:15 PM, Chris <contact at progbau.de> wrote:

Hi,

What exactly is easier to use? When I use the same tenants but specify the different regions during endpoint creation, is that enough?
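
For example, would it be enough to register the second region's endpoints along these lines (compute shown here,
URLs are placeholders), while keeping the existing tenants and users?

keystone endpoint-create --region RegionTwo \
  --service-id $(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl   'http://nova-r2.example.com:8774/v2/%(tenant_id)s' \
  --internalurl 'http://nova-r2.example.com:8774/v2/%(tenant_id)s' \
  --adminurl    'http://nova-r2.example.com:8774/v2/%(tenant_id)s'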

A second question: is the separation into different regions just a matter of the endpoints, or do nova.conf and the other configuration files need to be changed as well?

Unfortunately I couldn't find any manual on how to properly configure a second region, just a bug report about the lack of documentation :) https://bugs.launchpad.net/openstack-manuals/+bug/1340509

Cheers
Chris

On 2014-11-28 20:54, Jay Pipes wrote:

On 11/28/2014 06:40 AM, Chris wrote:

Hello Robert,

Thanks for your answer! Do we need to create new admin/service tenants
for the new services in the new region, or should we use the old ones?


It's much easier to use the same ones, in my experience.

Best,
-jay









