[Openstack] Cinder error

Marcelo Dieder marcelodieder at gmail.com
Mon Sep 23 20:12:37 UTC 2013


Hi Guilherme,

RabbitMQ uses virtual hosts to separate the different applications that 
share the broker. By default, RabbitMQ creates a single virtual host 
named "/".

You can see this with the command:

root@controller:~# rabbitmqctl list_permissions
Listing permissions in vhost "/" ...
guest    .*    .*    .*
...done.
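
If you keep cinder pointed at a separate vhost (like /nova) instead of 
the default "/", that vhost has to exist and your rabbit_userid needs 
permissions on it. A rough sketch, assuming the user "rabbit" and the 
vhost "/nova" from your cinder.conf (adjust to your setup):

root@controller:~# rabbitmqctl list_vhosts                # is /nova listed?
root@controller:~# rabbitmqctl add_vhost /nova            # create it if not
root@controller:~# rabbitmqctl set_permissions -p /nova rabbit ".*" ".*" ".*"
root@controller:~# rabbitmqctl list_permissions -p /nova  # verify

If the vhost is missing, or the user has no permissions on it, the 
broker closes the connection and the client side usually just logs 
"Socket closed".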

Regards,

Marcelo Dieder

On 09/23/2013 04:57 PM, Guilherme Russi wrote:
> I guess I've got something:
>
> 2013-09-23 16:52:17     INFO [cinder.openstack.common.rpc.common] 
> Connected to AMQP server on localhost:5672
>
> I found this page 
> https://ask.openstack.org/en/question/4581/cinder-unable-to-connect-to-rabbitmq/ 
> where zipmaster07 answered: "rabbit_virtual_host = /nova
>
> I commented out the "rabbit_virtual_host", restarted all cinder 
> services and I can see a successful connection to AMQP now."
>
> And I did that, now it's connected, but what is this 
> rabbit_virtual_host? What does it do?
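>
> For reference, the line I commented out in /etc/cinder/cinder.conf is:
>
> #rabbit_virtual_host = /nova
>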
> I'll test my volumes now.
>
> Regards.
>
> 2013/9/23 Guilherme Russi <luisguilherme.cr at gmail.com>
>
>     I've looked at the quantum/server.log and nova-scheduler.log and
>     they show:
>     2013-09-23 16:25:27     INFO [quantum.openstack.common.rpc.common]
>     Reconnecting to AMQP server on localhost:5672
>     2013-09-23 16:25:27     INFO [quantum.openstack.common.rpc.common]
>     Connected to AMQP server on localhost:5672
>
>     2013-09-23 16:24:01.830 5971 INFO nova.openstack.common.rpc.common
>     [-] Reconnecting to AMQP server on 127.0.0.1:5672
>     2013-09-23 16:24:01.879 5971 INFO nova.openstack.common.rpc.common
>     [-] Connected to AMQP server on 127.0.0.1:5672
>
>     But in the cinder-volume.log:
>
>     INFO [cinder.openstack.common.rpc.common] Reconnecting to AMQP
>     server on localhost:5672
>     2013-09-23 16:46:04    ERROR [cinder.openstack.common.rpc.common]
>     AMQP server on localhost:5672 is unreachable: Socket closed.
>     Trying again in 30 seconds.
>
>
>     I was typing when you sent your answer; here it is:
>
>     rabbitmq-server status
>     Status of node rabbit@hemera ...
>     [{pid,17266},
>      {running_applications,[{rabbit,"RabbitMQ","2.7.1"},
>                             {os_mon,"CPO  CXC 138 46","2.2.7"},
>                             {sasl,"SASL  CXC 138 11","2.1.10"},
>                             {mnesia,"MNESIA  CXC 138 12","4.5"},
>                             {stdlib,"ERTS  CXC 138 10","1.17.5"},
>                             {kernel,"ERTS  CXC 138 10","2.14.5"}]},
>      {os,{unix,linux}},
>      {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit]
>     [smp:4:4] [rq:4] [async-threads:30] [kernel-poll:true]\n"},
>      {memory,[{total,30926120},
>               {processes,14354392},
>               {processes_used,14343184},
>               {system,16571728},
>               {atom,1124441},
>               {atom_used,1120343},
>               {binary,268176},
>               {code,11134417},
>               {ets,2037120}]},
>      {vm_memory_high_watermark,0.4},
>      {vm_memory_limit,3299385344}]
>     ...done.
>
>
>     Yes, I've restarted the rabbitmq-server, but as you can see in the
>     logs, quantum and nova are connected.
>
>     Ideas??
>
>     Regards.
>
>
>
>     2013/9/23 Marcelo Dieder <marcelodieder at gmail.com>
>
>         What's the status of your rabbitmq?
>
>         # rabbitmqctl status
>
>         And did you try restarting the rabbitmq?
>
>         Regards,
>         Marcelo Dieder
>
>
>         On 09/23/2013 03:31 PM, Guilherme Russi wrote:
>>         Yes, it is on the same host:
>>
>>         cat /etc/cinder/cinder.conf
>>         [DEFAULT]
>>         rootwrap_config=/etc/cinder/rootwrap.conf
>>         sql_connection = mysql://cinder:password@localhost/cinder
>>         api_paste_confg = /etc/cinder/api-paste.ini
>>         iscsi_helper=ietadm
>>         #iscsi_helper = tgtadm
>>         volume_name_template = volume-%s
>>         volume_group = cinder-volumes
>>         verbose = True
>>         auth_strategy = keystone
>>         iscsi_ip_address=localhost
>>         rabbit_host = localhost
>>         rabbit_port = 5672
>>         rabbit_userid = rabbit
>>         rabbit_password = password
>>         rabbit_virtual_host = /nova
>>         state_path = /var/lib/cinder
>>         lock_path = /var/lock/cinder
>>         volumes_dir = /var/lib/cinder/volumes
>>
>>         Another idea?
>>
>>         Regards.
>>
>>
>>         2013/9/23 Gangur, Hrushikesh (HP Converged Cloud - R&D -
>>         Sunnyvale) <hrushikesh.gangur at hp.com>
>>
>>             Ensure that the cinder configuration files have the
>>             correct IP of the rabbitmq host.
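>>
>>             For example, something like this in /etc/cinder/cinder.conf
>>             (a sketch; use whatever address the rabbitmq host actually
>>             has, the IP below is just the one from your error message):
>>
>>             rabbit_host = 192.168.3.1
>>             rabbit_port = 5672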
>>
>>             From: Guilherme Russi [mailto:luisguilherme.cr at gmail.com]
>>             Sent: Monday, September 23, 2013 10:53 AM
>>             To: openstack
>>             Subject: [Openstack] Cinder error
>>
>>             Hello guys, I'm reinstalling my OpenStack Grizzly and I'm
>>             having a problem with cinder: I'm getting "Error: Unable to
>>             retrieve volume list." I was looking at the cinder log and
>>             I only found this error: ERROR
>>             [cinder.openstack.common.rpc.common] AMQP server on
>>             192.168.3.1:5672 is unreachable: Socket closed. Trying
>>             again in 30 seconds.
>>
>>             I have a partition created:
>>
>>             pvdisplay
>>
>>             --- Physical volume ---
>>
>>             PV Name   /dev/sda7
>>
>>             VG Name   cinder-volumes
>>
>>             PV Size   279,59 GiB / not usable 1,00 MiB
>>
>>             Allocatable   yes
>>
>>             PE Size   4,00 MiB
>>
>>             Total PE  71574
>>
>>             Free PE   66454
>>
>>             Allocated PE  5120
>>
>>             PV UUID KHITxF-uagF-xADc-F8fu-na8t-1OXT-rDFbQ6
>>
>>             root@hemera:/home/hemera# vgdisplay
>>
>>             --- Volume group ---
>>
>>             VG Name   cinder-volumes
>>
>>             System ID
>>
>>             Format  lvm2
>>
>>             Metadata Areas  1
>>
>>             Metadata Sequence No  6
>>
>>             VG Access   read/write
>>
>>             VG Status   resizable
>>
>>             MAX LV  0
>>
>>             Cur LV  2
>>
>>             Open LV   0
>>
>>             Max PV  0
>>
>>             Cur PV  1
>>
>>             Act PV  1
>>
>>             VG Size   279,59 GiB
>>
>>             PE Size   4,00 MiB
>>
>>             Total PE  71574
>>
>>             Alloc PE / Size   5120 / 20,00 GiB
>>
>>             Free  PE / Size   66454 / 259,59 GiB
>>
>>             VG UUID mhN3uV-n80a-zjeb-uR35-0IPb-BFmo-G2Qehu
>>
>>             I don't know how to fix this error, any help?
>>
>>             Thank you all and regards.
>>
>>             Guilherme.
