<div dir="ltr"><div>This looks very familiar.... <a href="https://bugzilla.redhat.com/show_bug.cgi?id=848942">https://bugzilla.redhat.com/show_bug.cgi?id=848942</a><br>I'm not sure this is exactly the same bahavior, try turning debug messages on by adding -d to the tgtd start.<br>

I'm investigating a similar issue with tgt in the test_boot_pattern test. It turns out that the file written to the volume by the first instance is not found by the second instance. But that's a separate issue.

-rfolco

On Mon, Mar 10, 2014 at 12:54 PM, Sukhdev Kapur <sukhdevkapur@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I see the same issue. This issue has crept in during the latest flurry of check-ins. I started noticing this issue a day or two before the Icehouse Feature Freeze deadline.<div>
<br></div><div>I tried restarting tgt as well, but, it does not help. </div>
<div><br></div><div>However, rebooting the VM helps clear it up.</div><div><br></div><div>Has anybody else seen it as well? Does anybody have a solution for it? </div><div><br></div><div>Thanks</div><span class="HOEnZb"><font color="#888888"><div>
-Sukhdev


On Mon, Mar 10, 2014 at 8:37 AM, Dane Leblanc (leblancd) <leblancd@cisco.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I don't know if anyone can give me some troubleshooting advice with this issue.<br>

I'm seeing an occasional problem whereby, after several DevStack unstack.sh/stack.sh cycles, the tgt daemon (tgtd) fails to start during Cinder startup. Here's a snippet from the stack.sh log:

2014-03-10 07:09:45.214 | Starting Cinder
2014-03-10 07:09:45.215 | + return 0
2014-03-10 07:09:45.216 | + sudo rm -f /etc/tgt/conf.d/stack.conf
2014-03-10 07:09:45.217 | + _configure_tgt_for_config_d
2014-03-10 07:09:45.218 | + [[ ! -d /etc/tgt/stack.d/ ]]
2014-03-10 07:09:45.219 | + is_ubuntu
2014-03-10 07:09:45.220 | + [[ -z deb ]]
2014-03-10 07:09:45.221 | + '[' deb = deb ']'
2014-03-10 07:09:45.222 | + sudo service tgt restart
2014-03-10 07:09:45.223 | stop: Unknown instance:
2014-03-10 07:09:45.619 | start: Job failed to start
jenkins@neutronpluginsci:~/devstack$ 2014-03-10 07:09:45.621 | + exit_trap
2014-03-10 07:09:45.622 | + local r=1
2014-03-10 07:09:45.623 | ++ jobs -p
2014-03-10 07:09:45.624 | + jobs=
2014-03-10 07:09:45.625 | + [[ -n '' ]]
2014-03-10 07:09:45.626 | + exit 1
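
(For debugging, assuming this is Ubuntu with upstart managing tgt: the job's console output normally ends up in /var/log/upstart/tgt.log, so something like the following may show why the start fails. The path and job name are the usual defaults and may differ on other setups:)

sudo initctl status tgt                    # current state of the upstart job
sudo tail -n 50 /var/log/upstart/tgt.log   # output captured from the failing start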

If I try to restart tgt manually, it also fails:

jenkins@neutronpluginsci:~$ sudo service tgt restart
stop: Unknown instance:
start: Job failed to start
jenkins@neutronpluginsci:~$ sudo tgtd
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
(null): iser_ib_init(3263) Failed to initialize RDMA; load kernel modules?
(null): fcoe_init(214) (null)
(null): fcoe_create_interface(171) no interface specified.
jenkins@neutronpluginsci:~$

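(One caveat, hedged: the librdmacm/iser and fcoe messages above are often printed even on hosts where tgtd runs fine, so they may not be the real failure. Two quick checks that might narrow it down, assuming the standard iSCSI port 3260 and standard tools; this is only a sketch:)

ps aux | grep tgtd                 # is a leftover tgtd from the previous run still alive?
sudo netstat -tlnp | grep 3260     # is something already listening on the iSCSI port?
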
The config in /etc/tgt is:

jenkins@neutronpluginsci:/etc/tgt$ ls -l
total 8
drwxr-xr-x 2 root root 4096 Mar 10 07:03 conf.d
lrwxrwxrwx 1 root root 30 Mar 10 06:50 stack.d -> /opt/stack/data/cinder/volumes
-rw-r--r-- 1 root root 58 Mar 10 07:07 targets.conf
jenkins@neutronpluginsci:/etc/tgt$ cat targets.conf
include /etc/tgt/conf.d/*.conf
include /etc/tgt/stack.d/*
jenkins@neutronpluginsci:/etc/tgt$ ls conf.d
jenkins@neutronpluginsci:/etc/tgt$ ls /opt/stack/data/cinder/volumes
jenkins@neutronpluginsci:/etc/tgt$

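(In light of the Red Hat bug referenced at the top of the thread, it may also be worth confirming, at the moment the failure happens, that the stack.d symlink still resolves: targets.conf includes /etc/tgt/stack.d/*, so a dangling symlink at startup time could matter. It resolves in the listing above, but a quick check right after a failed start, purely as a sketch:)

ls -ld /etc/tgt/stack.d /opt/stack/data/cinder/volumes   # does the include path still resolve?
stat -L /etc/tgt/stack.d                                 # errors out if the symlink is dangling
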
I don't know if there's any missing Cinder config in my DevStack localrc files. Here's one that I'm using:

MYSQL_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_TOKEN=nova
SERVICE_PASSWORD=nova
ADMIN_PASSWORD=nova
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,rabbit
enable_service mysql
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-l3
enable_service q-dhcp
enable_service q-meta
enable_service q-lbaas
enable_service neutron
enable_service tempest
VOLUME_BACKING_FILE_SIZE=2052M
Q_PLUGIN=cisco
declare -a Q_CISCO_PLUGIN_SUBPLUGINS=(openvswitch nexus)
declare -A Q_CISCO_PLUGIN_SWITCH_INFO=([10.0.100.243]=admin:Cisco12345:22:neutronpluginsci:1/9)
NCCLIENT_REPO=git://github.com/CiscoSystems/ncclient.git
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1
TENANT_VLAN_RANGE=810:819
ENABLE_TENANT_VLANS=True
API_RATE_LIMIT=False
VERBOSE=True
DEBUG=True
LOGFILE=/opt/stack/logs/stack.sh.log
USE_SCREEN=True
SCREEN_LOGDIR=/opt/stack/logs

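(One more hedged thought on leftover state between runs: since this localrc relies on the default Cinder LVM backing file, it might be worth checking whether a loop device or volume group from the previous stack.sh run is still hanging around when tgt fails to start. The names below are the DevStack defaults as I remember them and may differ:)

sudo losetup -a    # any stale loop device for the cinder backing file?
sudo vgs           # is an old stack-volumes volume group still present?
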
Here are links to a log showing another localrc file that I use, and the corresponding stack.sh log:

http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_console_log.txt
http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_stack_sh_log.txt

Does anyone have any advice on how to debug this, or how to recover from it (beyond rebooting the node)? Or am I missing some Cinder config?

Thanks in advance for any help on this!!!
Dane

_______________________________________________
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra