[openstack-dev] [OpenStack-Infra] tgt restart fails in Cinder startup "start: job failed to start"
Sukhdev Kapur
sukhdevkapur at gmail.com
Mon Mar 10 17:36:37 UTC 2014
Hey Rafael,
If it helps any, I have noticed that this issue generally occurs after
several runs of tempest tests. I run all smoke tests. I have been always
suspicious that some new test got added which is not cleaning up
afterwards. Can you point me to which tempest test corresponds to
test_boot_pattern? I can do some investigation as well.
Thanks
-Sukhdev
On Mon, Mar 10, 2014 at 10:10 AM, Rafael Folco <rafaelfolco at gmail.com>wrote:
> This looks very familiar....
> https://bugzilla.redhat.com/show_bug.cgi?id=848942
> I'm not sure this is exactly the same bahavior, try turning debug messages
> on by adding -d to the tgtd start.
> I'm investigating a similar issue with tgt on test_boot_pattern test. It
> turns out that the file written to the volume in the first instance is not
> found by the second instance. But that's a separate issue.
>
> -rfolco
>
>
> On Mon, Mar 10, 2014 at 12:54 PM, Sukhdev Kapur <sukhdevkapur at gmail.com>wrote:
>
>> I see the same issue. This issue has crept in during the latest flurry of
>> check-ins. I started noticing this issue a day or two before the Icehouse
>> Feature Freeze deadline.
>>
>> I tried restarting tgt as well, but, it does not help.
>>
>> However, rebooting the VM helps clear it up.
>>
>> Has anybody else seen it as well? Does anybody have a solution for it?
>>
>> Thanks
>> -Sukhdev
>>
>>
>>
>>
>>
>> On Mon, Mar 10, 2014 at 8:37 AM, Dane Leblanc (leblancd) <
>> leblancd at cisco.com> wrote:
>>
>>> I don't know if anyone can give me some troubleshooting advice with this
>>> issue.
>>>
>>> I'm seeing an occasional problem whereby after several DevStack
>>> unstack.sh/stack.sh cycles, the tgt daemon (tgtd) fails to start during
>>> Cinder startup. Here's a snippet from the stack.sh log:
>>>
>>> 2014-03-10 07:09:45.214 | Starting Cinder
>>> 2014-03-10 07:09:45.215 | + return 0
>>> 2014-03-10 07:09:45.216 | + sudo rm -f /etc/tgt/conf.d/stack.conf
>>> 2014-03-10 07:09:45.217 | + _configure_tgt_for_config_d
>>> 2014-03-10 07:09:45.218 | + [[ ! -d /etc/tgt/stack.d/ ]]
>>> 2014-03-10 07:09:45.219 | + is_ubuntu
>>> 2014-03-10 07:09:45.220 | + [[ -z deb ]]
>>> 2014-03-10 07:09:45.221 | + '[' deb = deb ']'
>>> 2014-03-10 07:09:45.222 | + sudo service tgt restart
>>> 2014-03-10 07:09:45.223 | stop: Unknown instance:
>>> 2014-03-10 07:09:45.619 | start: Job failed to start
>>> jenkins at neutronpluginsci:~/devstack$ 2014-03-10 07:09:45.621 | +
>>> exit_trap
>>> 2014-03-10 07:09:45.622 | + local r=1
>>> 2014-03-10 07:09:45.623 | ++ jobs -p
>>> 2014-03-10 07:09:45.624 | + jobs=
>>> 2014-03-10 07:09:45.625 | + [[ -n '' ]]
>>> 2014-03-10 07:09:45.626 | + exit 1
>>>
>>> If I try to restart tgt manually without success:
>>>
>>> jenkins at neutronpluginsci:~$ sudo service tgt restart
>>> stop: Unknown instance:
>>> start: Job failed to start
>>> jenkins at neutronpluginsci:~$ sudo tgtd
>>> librdmacm: couldn't read ABI version.
>>> librdmacm: assuming: 4
>>> CMA: unable to get RDMA device list
>>> (null): iser_ib_init(3263) Failed to initialize RDMA; load kernel
>>> modules?
>>> (null): fcoe_init(214) (null)
>>> (null): fcoe_create_interface(171) no interface specified.
>>> jenkins at neutronpluginsci:~$
>>>
>>> The config in /etc/tgt is:
>>>
>>> jenkins at neutronpluginsci:/etc/tgt$ ls -l
>>> total 8
>>> drwxr-xr-x 2 root root 4096 Mar 10 07:03 conf.d
>>> lrwxrwxrwx 1 root root 30 Mar 10 06:50 stack.d ->
>>> /opt/stack/data/cinder/volumes
>>> -rw-r--r-- 1 root root 58 Mar 10 07:07 targets.conf
>>> jenkins at neutronpluginsci:/etc/tgt$ cat targets.conf
>>> include /etc/tgt/conf.d/*.conf
>>> include /etc/tgt/stack.d/*
>>> jenkins at neutronpluginsci:/etc/tgt$ ls conf.d
>>> jenkins at neutronpluginsci:/etc/tgt$ ls /opt/stack/data/cinder/volumes
>>> jenkins at neutronpluginsci:/etc/tgt$
>>>
>>> I don't know if there's any missing Cinder config in my DevStack localrc
>>> files. Here's one that I'm using:
>>>
>>> MYSQL_PASSWORD=nova
>>> RABBIT_PASSWORD=nova
>>> SERVICE_TOKEN=nova
>>> SERVICE_PASSWORD=nova
>>> ADMIN_PASSWORD=nova
>>>
>>> ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,rabbit
>>> enable_service mysql
>>> disable_service n-net
>>> enable_service q-svc
>>> enable_service q-agt
>>> enable_service q-l3
>>> enable_service q-dhcp
>>> enable_service q-meta
>>> enable_service q-lbaas
>>> enable_service neutron
>>> enable_service tempest
>>> VOLUME_BACKING_FILE_SIZE=2052M
>>> Q_PLUGIN=cisco
>>> declare -a Q_CISCO_PLUGIN_SUBPLUGINS=(openvswitch nexus)
>>> declare -A
>>> Q_CISCO_PLUGIN_SWITCH_INFO=([10.0.100.243]=admin:Cisco12345:22:neutronpluginsci:1/9)
>>> NCCLIENT_REPO=git://github.com/CiscoSystems/ncclient.git
>>> PHYSICAL_NETWORK=physnet1
>>> OVS_PHYSICAL_BRIDGE=br-eth1
>>> TENANT_VLAN_RANGE=810:819
>>> ENABLE_TENANT_VLANS=True
>>> API_RATE_LIMIT=False
>>> VERBOSE=True
>>> DEBUG=True
>>> LOGFILE=/opt/stack/logs/stack.sh.log
>>> USE_SCREEN=True
>>> SCREEN_LOGDIR=/opt/stack/logs
>>>
>>> Here are links to a log showing another localrc file that I use, and the
>>> corresponding stack.sh log:
>>>
>>>
>>> http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_console_log.txt
>>>
>>> http://128.107.233.28:8080/job/neutron/1390/artifact/vpnaas_stack_sh_log.txt
>>>
>>> Does anyone have any advice on how to debug this, or recover from this
>>> (beyond rebooting the node)? Or am I missing any Cinder config?
>>>
>>> Thanks in advance for any help on this!!!
>>> Dane
>>>
>>>
>>>
>>> _______________________________________________
>>> OpenStack-Infra mailing list
>>> OpenStack-Infra at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>>>
>>
>>
>> _______________________________________________
>> OpenStack-Infra mailing list
>> OpenStack-Infra at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140310/c61ef684/attachment.html>
More information about the OpenStack-dev
mailing list