[Openstack-operators] Buffer I/O error on cinder volumes

Michael Stang michael.stang at dhbw-mannheim.de
Wed Jul 27 17:00:23 UTC 2016


Hi all,
 
we found the problem. It seems to be an issue in the kernel we had (Ubuntu 14.04.3,
3.19.x) with iSCSI or multipath, more likely with iSCSI, and maybe only in our
specific configuration(?). It seems that kernel 4.0.x also suffers from
this; 4.2.x I have not tested. We have now gone back to 3.16.x, the errors are
gone, and everything works as it should.
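For anyone who needs to do the same on 14.04, the downgrade amounts to switching HWE kernel stacks. A rough sketch follows; the metapackage names are the standard Ubuntu HWE ones as I recall them, but verify them with `apt-cache search linux-generic-lts` before running anything:

```shell
# Sketch: switch from the 3.19 (vivid) HWE kernel back to the
# 3.16 (utopic) HWE kernel on Ubuntu 14.04. Requires root; check
# the metapackage names on your system before running.
sudo apt-get install linux-generic-lts-utopic   # installs a 3.16.x kernel
sudo apt-get remove linux-generic-lts-vivid     # drops the 3.19.x stack
sudo update-grub
sudo reboot
# after the reboot, confirm the running kernel:
uname -r
```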
 
We used 14.04.3 because later versions (14.04.4 and 16.04.0(1)) suffer from a
kernel bug affecting the SCSI controller in our hardware (HP BL465c G8).
 
So the newest OS/kernel version is not always the best, I think ;-)
 
Kind regards,
Michael
 

> Michael Stang <michael.stang at dhbw-mannheim.de> hat am 26. Juli 2016 um 18:28
> geschrieben:
> 
>  Hi all,
>   
>  we got a strange problem on our new Mitaka installation. We see these
> messages in the syslog on the block storage node:
>   
> 
>  Jul 25 09:10:33 block1 tgtd: device_mgmt(246) sz:69
> params:path=/dev/cinder-volumes/volume-41d6c674-1d0d-471d-ad7d-07e9fab5c90d
>  Jul 25 09:10:33 block1 tgtd: bs_thread_open(412) 16
>  Jul 25 09:10:55 block1 kernel: [1471887.006569] sd 5:0:0:0: [sdc] FAILED
> Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>  Jul 25 09:10:55 block1 kernel: [1471887.006585] sd 5:0:0:0: [sdc] Sense Key :
> Illegal Request [current]
>  Jul 25 09:10:55 block1 kernel: [1471887.006589] sd 5:0:0:0: [sdc] Add. Sense:
> Invalid field in cdb
>  Jul 25 09:10:55 block1 kernel: [1471887.006590] sd 5:0:0:0: [sdc] CDB:
>  Jul 25 09:10:55 block1 kernel: [1471887.006593] Write(16): 8a 00 00 00 00 00
> 00 1c d0 00 00 00 40 00 00 00
>  Jul 25 09:10:55 block1 kernel: [1471887.006603] blk_update_request: critical
> target error, dev sdc, sector 1888256
>  Jul 25 09:10:55 block1 kernel: [1471887.025141] blk_update_request: critical
> target error, dev dm-0, sector 1888256
>  Jul 25 09:10:55 block1 kernel: [1471887.043979] buffer_io_error: 6695
> callbacks suppressed
>  Jul 25 09:10:55 block1 kernel: [1471887.043981] Buffer I/O error on dev dm-1,
> logical block 235776, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.063894] Buffer I/O error on dev dm-1,
> logical block 235777, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.082592] Buffer I/O error on dev dm-1,
> logical block 235778, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.100903] Buffer I/O error on dev dm-1,
> logical block 235779, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.119625] Buffer I/O error on dev dm-1,
> logical block 235780, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.138360] Buffer I/O error on dev dm-1,
> logical block 235781, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.157247] Buffer I/O error on dev dm-1,
> logical block 235782, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.175086] Buffer I/O error on dev dm-1,
> logical block 235783, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.193637] Buffer I/O error on dev dm-1,
> logical block 235784, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.212358] Buffer I/O error on dev dm-1,
> logical block 235785, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.232830] sd 5:0:0:0: [sdc] FAILED
> Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>  Jul 25 09:10:55 block1 kernel: [1471887.232833] sd 5:0:0:0: [sdc] Sense Key :
> Illegal Request [current]
>  Jul 25 09:10:55 block1 kernel: [1471887.232836] sd 5:0:0:0: [sdc] Add. Sense:
> Invalid field in cdb
>  Jul 25 09:10:55 block1 kernel: [1471887.232837] sd 5:0:0:0: [sdc] CDB:
>  Jul 25 09:10:55 block1 kernel: [1471887.232839] Write(16): 8a 00 00 00 00 00
> 00 1d 10 00 00 00 40 00 00 00
>  Jul 25 09:10:55 block1 kernel: [1471887.232847] blk_update_request: critical
> target error, dev sdc, sector 1904640
>  Jul 25 09:10:55 block1 kernel: [1471887.251046] sd 5:0:0:0: [sdc] FAILED
> Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>  Jul 25 09:10:55 block1 kernel: [1471887.251049] sd 5:0:0:0: [sdc] Sense Key :
> Illegal Request [current]
>  Jul 25 09:10:55 block1 kernel: [1471887.251052] sd 5:0:0:0: [sdc] Add. Sense:
> Invalid field in cdb
>  Jul 25 09:10:55 block1 kernel: [1471887.251053] sd 5:0:0:0: [sdc] CDB:
>  Jul 25 09:10:55 block1 kernel: [1471887.251054] Write(16): 8a 00 00 00 00 00
> 00 1d 50 00 00 00 40 00 00 00
>  Jul 25 09:10:55 block1 kernel: [1471887.251062] blk_update_request: critical
> target error, dev sdc, sector 1921024
>  Jul 25 09:10:55 block1 kernel: [1471887.269726] sd 5:0:0:0: [sdc] FAILED
> Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>  Jul 25 09:10:55 block1 kernel: [1471887.269729] sd 5:0:0:0: [sdc] Sense Key :
> Illegal Request [current]
>  Jul 25 09:10:55 block1 kernel: [1471887.269732] sd 5:0:0:0: [sdc] Add. Sense:
> Invalid field in cdb
>  Jul 25 09:10:55 block1 kernel: [1471887.269733] sd 5:0:0:0: [sdc] CDB:
>  Jul 25 09:10:55 block1 kernel: [1471887.269735] Write(16): 8a 00 00 00 00 00
> 00 1d 90 00 00 00 11 88 00 00
>  Jul 25 09:10:55 block1 kernel: [1471887.269744] blk_update_request: critical
> target error, dev sdc, sector 1937408
>  Jul 25 09:10:55 block1 kernel: [1471887.287739] blk_update_request: critical
> target error, dev dm-0, sector 1904640
>  Jul 25 09:10:55 block1 kernel: [1471887.309002] blk_update_request: critical
> target error, dev dm-0, sector 1921024
>  Jul 25 09:10:55 block1 kernel: [1471887.330162] blk_update_request: critical
> target error, dev dm-0, sector 1937408
>  Jul 25 09:10:55 block1 tgtd: bs_rdwr_request(370) io error 0x9a87c0 35 0 0 0,
> Input/output error
>  Jul 25 09:11:50 block1 tgtd: conn_close(103) connection closed, 0x9a7dc0 1
>  Jul 25 09:11:50 block1 tgtd: conn_close(109) session 0x9a72c0 1
> 
>   
> 
>  A bit earlier we had these messages on the compute and object storage nodes as well.
> 
>  We use iSCSI storage (HP MSA 2040) over multipathd (4 paths: sdb, sdc, sdd,
> sde) on all nodes. On the compute nodes we have OCFS2 for the instance store;
> to solve this problem we changed the OCFS2 block size from 4k to 1k, and then
> the error disappeared.
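For reference, the block size change means reformatting the volume. A minimal sketch, with a placeholder device path (mkfs.ocfs2 destroys existing data, so this is illustrative only):

```shell
# /dev/mapper/mpatha is a placeholder for the real multipath device.
# -b sets the filesystem block size (here 1 KB instead of the 4 KB default).
mkfs.ocfs2 -b 1024 -C 4K -L instance-store /dev/mapper/mpatha
# confirm the block size afterwards:
tunefs.ocfs2 -Q "block size = %B\n" /dev/mapper/mpatha
```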
> 
>  On the object store nodes we had XFS, but changing the block size did not
> help, so we used OCFS2 with a 1k block size there as well, and the error went away.
> 
>  Now we have the same problem on the block storage node with LVM. When we
> create a new volume from an image there is no error, and the new instance even
> boots fine from the volume. But when we log into the instance, these errors
> pop up in the syslog of the block storage node, and in the instance we get
> errors that the filesystem is damaged and mounted read-only.
> 
>  I tried different multipath.conf configurations and also different images
> and formats (raw/qcow2), but the error stays the same.
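For context, a sketch of the kind of multipath.conf device stanza in question; the vendor/product strings and the individual settings here are assumptions, not our actual config, and should be checked against `multipath -ll` output and HP's MSA documentation rather than copied as-is:

```
devices {
    device {
        vendor                "HP"
        product               "MSA 2040 SAN"
        path_grouping_policy  group_by_prio
        prio                  alua
        path_checker          tur
        failback              immediate
        no_path_retry         18
    }
}
```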
> 
>  Did anyone encounter this error so far, or does anyone know what might be
> the problem?
> 
>  Any idea is highly appreciated :-)
> 
>  Thanks and kind regards,
> 
>  Michael
> 
>   
> 
>   
> 

 

> _______________________________________________
>  OpenStack-operators mailing list
>  OpenStack-operators at lists.openstack.org
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

 

