Ceph power outage causes filesystem errors on VMs

Satish Patel satish.txt at gmail.com
Thu Feb 16 05:09:44 UTC 2023


Folks,

I am running a small 3-node compute/controller setup with 3-node Ceph
storage in my lab. Yesterday a power outage took all my nodes down. After
rebooting all the nodes, Ceph seems to show good health and no errors (in
ceph -s).
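
For what it's worth, this is roughly how I double-checked the cluster
beyond ceph -s (the pool name "volumes" and the PG ID are placeholders
for my setup):

    # Overall health, with details on any warnings
    ceph health detail

    # OSD and placement group summaries
    ceph osd stat
    ceph pg stat

    # Force a deep scrub of a pool to verify object checksums
    # ("volumes" is a placeholder for the real pool name)
    ceph osd pool deep-scrub volumes

    # If a PG reports inconsistent, list the affected objects
    # (replace 2.1f with the PG ID from ceph health detail)
    rados list-inconsistent-obj 2.1f --format=json-pretty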

When I started using the existing VMs I noticed the errors below, which
look like data loss. This is a lab machine with close to zero activity on
the VMs, yet data was still lost and the filesystem is corrupt. Is this
normal?
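
One thing I suspect is the disk cache mode of the VMs, since as far as I
understand a volatile writeback cache can drop in-flight writes on power
loss. A minimal sketch of how to inspect it (the libvirt domain name
instance-00000001 is a placeholder):

    # Show the disk driver settings for a guest; cache='none' or
    # cache='writethrough' survives power loss better than 'writeback'
    virsh dumpxml instance-00000001 | grep -A2 "<driver"

    # Check the librbd client cache options on the hypervisor side
    # (assuming the options live in the cluster config database)
    ceph config get client rbd_cache
    ceph config get client rbd_cache_writethrough_until_flush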

I am not using erasure coding; would that help in this matter?
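
For context, my pools are plain replicated ones, created more or less
like the first command below; the erasure-coded variant is shown only
for contrast (pool names, PG counts, and profile values are made up):

    # Replicated pool (what I use): each object stored on 3 OSDs
    ceph osd pool create volumes 64 64 replicated

    # Erasure-coded alternative: objects split into k data chunks
    # plus m parity chunks
    ceph osd erasure-code-profile set myprofile k=2 m=1
    ceph osd pool create ec-volumes 64 64 erasure myprofile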

blk_update_request: I/O error, dev sda, sector 233000 op 0x1: (WRITE) flags
0x800 phys_seg 8 prio class 0
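
To recover the guest I am planning to boot it from a rescue image and
run a filesystem check; a rough sketch, assuming an ext4 root on
/dev/sda1 and an instance named myvm:

    # Put the instance into rescue mode so the damaged disk is not
    # the boot device
    openstack server rescue myvm

    # From the rescue environment: confirm the device is visible,
    # then repair the filesystem
    lsblk
    fsck.ext4 -f -y /dev/sda1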

Please find the attached screenshot.
Screenshot: Screen Shot 2023-02-16 at 12.00.35 AM.png
<https://lists.openstack.org/pipermail/openstack-discuss/attachments/20230216/d6abe83c/attachment-0001.png>

