Ceph outage causes filesystem errors on VM
Satish Patel
satish.txt at gmail.com
Thu Feb 16 14:56:48 UTC 2023
Folks,
I am running a small 3-node compute/controller cluster with 3-node Ceph storage in
my lab. Yesterday a power outage took all my nodes down. After rebooting all
nodes, Ceph reports good health and no errors (in ceph -s).
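For reference, ceph -s can still report HEALTH_OK while individual placement
groups or pools were affected. A few deeper checks worth running after an
outage (a sketch; the pool name "vms" is just an example, substitute your own):

```shell
# Show per-issue detail that the one-line "ceph -s" summary hides
ceph health detail

# List pools with their replication settings; min_size 1 is risky
# because a single surviving copy can acknowledge writes
ceph osd pool ls detail

# Check a specific pool for PGs with inconsistent object replicas
# (pool name "vms" is illustrative)
rados list-inconsistent-pg vms

# Force deep scrubs so latent on-disk inconsistencies surface
ceph osd deep-scrub all
```

These only read cluster state (apart from the scrub request), so they are safe
to run on a cluster that is already back in HEALTH_OK.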
When I started using the existing VMs I noticed the following errors, which
look like data loss. This is a lab machine with essentially zero activity on
the VMs, yet data was still lost and the filesystem is corrupt. Is this normal?
I am not using erasure coding; would that help in this situation?
blk_update_request: I/O error, dev sda, sector 233000 op 0x1: (WRITE) flags
0x800 phys_seg 8 prio class 0
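In case it is useful to anyone hitting the same error: a failed WRITE at the
block layer like this usually means the guest filesystem needs a check before
the VM can be trusted again. A rough recovery sketch, assuming an ext4
filesystem on /dev/sda1 inside the guest (device and filesystem type are
assumptions; run from a rescue environment with the filesystem unmounted):

```shell
# Check and repair the guest filesystem (ext4 shown; must be unmounted)
fsck.ext4 -fy /dev/sda1

# For XFS guests the equivalent repair tool would be:
# xfs_repair /dev/sda1

# Afterwards, verify the whole block device reads cleanly end to end,
# which will surface any remaining unreadable sectors
dd if=/dev/sda of=/dev/null bs=4M status=progress
```

If fsck reports large amounts of damage, restoring the volume from a snapshot
or backup is likely safer than trusting the repaired filesystem.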