Ceph: power outage causes filesystem errors on VMs
16 Feb 2023, 10:09 a.m.
Folks, I am running a small lab setup: 3 compute/controller nodes backed by a 3-node Ceph storage cluster. Yesterday a power outage took all of the nodes down. After rebooting everything, Ceph reports good health with no errors in ceph -s. However, when I started using an existing VM, I noticed errors like the following, which look like data loss:

    blk_update_request: I/O error, dev sda, sector 233000 op 0x1: (WRITE) flags 0x800 phys_seg 8 prio class 0

This is a lab machine with essentially zero activity on the VMs, yet it still lost data and the filesystem is corrupt. Is this normal? I am not using erasure coding; would using it help in this situation? See the attached screenshot.
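For anyone hitting the same thing, here is a minimal sketch of checks that can surface problems ceph -s alone misses, assuming admin access on a Ceph node and a libvirt/KVM hypervisor. The PG id 2.1a, domain name myvm, and device /dev/sda1 below are placeholders, not values from this report:

    # Look beyond "ceph -s": per-check detail, and any PG not active+clean
    ceph health detail
    ceph pg dump pgs_brief | grep -v 'active+clean'

    # Deep-scrub a suspect PG so silent corruption on an OSD surfaces,
    # then list any inconsistent objects it found
    ceph pg deep-scrub 2.1a
    rados list-inconsistent-obj 2.1a --format=json-pretty

    # On the hypervisor: check how the VM disk is attached; a writeback
    # cache mode can lose in-flight writes on sudden power loss
    virsh dumpxml myvm | grep -A4 '<disk'

    # Inside the guest (with the filesystem unmounted, or from a rescue
    # image): check and repair the filesystem
    fsck -f /dev/sda1

Note that a clean ceph -s only says the cluster's replicas agree with each other; it cannot tell you whether writes that were still sitting in a guest or hypervisor cache at the moment of the outage ever reached Ceph at all, which is why the cache-mode check above is worth doing.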
Participants (1): Satish Patel