SD card's file system written over all attached mass storage devices. Possible cause?
The file system from an inserted SD card (/dev/mmcblk0) was, without any command from me, written over the beginning of every attached mass storage device, including the live USB flash drive hosting the running Linux installation itself. Their file systems were destroyed, and I have no idea what caused it.
Before this happened, I was running some read-only performance benchmarks using GNOME Disks and the ddrescue command-line utility. For ddrescue, I specified /dev/null as the output file and monitored the speed using iotop. At some point, the computer became unresponsive. I waited roughly a minute and then switched to tty2.
tty2 was full of ext4 error messages such as "directory contains a hole at offset 0" and "lblock 0 mapped to illegal pblock (length 1)". At this point, it was clear that something had gone seriously wrong.
As expected, the OS would not boot the next time. I then plugged the operating system drive, a live USB stick, into a different computer running Windows and examined it using IsoBuster. It showed not an ext4 but an exFAT file system, containing folders with names such as DCIM, with the same dates as the directories on the SD card that was inserted into the computer at the time of the crash. Windows Explorer itself offered to format the USB stick, meaning it could not read the file system.
The same happened to the other attached mass storage devices and to the internal storage. Obviously the damage is irreparable and the operating system needs to be reinstalled. Much of the data is backed up elsewhere. However, no storage device, no matter how reliable, would have protected against this kind of failure; even the file systems of attached backup media could have been destroyed. Since the failure happened at block level, only something like MTP (Media Transfer Protocol), which is usually terrible due to its poor performance, would have shielded a device, since MTP does not expose the raw block device to the host. What makes such a condition especially dangerous is that it can destroy backup media as well, so even someone who has properly done their backup chores could still lose data or find it difficult to recover.
Does anyone have the slightest idea what might have caused this? Could it have been malware? Could the random-access tests by GNOME Disks have confused the drive controllers (if such a thing is even possible)?
Before this happened, I had not written to any location other than /dev/null using ddrescue. Even if I had picked the wrong device in ddrescue, that still would not explain how the SD card's contents were written to all attached mass storage devices. In GNOME Disks, I unchecked the write-benchmark option each time before starting a benchmark.
The ddrescue command I used was:
    sudo ddrescue /dev/sdf /dev/null --force

I first entered it without the --force flag, and ddrescue printed something about a "non-standard file", which I assumed referred to /dev/null. To make it run, I added the --force flag.
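For comparison, the invocation pattern the manual recommends puts options before the files, writes to a regular file, and records progress in a mapfile so an interrupted run can be resumed. A minimal sketch using throwaway files instead of real devices (source.img, rescued.img and rescue.map are made-up example names):

```shell
# Create a small throwaway file to stand in for /dev/sdf.
dd if=/dev/zero of=source.img bs=1M count=4 status=none
# Options before infile/outfile, a regular outfile, and a mapfile.
# Guarded in case ddrescue is not installed on this machine.
if command -v ddrescue >/dev/null 2>&1; then
    ddrescue source.img rescued.img rescue.map
fi
```

With a regular, not-yet-existing output file like rescued.img, no --force is needed at all; the flag only comes into play when the output already exists and is not a regular file.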
According to the ddrescue manual (a filter prevents me from linking it, but it is easy to find via a web search), options should be placed before the input and output file paths:
ddrescue [options] infile outfile [logfile]
I put the --force flag after the infile and outfile arguments, assuming it makes no difference, since the order does not matter in any other command-line tool I remember using, with the exception of find, which requires options such as -iname to come after the path.
Could putting --force after the file arguments be what caused this? Even so, that would not explain how the contents of /dev/mmcblk0 ended up on the other block devices. Also, the fact that ddrescue only ran after I appended --force suggests that it does recognize options placed after the file arguments. In any case, under no circumstance did the block device (e.g. /dev/sdf) come after /dev/null on the command line. When I watched the speed in iotop, it looked realistic for each device, suggesting data was indeed being read correctly from the devices.
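One safe way to check whether the position of --force changes ddrescue's behaviour is to replay both forms against throwaway files rather than real devices (all file names here are invented for the experiment):

```shell
# Throwaway input file standing in for a block device.
printf 'test data' > in.bin
# Guarded in case ddrescue is not installed.
if command -v ddrescue >/dev/null 2>&1; then
    ddrescue in.bin out_a.bin           # options-first form (none needed here)
    ddrescue in.bin out_b.bin --force   # trailing --force, mirroring the question
    cmp -s out_a.bin out_b.bin && echo "both forms produced identical output"
fi
```

If the two outputs match, the trailing flag is being parsed as an option, not as a file argument, which would rule out the option ordering as the culprit.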
To me, this is a first-time occurrence; I have never experienced anything this strange on a computer before. The closest explanation I can think of is some defect in the GNOME Disks benchmark, though I have used the utility several times before and this never once occurred.
Playing with dd-type tools is a dangerous business. The man page of ddrescue v1.23-2+b1 states the following:
    Usage: ddrescue [options] infile outfile [mapfile]

    Always use a mapfile unless you know you won't need it. Without a
    mapfile, ddrescue can't resume a rescue, only reinitiate it.

    NOTE: In versions of ddrescue prior to 1.20 the mapfile was called
    'logfile'. The format is the same; only the name has changed.
Perhaps the command got confused when you put the --force flag at the end? The best way to get to the bottom of this is to try to recreate the issue, mixing up the parameters and observing the consequences, using throwaway files or expendable devices rather than anything you care about.
Anyway, I do not understand why you would measure read speed using ddrescue; GNOME Disks does a great job. If you want a CLI tool, why not use pv and cat? It is much safer:
    pv /dev/sda | cat > /dev/null
    1GiB 0:00:22 [ 468MiB/s] [========>            ]  5% ETA 0:05:48
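If pv is not installed, GNU dd alone can produce a similar read-only measurement with status=progress. A self-contained sketch using a throwaway file in place of the device path (bench.bin is an example name; on a real device you would read from something like /dev/sda instead):

```shell
# Create a small throwaway file to stand in for the block device.
dd if=/dev/zero of=bench.bin bs=1M count=8 status=none
# Read-only benchmark: read everything and discard it.
# On a real device, adding iflag=direct bypasses the page cache
# so the figure reflects the device rather than cached reads.
dd if=bench.bin of=/dev/null bs=1M status=progress
```

Like the pv pipeline, this never opens any device for writing, so a mistake in the command line cannot overwrite anything.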