/ gets remounted read-only; can't find the original cause in dmesg because systemd spams it with "read-only filesystem" errors afterwards

During normal use, my Ubuntu laptop intermittently hits some I/O error and then remounts / read-only (yes, I've checked the SMART logs, nothing shows up; I've also replaced the drive, no luck). The problem is that I can never read the original underlying error (if one is reported at all): by the time I look, the filesystem is already read-only, so the error never gets written to disk, and when I run dmesg the kernel's circular buffer has been completely flooded by other processes complaining about the read-only filesystem, so that's all I can see.

To reiterate, I can't look through /var/log because the filesystem was already remounted read-only by then, so rsyslog couldn't record any errors at all.

I also have limited ability to start new programs, because at that point the disk subsystem returns generic I/O errors; only whatever is already resident in the cache will still run.

The only thing I can think of is turning /var/log into an in-memory tmpfs and restarting rsyslog, waiting for the error to happen, and hoping I can find it in /var/log. Maybe even running some standard tools like cat, grep, less, etc. in a loop so they'll definitely still be available after the disk subsystem fails.
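Roughly what I have in mind (the 100M size is arbitrary; rsyslog recreates missing log files, and everything under /var/log would be lost on reboot):

    # optional: keep a copy of the current logs
    sudo cp -a /var/log /var/log.bak

    # mount an in-memory tmpfs over /var/log and restart rsyslog
    sudo mount -t tmpfs -o size=100M,mode=0755 tmpfs /var/log
    sudo systemctl restart rsyslog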

Is there a simpler option?

Solutions:


Solution 1

Set the log_buf_len= kernel parameter via GRUB to something large, e.g. log_buf_len=8M, so the kernel's ring buffer can hold far more messages before old entries are overwritten.
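For example, on Ubuntu this usually means editing /etc/default/grub, appending the parameter to whatever GRUB_CMDLINE_LINUX_DEFAULT already contains (the "quiet splash" shown here is just the typical default), then regenerating the GRUB config and rebooting:

    # /etc/default/grub -- append log_buf_len to the kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash log_buf_len=8M"

    # apply the change and reboot
    sudo update-grub
    sudo reboot

With an 8M ring buffer, the original I/O error should still be visible in dmesg above the later flood of read-only filesystem messages, since it takes far longer for the buffer to wrap around.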


All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
