Update: I was wrong about a couple of things:
- I’m having issues with 2 different NTFS drives. The ext4 one is fine.
- The issue appears to be driver-related. If the drive is auto-mounted by udisks2.service, it shows up as “type ntfs3” in the output of mount, but if I mount it manually it shows up as “type fuseblk”. It seems the FUSE-based driver works but the ntfs3 one is broken (the commands I’ve been using to check and work around this are below).
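For reference, these are the commands (the device name and mount point are just examples from my setup, and this assumes the ntfs-3g package is installed):

mount | grep -Ei 'ntfs|fuseblk'               # each mount shows either "type ntfs3" or "type fuseblk"
sudo umount /mnt/shared
sudo mount -t ntfs-3g /dev/sdb1 /mnt/shared   # force the FUSE driver

From what I’ve read, a matching /etc/fstab entry should also make udisks2 use the FUSE driver instead of picking ntfs3 (the UUID is a placeholder):

UUID=XXXX-XXXX  /mnt/shared  ntfs-3g  defaults,uid=1000,gid=1000  0  0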
I’m at my wits’ end with this one.
SSDs on my computer:
- a drive with a Windows 11 install
- a drive with an Ubuntu 22.04 install (the OS I use the most)
- a drive with an Ubuntu 20.04 install (for a piece of touchy software I needed that didn’t seem to like 22.04)
- an NTFS drive to share files across OSes
- an EXT4 drive to share files across OSes
These are all physically separate drives, not partitions of the same drive or anything like that. They’re all SATA SSDs except the Windows one, which is NVMe.
Ubuntu 22.04 is acting up. It seems it can write to the NTFS and EXT4 drives fine but has difficulty reading from them. If I write a file, e.g. echo "hello world" > test, the file appears, but when I try to read it, it seems empty. After a reboot I can read the file.
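One way I know of to check whether this is the page cache lying rather than the disk itself: flush and drop the caches between the write and the read, so the read has to come back from disk. The mount point here is just an example:

echo "hello world" > /mnt/shared/test
sync                                          # flush dirty pages out to the drive
echo 3 | sudo tee /proc/sys/vm/drop_caches    # drop page/dentry/inode caches
cat /mnt/shared/test                          # on a healthy mount this prints the file back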
When I first encountered this, I thought the NTFS drive was failing, so I did a large rsync (to back up the data) and got some read errors, then ran a SMART test, which came back clean.
Since then, with further testing, only 22.04 seems to have these issues. Both Windows and 20.04 can read and write fine. However, Windows caught some filesystem errors with the drive after the large rsync.
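For reference, the SMART test was along these lines (smartmontools; /dev/sdb is just an example, check lsblk for the right device):

sudo smartctl -t short /dev/sdb    # kick off a short self-test
sudo smartctl -a /dev/sdb          # attributes plus the self-test log once it finishes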
I’m about to reinstall Ubuntu but I’m worried about making things worse somehow. It would be nice to have an idea of what’s going on.
Any advice?
The NTFS one is a Samsung 860 EVO 1TB. The ext4 one is a cheap generic-brand 256GB.
I’ve got an AMD 5950X CPU. The motherboard is an Aorus X570 Elite. Not sure about the SATA controller, except that it’s whatever comes with that motherboard.
In my searching I found something about Ubuntu changing its NTFS and ext4 drivers, but I’m not sure if that was a change between 20.04 and 22.04 or an earlier one. Also, the fact that it’s both drives makes me think something else is probably going on.
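In case it helps, this is how I checked what’s actually in play. My understanding is that the in-kernel ntfs3 driver only landed in kernel 5.15, which is what 22.04 ships, while 20.04’s stock kernel predates it and uses the FUSE-based ntfs-3g driver:

uname -r         # kernel version
modinfo ntfs3    # only succeeds if the kernel has the ntfs3 module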
What I do know is something weird is going on, and my googling so far hasn’t gotten me any good results (just things about not being able to mount drives in the first place, or drives mounting read-only, neither of which is this situation).
If you haven’t already, try running hdparm on your drives to get an idea of whether they’re at least doing large raw reads straight off the disk at a reasonable speed.
This is output from the little NUC I’m using right now:
# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 465.8G  0 disk
├─sda1   8:1    0   512M  0 part /boot/efi
├─sda2   8:2    0 464.3G  0 part /
└─sda3   8:3    0   976M  0 part [SWAP]

# hdparm -i /dev/sda

/dev/sda:
 Model=Samsung SSD 860 EVO 500GB, FwRev=RVT02B6Q, SerialNo=S3YANB0KB24583B
 ...

# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads: 1526 MB in 3.00 seconds = 508.21 MB/sec
If your results are really poor for this test, then it points more at the drive / cable / controller / Linux controller driver.
If the results are okay, then the issue is probably something more like a logical partitioning / filesystem driver issue.
I’m not sure what a good Linux benchmark that also exercises the filesystem layer would be, other than bonnie++, which has been around forever. Someone else might have a more current suggestion.
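If you do try bonnie++, the classic invocation is something like this (the path and size are examples; -s is the test file size in MB and should be at least 2x your RAM so the page cache can’t serve the reads back):

bonnie++ -d /mnt/shared -s 65536 -n 0 -u $(whoami)
# -n 0 skips the small-file tests; -u is only needed when running as root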