Help needed with SSD TRIM. Cannot understand the commands

Hello all

I am using a new laptop, purchased about two weeks back. It came with an SSD, and I cannot seem to figure out how to verify that it has TRIM support.
I am referring to this Arch Wiki page:
https://wiki.archlinux.org/index.php/Solid_state_drive
From the Arch Wiki:

To verify TRIM support, run:
$ lsblk --discard
And check the values of DISC-GRAN (discard granularity) and DISC-MAX (discard max bytes) columns. Non-zero values indicate TRIM support.

My output:

$ lsblk --discard
NAME        DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda                0        4K       2G         0
├─sda1             0        4K       2G         0
├─sda2             0        4K       2G         0
├─sda3             0        4K       2G         0
├─sda4             0        4K       2G         0
├─sda5             0        4K       2G         0
├─sda6             0        4K       2G         0
└─sda7             0        4K       2G         0
nvme0n1            0      512B       2T         0
├─nvme0n1p1        0      512B       2T         0
└─nvme0n1p2        0      512B       2T         0

DISC-GRAN and DISC-MAX are non-zero for every entry. Does that mean my HDD also supports TRIM? (I had never heard that HDDs are supposed to be trimmed.)

Again from the Arch Wiki:

Alternatively, install hdparm package and run:
# hdparm -I /dev/sda | grep TRIM

  • Data Set Management TRIM supported (limit 1 block)

The relevant output on my laptop:

$ sudo hdparm -I /dev/sda | grep TRIM
	   *	Data Set Management TRIM supported (limit 10 blocks)
	   *	Deterministic read data after TRIM
$ sudo hdparm -I /dev/nvme0n1 | grep TRIM
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device

This also detects TRIM support on my HDD, and refuses to run on my SSD. Out of this confusion, I haven’t enabled fstrim.timer, so my SSD hasn’t been trimmed even once. Going by the wiki page, I want “Periodic TRIM” and not “Continuous TRIM”.
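From what I gather, hdparm is an ATA tool, so the “Inappropriate ioctl” on the NVMe drive is expected rather than a sign of missing TRIM. If the nvme-cli package is installed, something like this should decode the controller’s ONCS capability field, which advertises Data Set Management (the NVMe counterpart of TRIM); the exact wording of the decoded line may vary between nvme-cli versions:

# decode ONCS; a "Data Set Management Supported" line would indicate
# TRIM support on the NVMe side
$ sudo nvme id-ctrl /dev/nvme0n1 -H | grep -i 'data set'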

The relevant part from inxi -F:

Drives:
  Local Storage: total: 1.14 TiB used: 184.82 GiB (15.8%)
  ID-1: /dev/nvme0n1 model: CL1-3D256-Q11 NVMe SSSTC 256GB size: 238.47 GiB
  ID-2: /dev/sda vendor: Western Digital model: WD10SPZX-75Z10T3
  size: 931.51 GiB
Partition:
  ID-1: / size: 233.24 GiB used: 13.28 GiB (5.7%) fs: ext4
  dev: /dev/nvme0n1p2
  ID-2: /boot/efi size: 511 MiB used: 12.4 MiB (2.4%) fs: vfat
  dev: /dev/nvme0n1p1
Swap:
  Alert: No Swap data was found.

Could someone please clear up what is happening, and tell me how to (safely) enable trim on my ssd.
Thanks :pray:

Edit: More details here

I think you’re making this more complicated than it needs to be.

Enabling the fstrim timer should be safe and will solve the problem.

sudo systemctl enable --now fstrim.timer
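If you want to double-check it afterwards, something like this should show that the timer is active and when it will next fire:

$ systemctl status fstrim.timer
$ systemctl list-timers fstrim.timer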
3 Likes

I will agree with this :sweat_smile:

But I’m still confused about all this. When I do an fstrim dry run, it runs on my HDD, not on my SSD. I would expect the opposite to happen.

$ fstrim -an
/mnt/hd1: 0 B (dry run) trimmed on /dev/sda6
/mnt/hd2: 0 B (dry run) trimmed on /dev/sda7

sda6 and sda7 are ext4 partitions on the HDD which are currently mounted. The SSD is at /dev/nvme0n1. So I suppose that even when fstrim.timer runs, it won’t touch my SSD; rather, it will try to do something on the HDD.
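As I understand the -a flag, fstrim simply walks the mounted filesystems, so listing the mounts (assuming findmnt’s type filter behaves as I expect) should show exactly the candidates it would consider:

# list mounted ext4/vfat filesystems, the ones fstrim -a would visit
$ findmnt -t ext4,vfat -o TARGET,SOURCE,FSTYPE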
Just for reference:

-a, --all                trim mounted filesystems
-n, --dry-run            does everything, but trim

I’m still reluctant to enable the service :cry:

Thank you

1 Like

Is there something wrong in the BIOS settings for the drives?
And could you show the contents of file /etc/fstab?

HDDs don’t need TRIM.

2 Likes

I have made some changes in the BIOS settings. The only storage-related setting I remember changing is disabling Intel RST and switching to AHCI. I also changed some battery charge settings, disabled Intel Turbo Boost, and disabled Bluetooth.

I cannot switch back to Intel RST in the BIOS, because I suppose it is not easy to dual-boot with it enabled. (Linux Mint had refused to detect my storage with RST enabled; with AHCI, it worked fine. I didn’t try EnOS with RST enabled.)

As for my fstab, this is what it looks like:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
# <file system>             <mount point>  <type>  <options>  <dump>  <pass>
UUID=9864-BC0B                            /boot/efi      vfat    umask=0077 0 2
UUID=f1f86514-4df2-422c-8a7a-1ad2b16e1013 /              ext4    defaults,noatime 0 1
UUID=f3251a85-d092-4e72-b5ce-d03ab2f888e2 /mnt/hd1	 ext4	 defaults,noatime 0 1
UUID=860c155d-b387-48d1-ac85-522fb2edc7db /mnt/hd2	 ext4	 defaults,noatime 0 1

I manually added the last two entries after install. They are 350 GB ext4 partitions each, and correspond to /dev/sda6 and /dev/sda7 respectively.
The other partitions on my HDD are NTFS and belong to Windows (dual-boot system); they are never mounted.

To check some ideas, could you show the output of:

 lsblk -fm
 sudo fdisk -l
1 Like

lsblk -fm

$ lsblk -fm
NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINT   SIZE OWNER GROUP MODE
sda                                                                                    931.5G root  disk  brw-rw----
├─sda1
│                                                                                        128M root  disk  brw-rw----
├─sda2
│    ntfs               6ECE1E82CE1E42AF                                               119.9G root  disk  brw-rw----
├─sda3
│    ntfs               2E1690B116907B91                                                 499M root  disk  brw-rw----
├─sda4
│    vfat   FAT32       D821-C2B7                                                        100M root  disk  brw-rw----
├─sda5
│    ntfs               072C5C6311B69261                                                 100G root  disk  brw-rw----
├─sda6
│    ext4   1.0   HDD1  f3251a85-d092-4e72-b5ce-d03ab2f888e2  157.1G    50% /mnt/hd1   355.5G root  disk  brw-rw----
└─sda7
     ext4   1.0   HDD2  860c155d-b387-48d1-ac85-522fb2edc7db    331G     0% /mnt/hd2   355.5G root  disk  brw-rw----
nvme0n1
│                                                                                      238.5G root  disk  brw-rw----
├─nvme0n1p1
│    vfat   FAT32       9864-BC0B                             498.6M     2% /boot/efi    512M root  disk  brw-rw----
└─nvme0n1p2
     ext4   1.0         f1f86514-4df2-422c-8a7a-1ad2b16e1013  208.4G     6% /            238G root  disk  brw-rw----

sudo fdisk -l

$ sudo fdisk -l
[sudo] password for lain:
Disk /dev/nvme0n1: 238.47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: CL1-3D256-Q11 NVMe SSSTC 256GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 98A2A4C5-9AA0-41CA-8D18-DFEEB2354D0C

Device           Start       End   Sectors  Size Type
/dev/nvme0n1p1    2048   1050623   1048576  512M Microsoft basic data
/dev/nvme0n1p2 1050624 500118158 499067535  238G Linux filesystem


Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10SPZX-75Z
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F2276898-8186-4D44-B608-E9DD4C849FC7

Device          Start        End   Sectors   Size Type
/dev/sda1        2048     264191    262144   128M Microsoft reserved
/dev/sda2      264192  251706071 251441880 119.9G Microsoft basic data
/dev/sda3   251707392  252729343   1021952   499M Windows recovery environment
/dev/sda4   252731392  252936191    204800   100M EFI System
/dev/sda5   252936192  462651391 209715200   100G Microsoft basic data
/dev/sda6   462651392 1208088575 745437184 355.5G Linux filesystem
/dev/sda7  1208088576 1953523711 745435136 355.5G Linux filesystem

I figured out one part of the problem.
A plain fstrim -an reports only the HDD partitions, but sudo fstrim -an also runs on the SSD.
I am foolish sometimes. :woman_facepalming: Sorry for wasting your time. :pray:

sudo fstrim -an
/mnt/hd1: 0 B (dry run) trimmed on /dev/sda6
/mnt/hd2: 0 B (dry run) trimmed on /dev/sda7
/boot/efi: 0 B (dry run) trimmed on /dev/nvme0n1p1
/: 0 B (dry run) trimmed on /dev/nvme0n1p2

I am left with one last question: how do I prevent fstrim from touching my HDD? Also, since TRIM on an HDD doesn’t make any sense, shouldn’t my HDD ignore TRIM commands even if fstrim tries to trim its partitions?
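One idea I had (untested, and assuming fstrim.service is a Type=oneshot unit that can be overridden like any other systemd service, which allows multiple ExecStart lines) would be a drop-in that replaces its command with explicit mountpoints, so only the SSD filesystems get trimmed:

$ sudo systemctl edit fstrim.service

Then, in the editor:

[Service]
# clear the packaged command first, then trim only the SSD mounts
ExecStart=
ExecStart=/usr/bin/fstrim -v /
ExecStart=/usr/bin/fstrim -v /boot/efi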

I ran fstrim on the empty HDD partition. It takes a few seconds and then reports that the partition has been trimmed.

$ sudo fstrim /mnt/hd2 -v
[sudo] password for lain:
/mnt/hd2: 348.8 GiB (374530048000 bytes) trimmed

Running it again, it exits instantly and reports zero bytes trimmed.

$ sudo fstrim /mnt/hd2 -v
/mnt/hd2: 0 B (0 bytes) trimmed

So I suppose it did something when I first ran it. HDDs are not supposed to be trimmed, AFAIK. The fstrim.timer trims the mounted filesystems listed in /etc/fstab, meaning it will trim my HDD every week (since it thinks my HDD has TRIM support). Might that be dangerous for my HDD? :thinking:
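Out of curiosity, I wanted to see exactly what the timer runs and how often; systemctl can print the unit files (I expect an OnCalendar=weekly line in the timer, but the packaged units may differ):

$ systemctl cat fstrim.timer fstrim.service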

The Arch Wiki says:

Warning: Users need to be certain that their SSD supports TRIM before attempting to use it. Data loss can occur otherwise!

I have never seen TRIM working on a spinning disk…

I mean:

  DISC-GRAN  discard granularity
  DISC-MAX  discard max bytes

These values should not be present on an HDD, aka a spinning disk… I see sda6/7 are mounted on the filesystem of the SSD… but this should not be an issue… or could it?

This might be an issue.

sda6/7 are ext4 partitions mounted under the SSD’s filesystem. Is there some other way to access them without mounting them there?

Also, DISC-GRAN and DISC-MAX are supposed to be zero for HDD partitions, but that’s not so in my case. Maybe that is why fstrim tries to trim sda6/7 as well.
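For what it’s worth, lsblk seems to read these values from sysfs, so the raw numbers can be checked directly (0 should mean no discard support):

# raw discard parameters the kernel reports for the disk
$ cat /sys/block/sda/queue/discard_granularity
$ cat /sys/block/sda/queue/discard_max_bytes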

I think mounting it should not be the issue; you could try booting from a live ISO and see if lsblk shows the same non-zero values there.

1 Like

Will try and report. :+1:

@joekamprad
I get the same output as earlier.

NAME        DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
loop0              0        0B       0B         0
sda                0        4K       2G         0
├─sda1             0        4K       2G         0
├─sda2             0        4K       2G         0
├─sda3             0        4K       2G         0
├─sda4             0        4K       2G         0
├─sda5             0        4K       2G         0
├─sda6             0        4K       2G         0
└─sda7             0        4K       2G         0
sdb                0        0B       0B         0
├─sdb1             0        0B       0B         0
├─sdb2             0        0B       0B         0
└─sdb3             0        0B       0B         0
nvme0n1            0      512B       2T         0
├─nvme0n1p1        0      512B       2T         0
└─nvme0n1p2        0      512B       2T         0

sdb is the pendrive I used to boot from.

I downloaded the latest Arch Linux ISO and booted it. I ran lsblk --discard, and it also thinks that my HDD can be trimmed.

Is it possible that the HDD is actually a hybrid drive that includes an SSD?

1 Like

I researched some more, and it looks like my HDD uses a technology I wasn’t yet aware of.

So the HDD in my laptop uses Shingled Magnetic Recording (SMR). I am not geek enough to completely understand it, but what I gathered is that it packs tracks more densely by overlapping them, so writing data to a track requires the data on adjacent tracks to be re-written too, making writes slower. A concern for people like me who write a lot of data.

These types of HDDs benefit from TRIM, which is why lsblk and fstrim reported TRIM support on my HDD.
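From what I read, there is also a sysfs attribute for zoned block devices, though apparently it only reports host-aware or host-managed SMR; a drive-managed SMR disk like mine will likely still show “none” here:

# "host-aware" or "host-managed" would indicate zoned SMR;
# drive-managed SMR hides its zones and reports "none"
$ cat /sys/block/sda/queue/zoned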

The general reputation of these HDDs on the internet doesn’t look too good. In my case, the product listing on Amazon didn’t mention any such thing, and even the Dell website doesn’t say that my HDD is SMR.
https://www.reddit.com/r/DataHoarder/comments/57eosc/smr_drives_aka_archive_drives_a_word_of_caution/

From Wikipedia

Western Digital, Toshiba and Seagate have sold SMR drives without labeling them as such, generating a large controversy, as SMR drives are much slower in some circumstances than PMR drives.[13] These practices were used in both data storage-dedicated (for servers, NASes and cold storage) and consumer-centric HDDs.

Edit: So I’ll enable fstrim.timer and let it trim my HDD partitions too, as it was doing earlier. Thanks everyone for your time.

7 Likes

Perhaps not relevant to the purpose here, but as a note: the fstab entries you made for the extra drives should end in 0 2 rather than 0 1, as fsck can’t check them all first! Root (/) needs that field set to 1 (it is usually added automatically), but other drives, if not to be skipped, should be marked as 2.
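For example, your last two entries would become (same UUIDs and options, only the pass field changed):

UUID=f3251a85-d092-4e72-b5ce-d03ab2f888e2 /mnt/hd1	 ext4	 defaults,noatime 0 2
UUID=860c155d-b387-48d1-ac85-522fb2edc7db /mnt/hd2	 ext4	 defaults,noatime 0 2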

1 Like

Thanks for the suggestion.

Should I mark both extra mounts as 0 2, or one as 0 2 and the other as 0 3?

You learn something new every day! Thanks for sharing the info.

Yeah… I don’t remember ever reading about shingled drives either :brain: