This discussion pertains to SSDs, since the benchmarks were done on an SSD.
I followed up on all previous discussions about SSDs and obtained some additional advice from several sources. The recommendation was to verify the physical and logical sector sizes, and to align partitions on a physical sector boundary. My SSD uses 1 KiB sectors, so that is my boundary; other SSDs have a 4096-byte physical sector size.
To determine the best allocation, review the boundaries. The recommendation was to use gparted: click on the Partition tab, choose Resize/Move, and verify that the free space preceding your partition is zero and that the partition size is a multiple of your SSD's physical sector size. I simply placed all my partitions apart from boot and UEFI on a 4 KiB boundary, with sizes in multiples of 4 KiB.
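The same check can be done from the command line. Something like the following works; the device name and the little helper function are my own illustration, not part of the original advice. Partition start offsets in sysfs are reported in 512-byte units, so alignment is simple arithmetic:

```shell
# Report logical and physical sector sizes (device name is an assumption;
# substitute your own SSD, e.g. /dev/nvme0n1):
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda

# Partition starts appear in /sys/class/block/<partition>/start,
# in 512-byte units. A start is aligned if start * 512 is a
# multiple of the physical sector size.
is_aligned() {
  start_sectors=$1   # partition start in 512-byte sectors
  phys_bytes=$2      # physical sector size in bytes (e.g. 4096)
  if [ $(( start_sectors * 512 % phys_bytes )) -eq 0 ]; then
    echo aligned
  else
    echo misaligned
  fi
}

is_aligned 2048 4096   # typical 1 MiB-aligned first partition -> aligned
is_aligned 63 4096     # old DOS-era layout -> misaligned on a 4 KiB drive
```

gparted's "free space preceding: 0" check is equivalent to verifying that each partition begins on such a boundary.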
A second recommendation is to look at the output of df.
The df command shows how much RAM is in use for tmpfs mounts. If you have sufficient RAM (4 GiB or more), consider adding the following to /etc/fstab:
tmpfs /tmp tmpfs nodev,nosuid,noatime,size=1G 0 0
In lieu of noatime, there was discussion of using relatime if you require last-access-time information. As I understand it, relatime only writes an atime update when the previous atime is older than the file's mtime or ctime, or more than 24 hours old. So frequently accessed files do not generate a write on every access, which keeps most of noatime's benefit while still providing usable access times.
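Once the fstab line is in place, a quick way to confirm that /tmp really is RAM-backed (these are standard util-linux tools; the exact output will vary per system):

```shell
# Activate the new /etc/fstab entry without rebooting (needs root):
# sudo mount /tmp

# Confirm the filesystem type backing /tmp (-T resolves the path to
# its containing mount, so this works even before the mount is active):
findmnt -n -o FSTYPE -T /tmp

# Show the size and current usage of /tmp:
df -h /tmp
```

If the first command prints "tmpfs", writes to /tmp are going to RAM rather than the SSD.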
More info is available from many sources, such as the SUSE and Debian documentation, but the best collection of good information I found is Antikythera's SSD posting. I can't cross-post, so perhaps my reply should have followed that thread instead.
My own benchmarks are qualitative: my SSD feels snappier given my adjustment for boundaries and my use of tmpfs for /tmp.
Back to two feet on the ground. Starting with an empty SSD, I allocated a 10 GiB partition, ran a generator program to fill that partition, then deleted the file and ran fstrim.
The rest of the tests likewise filled the partition; I wanted to ensure that each test began with the same internal SSD state.
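The fill-then-trim cycle can be sketched roughly as follows. The function name, mount point, and sizes are my own illustration, not the original generator program:

```shell
# Fill a mounted filesystem with a file of the given size, delete it,
# then discard the freed blocks so the SSD starts each run in the
# same internal state.
fill_and_trim() {
  dir=$1               # mount point of the partition under test
  mib=$2               # file size in MiB
  # 1 MiB = 256 blocks of 4 KiB, matching the 4 KiB output block size.
  dd if=/dev/zero of="$dir/fill" bs=4k count=$(( mib * 256 )) 2>/dev/null
  rm "$dir/fill"
  # fstrim needs root and a filesystem that supports discard; skip otherwise.
  sudo -n fstrim -v "$dir" >/dev/null 2>&1 || true
}

# e.g. a 10 GiB run on the partition under test:
# fill_and_trim /mnt/test 10240
```

Repeating this before each filesystem's run is what keeps the comparisons starting from the same baseline.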
I ran the same tests for btrfs, xfs, ext4 and f2fs. I also copied partition to partition (same disk) for each filesystem. I used the defaults within /etc/fstab, since I read that relatime is now the default unless noatime is specified.
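You can confirm which atime policy a mounted filesystem actually ended up with; relatime has been the kernel default since 2.6.30, and the kernel reports it in the mount options even when fstab doesn't list it:

```shell
# Show the atime-related mount options for the filesystem containing /
# (substitute any mount point you care about):
findmnt -n -o OPTIONS -T / | tr ',' '\n' | grep atime || echo "no atime option listed"
```

This typically prints "relatime" with default mounts, or "noatime" if you set it explicitly.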
I ran the generator program (4 KiB output block size) from the tmpfs-backed /tmp.
The partition-to-partition copies were basically the same across filesystems. f2fs was half a second faster than xfs (unlike your results in panel 3), and xfs may be faster than ext4. None of these tests are scientific; I would have to repeat them about 19 times to obtain a 5% confidence interval. My system dates from 2009, with 8 GiB of RAM (DDR2) and an Asus P5Q motherboard.