Setting up and using SSD drives in Fedora Linux - Fedora Support Forums and Community
  1. #1
    stevea Guest

    Setting up and using SSD drives in Fedora Linux

    Some Practical Linux SSD drive Techniques
    [Notation assumes drive '/dev/sda' is an SSD Flash drive.]

    0/ Does your drive support TRIM ? [INFORMATIONAL]
    su -c 'hdparm -I /dev/sda' | grep TRIM
           *    Data Set Management TRIM supported
           *    Deterministic read data after TRIM
    Asterisk '*' above indicates the feature is present.

    [Note: If your SSD drive does not support TRIM, then some Techniques do not apply].

    1/ Align File Systems & Partitions. [PERFORMANCE]

    Align partitions to a multiple of at least 4KiB (eight 512B sectors).
    [Note: gparted defaults to 1MiB alignment]

    $ su -c 'gdisk -l /dev/sda'
    Number  Start (sector)    End (sector)  Size       Code  Name
       1            2048        63490047   30.3 GiB    0700  Linux/Windows data
       2        63490048        80295935   8.0 GiB     8200  Linux swap
       3        80295936       173000703   44.2 GiB    0700  Linux/Windows data
       4       173000704       234440703   29.3 GiB    0700
    Each start sector above is a multiple of 8, so aligned to 4KiB.
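    The multiple-of-8 rule can be checked with a line of shell arithmetic; a sketch (the 'aligned' helper name is mine, not a standard tool):

    ```shell
    # Report whether a partition start sector (in 512-byte sectors) is 4KiB-aligned.
    # 8 sectors * 512 bytes = 4KiB, so the start must be a multiple of 8.
    aligned() {
        if [ $(( $1 % 8 )) -eq 0 ]; then
            echo "sector $1: aligned"
        else
            echo "sector $1: NOT aligned"
        fi
    }

    aligned 2048      # partition 1 start from the gdisk listing above
    aligned 63490048  # partition 2 start
    ```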

    2/ Choose a file system type that supports TRIM. [PERFORMANCE,TRIM-only]

    The 3.0 kernel fully supports TRIM for the following file systems, when the 'discard' mount option is used.
    btrfs, ext4, (v)fat, gfs2, nilfs2, xfs.

    The following file systems can use the 'fstrim' utility to perform a trim of mounted filesystems.

    btrfs, ext3, ext4, ocfs2 and xfs.
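    For this second group, a trim can be issued by hand (or from cron) as root, on a mounted filesystem whose drive supports TRIM; the mount point below is illustrative:

    ```
    # Trim unused blocks on the filesystem mounted at /home.
    # -v reports how many bytes were trimmed.
    su -c 'fstrim -v /home'
    ```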

    A SWAP block device will use TRIM automatically, without configuration, if the drive supports it.

    The block managers used by LVM and mdadm (software RAID) pass TRIM through to the device (since the 2.6.37 kernel).

    3/ Know your filesystem. [WEAR, btrfs optimization]

    btrfs has an "ssd" mount option that reduces the rate of mirrored superblock & metadata updates.

    4/ Select a file system block size of at least 4KiB. [PERFORMANCE]

    btrfs, ext3, ext4, gfs2, nilfs2, xfs - default is 4KiB block size.

    ext3,ext4: blocksize is controlled by /etc/mke2fs.conf.
    ocfs2: 4KiB is the minimum.
    vfat: see mkfs.vfat options -s and -S.
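    For example, the block size can be set explicitly at filesystem creation time (the device name here is just the example partition from Technique 1):

    ```
    # Create an ext4 filesystem with an explicit 4KiB block size.
    su -c 'mkfs.ext4 -b 4096 /dev/sda3'
    ```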

    5/ Never use defragmentation on an SSD drive. [WEAR]

    SSD defragmentation generates writes which wear the drive, and does not significantly improve performance.

    6/ Never use 'shred' or similar utilities to scrub files. [WEAR]

    Utilities such as 'shred' overwrite a file repeatedly. SSD writes are NOT made to the same physical block, so these methods cause wear with no advantage. 'rm' is sufficient.

    7/ Eliminate "Elevator" disk scheduler. [PERFORMANCE, RELIABILITY]

    Rotating disks use the 'elevator' algorithm to schedule block access. This algorithm makes no sense for an SSD, where there are no cylinders and seek time is nearly constant.

    Individual drives can use distinct scheduling algorithms by adding the following line to /etc/rc.d/rc.local
    echo noop > /sys/block/sda/queue/scheduler

    This causes operations to complete more nearly in-order, which can be an advantage for crash recovery.
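    An alternative to rc.local is a udev rule keyed on the kernel's per-device 'rotational' flag, so the scheduler is set for any non-rotating disk as it appears; a sketch (the rule filename is my choice):

    ```
    # /etc/udev/rules.d/60-ssd-scheduler.rules
    # Select the 'noop' elevator for any non-rotating block device.
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
    ```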

    8/ Reduce SSD Write Activity. [WEAR]
    This is a list of methods to reduce the amount of SSD drive writes and therefore reduce wear and increase drive lifespan. Each method has both PROs and CONs. These techniques are listed roughly in order from least controversial, to most controversial.

    8A/ Minimize file and directory access-time writes.

    POSIX filesystems record an access time (atime) for files. Storing access times means that each read of a file or directory eventually causes a write operation. These atime writes can be reduced in three distinct ways with the 'relatime', 'nodiratime' and 'noatime' mount options.

    i) 'relatime' causes the file system to update the access time only as needed to keep atimes in the correct relative order, and is the mount default in recent Fedora. A few legacy applications like 'mutt' require this relative ordering. This technique causes a moderate reduction in metadata writes.

    ii) 'nodiratime' causes the file system to not update access times for directory entry accesses.

    iii) 'noatime' prevents atime writes for both directories and files, effectively eliminating all atime writes.

    PRO: Each method reduces writes and therefore drive wear.
    CON: Each method reduces the amount of atime information.

    RECOMMENDATION: Unless you use mutt or similar, mounting with 'noatime' is strongly recommended for SSD drives.
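    Combined with the 'discard' option from Technique 2, a typical SSD fstab entry looks like (UUID elided, as elsewhere in this guide):

    ```
    UUID=308[...]d56 /home ext4 discard,noatime 0 0
    ```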

    8B/ Reduce log writes to SSD.

    The system logger maintains numerous continually-growing log files, and these can cause voluminous writes. Alternatives are to reduce the volume of messages, write logs to a rotating drive, send log messages over the net to another system for recording, or write logs to a ramdisk.

    i) Reduce the volume of log messages.
    Accomplished by raising the severity level of messages logged, for example from 'info' and above to 'warn' and above, in the /etc/rsyslog.conf file. See "man rsyslog.conf" for details.

    PRO: reduces log volume
    CON: reduces log information content
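    As a sketch, assuming the stock Fedora rule that logs 'info' and above to /var/log/messages, the change in /etc/rsyslog.conf is a single selector:

    ```
    # Before: *.info;mail.none;authpriv.none;cron.none    /var/log/messages
    # After - log only warnings and above:
    *.warn;mail.none;authpriv.none;cron.none    /var/log/messages
    ```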

    ii) Write logs to a rotating drive.
    An fstab entry similar to ....
    /dev/sdb4 /var/log ext4 noatime 0 0
    causes the device to be mounted on top of the /var/log directory. However, this mount occurs after rsyslog has already opened some log files, so after the mount (perhaps in /etc/rc.d/rc.local) it is necessary to send rsyslogd a SIGHUP signal to make it close and reopen its log files.

    /bin/kill -s HUP $(cat /var/run/

    PRO: completely eliminates log writes to SSD
    CON: requires a separate rotating disk
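    If you would rather not track rsyslogd's pid file, 'pkill' can deliver the SIGHUP by process name instead; a sketch of the rc.local line:

    ```
    # Ask rsyslogd to close and reopen its log files after /var/log is remounted.
    /usr/bin/pkill -HUP rsyslogd
    ```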

    iii) Send logs over the net.
    There are three transport protocols supported by current rsyslogd: UDP, TCP and RELP. Configuration is discussed in "man rsyslog.conf". The receiving system's firewall and rsyslogd must be configured to accept the log messages.

    PRO: completely eliminates log writes to SSD
    CON: requires a separate log server

    iv) Write logs to a ramdisk
    An fstab entry similar to ....
    none /var/log tmpfs size=10% 0 0
    and a /etc/rc.d/rc.local line that sends rsyslogd a SIGHUP (as in method 'ii' above)
    allocate a maximum of 10% of DRAM for use by logs.

    PRO: completely eliminates log writes to SSD
    CON: uses some DRAM. If the DRAM allocation fills, logging will cease. On a crash or reboot, all log messages are lost.
    COMMENT: Despite the considerable CONs this method is practical.

    RECOMMENDATION: Technique 'i' is recommended without reservation and technique 'iv' is suggested as an aggressive approach.

    8C/ Configure browser to use memory caches.

    For firefox, use the URL "about:config" and set
    browser.cache.disk.enable false
    browser.cache.memory.capacity 65536
    browser.cache.memory.max_entry_size 65536

    Similar settings exist for thunderbird and the chrome browser.

    PRO: No apparent performance deficit.
    CON: Uses a small amount of DRAM.

    RECOMMENDATION: Unqualified recommendation.

    8D/ Move /tmp to ramdisk

    Add this line to /etc/fstab
    none /tmp tmpfs size=10% 0 0
    where 10% is the maximum fraction of DRAM available to /tmp.

    PRO: All /tmp writes avoid SSD wear.
    CON: /tmp may fill; /tmp content, though generally unimportant, is lost after reboot.

    RECOMMENDATION: recommended

    8E/ Consider journals for ext3, ext4.

    WARNING: *** DO NOT run without a journal until you understand the potential hazards.

    Journals require considerable additional disk I/O, so they cause wear and some performance degradation.

    File system journals allow for clean(er) crash recovery. On the boot after an 'unclean' shutdown, your Linux system will fsck file systems and attempt to mount and proceed automatically. This is generally successful with journaling in place. Running without a journal makes catastrophic file system failure more probable, though still unlikely.

    Two approaches:

    i) Eliminate journals altogether (*controversial*)
    Some filesystems may be easily reconstructed, making risk acceptable.

    To eliminate journalling, run this against an unmounted or read-only ext3/ext4 file system:
    tune2fs -O ^has_journal /dev/sda1

    ii) Reduce the amount of journal writes
    Mount each SSD ext3 or ext4 file system with the "data=writeback" option. This means the file system will journal only metadata, not file data.

    The journaling mode of ext3 and ext4 file systems cannot be changed after mount. So 'regular' file systems like /home can have their journalling mode set in the fstab like...

    UUID=308[...]d56 /home ext4 discard,noatime,data=writeback 0 0

    But since the root '/' is mounted by the initramfs, the parameter MUST be included in the grub.cfg configuration file, like,

    linux /boot/vmlinuz- ... elevator=noop rootflags=data=writeback

    [WARNING: **** for Linux 3.0 and later kernels, setting the ext3/4 root file system option "data=..." in fstab may prevent the remount and may cause boot failure].

    RECOMMENDATION: Method 'ii' is safe and has performance and wear advantages. Method 'i' (journal elimination) should only be used when file system reconstruction is available.
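    To confirm whether a filesystem currently has a journal (before or after applying method 'i'), list its features with tune2fs; the device name is illustrative:

    ```
    # 'has_journal' in the feature list means a journal is present.
    su -c 'tune2fs -l /dev/sda1' | grep -i features
    ```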

    Last edited by antikythera; 11th January 2017 at 10:53 AM. Reason: 8C - added reference to thunderbird as the same works there too

  2. #2
    stevea Guest

    Re: SSD drives under Linux


    9/ Do we really need or want SWAP on an SSD ? [WEAR]
    Linux Swap has two important uses; page swapping and suspending to disk as used by hibernate.

    When the system is running low on memory and more is requested, the kernel can free some pages by writing them to swap. When there is no swap, or when the swap is filled, the kernel must kill some process and harvest its pages. Page swapping means that all processes can continue (till swap is filled), but at a very slow pace. Running out of swap means that processes are killed and the system behavior will seem erratic. For an end-user, interactive system the poor swap performance may make the system unusable.

    In the case of hibernate, a laptop or mobile device copies its memory image to swap, arranges to reload that image at reboot, and removes system power.

    All - preferentially put swap on a rotating disk if available.
    Server - swap is necessary to avoid erratic behavior.
    Laptop - if hibernate is needed, then an SSD swap partition is required.
    Interactive PC - swap is practical, but if adequate DRAM is available, consider using no swap.

    Adding this to /etc/sysctl.conf
    vm.oom_kill_allocating_task = 1
    causes the OOM killer to kill the process requesting extra memory (less erratic).

    10/ Using an SSD as a general disk cache. [PERFORMANCE]


    There are Linux kernel patches that allow one to use an SSD drive as a sort of 'cache' for rotating storage. This can dramatically improve the average I/O storage performance of the system. The goal is improved system performance, at a cost of substantial SSD drive wear. I have not tested these.


  3. #3
    stevea Guest

    Re: SSD drives under Linux

    Background Discussion of SSD drive Issues


    Modern flash SSD drives have several characteristics.
      Fast read/write speed.
      Ignorably small and constant-valued seek times.
      Very low total power consumption (often <2W peak).
      Lightweight, zero noise.
      Tolerant of harsh environments (temps, shock, vibration) 
      A limited number of writes per cell.  Current high-density MLC chips are rated at only 3000 write cycles!
      Cell errors will develop as the drive is used.
      An entire page (often 128KB to 1MB of storage) must be erased at one time and this is a slow process (10s of milliseconds).
      High cost per unit storage (currently ~$1.50USD/GiB)
    An erased page of flash is set to a fixed state (imagine all zero bits). New data may be written anywhere in the page, even in small units. It's possible to change any "zero" bit to a "one", but once a bit is written to a "one" it cannot be erased back to a "zero" except when the entire large page (128KB-1MB) is erased.

    The SSD controllers must account for the limitations of high density flash chips including error management, wear leveling, and page-erase issues. Modern SSD controllers implement error detection and correction (EDAC). Some implement a RAID-5 like arrangement among separate flash chips. Because modern SSD controllers have robust error detection it seems likely these drives will fail gracefully, with ample warnings via S.M.A.R.T. parameters. There are some reports that this is not the case.

    On a rotating drive the sector numbers in a SATA/PATA bus command refer to fixed physical sectors on the drive, so sector 0 (the MBR) is always the same physical location. For flash SSD drives this would cause excessive wear and failure at frequently written sectors. Instead, when a bus command causes a write, the location in flash for the data is chosen by a wear-leveling algorithm to avoid excessive writes to one region of flash. In addition the SSD controller must record the 'remap' location of the sector, since it is no longer a simple physical mapping.

    Drives with wear-leveling are rated in "Total Bytes Written" (TBW) limits, and this figure is usually directly proportional to the drive capacity. By late 2009 some high quality drives had a TBW limit of 100 times the drive capacity (for example 6TB of total writes to a 60GB drive). In a recent (11/2011) scan many good quality SSD drives advertise a TBW limit about 600 times drive capacity (e.g. 36TB of writes to a 60GB drive), and a few over 1000 times. Some large capacity SSD drives advertise a lower TBW factor as they are composed of multiple non-redundant units, where the unit TBW defines the system TBW.
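    As a worked example of what a TBW rating means in practice (the figures are the ones quoted above; the helper function is my own sketch): a 60GB drive rated for 36TB of writes, under a steady 20GB/day write load, lasts 36000/20 = 1800 days, roughly five years.

    ```shell
    # Estimate drive lifetime in days from a TBW rating and a daily write volume.
    # Usage: tbw_days <TBW in GB> <GB written per day>
    tbw_days() {
        echo $(( $1 / $2 ))
    }

    tbw_days 36000 20   # 36TB TBW at 20GB/day -> 1800 days (~5 years)
    ```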

    To avoid long delays while flash pages are erased - the controller must arrange to have clean erased flash pages available. So the controller must at times internally copy off any active data from a 'mostly written' flash page and then erase the page. In early SSD drives (until late 2009 or early 2010) this sort of "garbage collection", freeing up and erasure of pages was a major cause of drive performance slowing with use. In these older drives, most sectors of flash had been written, and the controller was ignorant of which sectors were no longer needed (deleted files, 'dead' data). So when a write occurred the controller needed to copy data off the mapped page, erase the page, modify the one written sector, and write the data back to the page. This is a problem called write amplification.

    To solve this problem a set of TRIM bus commands was developed so that the kernel can inform the drive when data sectors become unused, 'dead' or deleted. SSD drive controllers with TRIM support can track which sectors are no longer in use, and these controllers will often identify "mostly written, mostly dead" pages and erase them in the background. Even when background erasures aren't possible under heavy write loads, the TRIM feature greatly decreases the write amplification factor. TRIM is a huge performance advantage.

    Modern SSD controllers often have some added functionality.

    Most controllers that support TRIM ensure that any TRIM'ed or deleted sector can never be re-read. Instead all zeroes are substituted for the data content after the TRIM. The data may still be present in flash, but the controller refuses to make it available. This is intended as a security feature, so when you delete/TRIM data it is never again readable. The downside is that you cannot recover deleted files once TRIM'ed. The disk scanning tools are useless in such cases.

    Many SSD controllers with TRIM also support disk encryption in hardware. On some drives an external key is provided, but more commonly (today) there is a fixed key that encrypts all the data on the flash chips. IOW the data on the flash chips is quite secure - anything erased/TRIM'ed is effectively destroyed.

    Because current SSD drives are low power, and because the drives may be internally shuffling data for page erasure, etc, many are built with capacitor or 'super cap' energy storage to prevent a power outage from confusing the drive's internal state. The ability to complete all cached write operations reduces, but does not eliminate, the need for file system journals.

    Although most SSD storage today appears behind a SATA II or III or other disk storage interface, a growing trend is to place the SSD/flash directly on a system bus like PCI-express. The higher speed, lack of cables and generally simpler interface are advantages.

    As of this writing (1/2012) 25nm 64Gb (8GB) flash chips are in mass production and 20nm 128Gb (16GB) parts have been announced. There is good reason to believe that higher capacity and lower power SSD drives are on the horizon, however price predictions are not as optimistic.


    Because an SSD drive is merely a circuit board, there are in excess of 100 SSD drive manufacturers today. The main distinguishing features of SSD drives are their controller chips. Be sure to understand the features of the specific controller before investing.

  4. #4

    Re: Setting up and using SSD drives in Fedora Linux

    Steve has re-written this guide, and to avoid a lot of nonsense being appended to it, it has been made sticky and closed. If you have more questions about it, start a new thread in the other (non-Guides and Solutions) forums.

    Thanks, Stevea. <....>

  5. #5

    Re: Setting up and using SSD drives in Fedora Linux


    Stevea: This is getting some age on it. Are there any changes we need to make here? (via PM)

