  #61  
Old 27th November 2011, 04:28 PM
Dan
Administrator

Join Date: Jun 2006
Location: Paris, TX
Posts: 23,220
Re: SSD drives under Linux

Yeah. I found that a couple of hours ago, just hadn't edited yet.
  #62  
Old 27th November 2011, 10:51 PM
synic
Registered User

Join Date: Mar 2005
Location: Brisbane, Australia
Age: 42
Posts: 266
Off Topic

Quote:
Originally Posted by Dan
Well ... that resulted in a spectacular failure! <....>

....


Then it locks up tighter than a bull's bum in fly season.
Love your language. I'm not game to talk like that on the forums, even though that is how I feel.
__________________
Discourse and Diatribe
  #63  
Old 28th November 2011, 01:00 PM
StephenH
Registered User

Join Date: Jul 2004
Location: Wake Forest, NC
Age: 60
Posts: 1,359
Re: SSD drives under Linux

I'm leery of trying to add the discard option. The SSD I have is not one of the newer ones, and I don't think it supports the trim option. Just moving /tmp to tmpfs really sped things up, though. On the desktop, the first launch of Thunar after a reboot takes several seconds, and this is on a quad-core machine with plenty of RAM. On the netbook with a dual-core C50 processor and 1/4 the RAM, it is much faster.

The SSD is one I originally got for the EeePC. I got an adapter to use it with SATA in an Acer Aspire One 722. The model is SuperTalent FPM64GLSE SATA II Mini PCI-E MLC 64GB
__________________
StephenH

"We must understand the reality that just because our culture claims certain things are true it does not mean they are!" --M. Liederbach

http://pilgrim-wanderings.blogspot.com
  #64  
Old 23rd December 2011, 06:51 AM
swimfast
Registered User

Join Date: Dec 2011
Location: canada
Posts: 2
Re: SSD drives under Linux

I get "installation stuck" when trying to install on the SSD. Anything I can change?
  #65  
Old 25th December 2011, 02:09 PM
stevea
Registered User

Join Date: Apr 2006
Location: Ohio, USA
Posts: 8,716
Re: SSD drives under Linux

Quote:
Originally Posted by leigh123linux
There is no discard for ext2, didn't you read 'man mount' ?

Code:
man mount
The mount manpage is horrifically out of date.

Currently ext4, btrfs, fat, gfs2, xfs, and nilfs2 support the "discard" mount option.
So there is no ext2 support for discard.

You can manually trim btrfs, ext3, ext4, ocfs2 and xfs with userspace tools like 'fstrim'.

Why fstrim applies to ext3 and not ext2 is unclear.
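
For example, a mounted ext4 file system can be trimmed on demand along these lines (the /home mount point is just an illustration; adjust it to your system):
Code:
su -c 'fstrim -v /home'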



Quote:
Originally Posted by StephenH
I'm leery of trying to add the discard option. The SSD I have is not one of the newer ones, and I don't think it supports the trim option.
Not using discard when your controller supports it is really not smart - it means your drive will become terribly slow over time.

Quote:
The SSD is one I originally got for the EeePC. I got an adapter to use it with SATA in an Acer Aspire One 722. The model is SuperTalent FPM64GLSE SATA II Mini PCI-E MLC 64GB
Post #1 describes how to test your drive for TRIM (assuming it's got SMART technology).
Quote:
# su -c 'hdparm -I /dev/sda' | grep TRIM
* Data Set Management TRIM supported
* Deterministic read data after TRIM
The '*' asterisk means the feature is present.

I also doubt your drive supports TRIM.



Quote:
Originally Posted by swimfast
I get "installation stuck" when trying to install on the SSD. Anything I can change?

You should start a separate thread indicating the drive and system specifics and exactly when you get the error message.
__________________
None are more hopelessly enslaved than those who falsely believe they are free.
Johann Wolfgang von Goethe
  #66  
Old 14th February 2012, 07:18 PM
sloosley
Registered User

Join Date: Jul 2006
Location: Out West
Posts: 26
Re: SSD drives under Linux

I'm trying to implement Trim on an Intel 310 mSATA gen2, following the guide in this thread.

1) I verified that the drive supports Trim;

2) Edited my fstab file as follows:
Code:
/dev/mapper/vg_x220linux-lv_root /  ext4    noatime,discard,defaults,user_xattr   1 1
UUID=***** /boot                   ext4    noatime,discard,defaults        1 2
/dev/mapper/vg_x220linux-lv_home /home ext4    noatime,discard,defaults,user_xattr  1 2
/dev/mapper/vg_x220linux-lv_swap swap        swap    defaults        0 0
3) Finally, I rebooted and checked to verify that Trim is in fact working using this guide. (Googling verifies that this guide is correct.)

4) The results, however, are unsuccessful.
* The tempfile created is often all 0s. It should be random characters, not 0s.
* One tempfile that I created was random characters, but after deleting it, the space did not return to 0s, even after waiting a considerable time.

The bottom line is that either (a) Trim isn't working, or (b) I'm doing something incorrectly. Any advice would be welcome. Thanks
  #67  
Old 1st March 2012, 09:16 AM
stevea
Registered User

Join Date: Apr 2006
Location: Ohio, USA
Posts: 8,716
SSD drives under Linux

Some Practical Linux SSD drive Techniques
[Notation assumes drive '/dev/sda' is an SSD Flash drive.]


0/ Does your drive support TRIM ? [INFORMATIONAL]
Code:
su -c 'hdparm -I /dev/sda' | grep TRIM
	   *	Data Set Management TRIM supported
	   *	Deterministic read data after TRIM
Asterisk '*' above indicates the feature is present.

[Note: If your SSD drive does not support TRIM, then some Techniques do not apply].

1/ Align File Systems & Partitions. [PERFORMANCE]

Align partitions to boundaries that are a multiple of at least 4KiB (8 × 512 B sectors).
[Note: gparted defaults to 1MiB alignment]

Code:
$ su -c 'gdisk -l /dev/sda'
[...]
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048        63490047   30.3 GiB    0700  Linux/Windows data
   2        63490048        80295935   8.0 GiB     8200  Linux swap
   3        80295936       173000703   44.2 GiB    0700  Linux/Windows data
   4       173000704       234440703   29.3 GiB    0700
Each start sector above is a multiple of 8, so aligned to 4KiB.

2/ Choose a file system type that supports TRIM. [PERFORMANCE,TRIM-only]

The 3.0 kernel fully supports TRIM for the following file systems, when the 'discard' mount option is used.
btrfs, ext4, (v)fat, gfs2, nilfs2, xfs.

The following file systems can use the 'fstrim' utility to perform a trim of mounted filesystems.

btrfs, ext3, ext4, ocfs2 and xfs.

A swap block device will use TRIM automatically, without configuration, if available.

The block manager used by LVM and mdadm (software RAID) supports TRIM through to the device (since the 2.6.37 kernel).
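
If your SSD sits under LVM, the lvm.conf 'issue_discards' switch (present in recent LVM2 releases) is also worth knowing about; it passes discards down to the drive when logical volumes are removed or reduced. A sketch of the stanza:
Code:
# /etc/lvm/lvm.conf
devices {
    issue_discards = 1
}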

3/ Know your filesystem. [WEAR, btrfs optimization]

btrfs has an "ssd" mount option that reduces the rate of mirrored superblock & metadata updates.
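
A hypothetical fstab line using that option (the device, mount point and companion options are placeholders to adapt to your system):
Code:
/dev/sda4  /data  btrfs  ssd,noatime,discard  0 0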

4/ File System block size of at least 4KiB should be selected. [PERFORMANCE]

btrfs, ext3, ext4, gfs2, nilfs2, xfs - default is 4KiB block size.

ext3,ext4: blocksize is controlled by /etc/mke2fs.conf.
ocfs2: 4KiB is the minimum.
vfat: see mkfs.vfat options -s and -S.
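
For example, the block size can be given explicitly when the file system is created (/dev/sda3 here is only a placeholder; point it at your actual partition):
Code:
su -c 'mkfs.ext4 -b 4096 /dev/sda3'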

5/ Never use defragmentation on an SSD drive. [WEAR]

SSD defragmentation generates writes which wear the drive, and does not significantly improve performance.

6/ Never use 'shred' or similar utilities to scrub files. [WEAR]

Utilities such as 'shred' overwrite a file repeatedly. SSD writes are NOT made to the same physical block, so these methods cause wear with no advantage. 'rm' is sufficient.

7/ Eliminate "Elevator" disk scheduler. [PERFORMANCE, RELIABILITY]

Rotating disks use the 'elevator' algorithm to schedule block access. This algorithm makes no sense for SSD media, where there are no cylinders and seek time is nearly constant.

Individual drives can use distinct scheduling algorithms by adding the following line to /etc/rc.d/rc.local
echo noop > /sys/block/sda/queue/scheduler

This causes operations to complete more nearly in-order, which can be an advantage for crash recovery.
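
You can confirm which scheduler is active for a given drive - it is the one shown in square brackets. The output will look something like:
Code:
$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]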

8/ Reduce SSD Write Activity. [WEAR]
This is a list of methods to reduce the amount of SSD drive writes and therefore reduce wear and increase drive lifespan. Each method has both PROs and CONs. These techniques are listed roughly in order from least controversial, to most controversial.

8A/ Minimize file and directory access-time writes.

POSIX filesystems record access time (atime) for files. Storing access time means that each read or write of a file or directory causes an eventual metadata write operation. These access-time writes can be reduced in three distinct ways with the 'relatime', 'nodiratime' and 'noatime' mount options.

i) 'relatime' causes the file system to update the access time only as needed to keep it in the correct relative order, and is the mount default for recent Fedora. A few legacy applications like 'mutt' require this behavior. This option gives a moderate reduction in metadata writes.

ii) 'nodiratime' causes the file system to not update access times for directory entry accesses.

iii) 'noatime' prevents atime writes for both directories and files, and effectively eliminates all atime writes.

PRO: Each method reduces writes and therefore drive wear.
CON: Each method reduces the amount of atime information.

RECOMMENDATION: Unless you use mutt or similar, mounting with 'noatime' is strongly recommended for SSD drives.
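
A hypothetical fstab entry for an SSD file system using this option (the UUID is shortened as a placeholder, and 'discard' applies only if your drive supports TRIM):
Code:
UUID=308[...]d56  /home  ext4  noatime,discard  1 2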

--
8B/ Reduce log writes to SSD.

The system logger maintains numerous continually-growing log files, and these can cause voluminous writes. Alternatives are to reduce the volume of messages, to write logs to a rotating drive, to send log messages over the net to another system for recording, or to write logs to a ramdisk.

i) Reduce the volume of log messages.
Accomplished by raising the severity threshold of messages logged, for example from 'info' and above to 'warn' and above, in the /etc/rsyslog.conf file. See "man rsyslog.conf" for details.

PRO: reduces log volume
CON: reduces log information content
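
As a sketch, such a rule might look like the line below; check your distribution's existing rules before changing them, since this assumes a simple setup:
Code:
# /etc/rsyslog.conf - log only messages of severity 'warn' and above
*.warn    /var/log/messages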

ii) Write logs to a rotating drive.
An fstab entry similar to ....
/dev/sdb4 /var/log ext4 noatime 0 0
causes the device to be mounted on top of the /var/log directory; however, this mount will occur after rsyslog has already opened some log files. So after the mount (perhaps in /etc/rc.d/rc.local) it is necessary to send rsyslogd a SIGHUP signal to make it close and reopen its log files.

/bin/kill -s HUP $(cat /var/run/syslogd.pid)

PRO: completely eliminates log writes to SSD
CON: requires a separate rotating disk

iii) Send logs over the net.
There are three transport protocols supported by current rsyslogd; UDP, TCP and RELP. Configuration is discussed in "man rsyslog.conf". The receiving system firewall and rsyslogd must be configured to accept the log messages.

PRO: completely eliminates log writes to SSD
CON: requires a separate log server
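
A minimal forwarding rule might look like this; the host name is a placeholder, '@@' selects TCP and a single '@' would select UDP:
Code:
# /etc/rsyslog.conf - forward all messages to a remote log host over TCP
*.*    @@loghost.example.com:514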

iv) Write logs to a ramdisk
An fstab entry similar to ....
none /var/log tmpfs size=10% 0 0
and a /etc/rc.d/rc.local line similar to
/bin/kill -s HUP $(cat /var/run/syslogd.pid)
allocate a maximum of 10% of DRAM for use by logs (the SIGHUP makes rsyslogd close and reopen its log files on the new tmpfs).

PRO: completely eliminates log writes to SSD
CON: uses some DRAM. If the DRAM allocation fills, then logging will cease. On a crash or reboot, all log messages are lost.
COMMENT: Despite the considerable CONs this method is practical.

RECOMMENDATION: Technique 'i' is recommended without reservation and technique 'iv' is suggested as an aggressive approach.

--
8C/ Configure browser to use memory caches.

For firefox, use the URL "about:config" and set
browser.cache.disk.enable false
browser.cache.memory.capacity 65536

or
browser.cache.memory.max_entry_size 65536

Similar for chrome browser.

PRO: No apparent performance deficit.
CON: Uses a small amount of DRAM.

RECOMMENDATION: Unqualified recommendation.

--
8D/ Move /tmp to ramdisk

Add this line to /etc/fstab
none /tmp tmpfs size=10% 0 0
where 10% is the maximum fraction of DRAM available to /tmp.

PRO: All /tmp writes avoid SSD wear.
CON: /tmp may fill; /tmp content, though generally unimportant, is lost after reboot.

RECOMMENDATION: recommended

--
8E/ Consider journals for ext3, ext4.

WARNING: *** DO NOT run without a journal until you understand the potential hazards.

Journals require considerable additional disk I/O so they cause wear and performance degradation.

File system journals allow for clean(er) crash recovery. On the boot after an 'unclean' shutdown, your Linux system will fsck the file systems and attempt to mount them and proceed automatically. This is generally successful with journaling in place. Running without a journal makes catastrophic file system failure more probable, though still unlikely.

Two approaches:

i) Eliminate journals altogether (*controversial*)
Some filesystems may be easily reconstructed, making risk acceptable.

To eliminate journalling, do this to an unmounted or read-only ext3/ext4
tune2fs -O ^has_journal /dev/sda1

ii) Reduce the amount of journal writes
Mount each SSD ext3 or ext4 file system with the "data=writeback" option. This means the file system will only journal metadata.

The journaling mode of ext3 and ext4 file systems cannot be changed after mount. So 'regular' file systems like /home can have their journaling mode set in the fstab like...

UUID=308[...]d56 /home ext4 discard,noatime,data=writeback 0 0

But since the root '/' is mounted by the initramfs, the parameter MUST be included in the grub.cfg configuration file, like,

linux /boot/vmlinuz- ... elevator=noop rootflags=data=writeback

[WARNING: **** for Linux 3.0 and later kernels, setting the ext3/4 root file system option "data=..." in fstab may prevent the remount and may cause boot failure].

RECOMMENDATION: Method 'ii' is safe and has performance and wear advantages. Method 'i' (journal elimination) should only be used when file system reconstruction is available.
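
After removing a journal with tune2fs you can confirm it is gone; 'has_journal' should no longer appear in the feature list (adjust the device to your system):
Code:
su -c 'tune2fs -l /dev/sda1' | grep features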

(continued)
__________________
None are more hopelessly enslaved than those who falsely believe they are free.
Johann Wolfgang von Goethe

Last edited by stevea; 1st March 2012 at 09:44 AM.
  #68  
Old 1st March 2012, 09:17 AM
stevea
Registered User

Join Date: Apr 2006
Location: Ohio, USA
Posts: 8,716
Re: SSD drives under Linux

(continuation)

9/ Do we really need or want SWAP on an SSD ? [WEAR]
Linux Swap has two important uses; page swapping and suspending to disk as used by hibernate.

When the system is running low on memory and more is requested, the kernel can free some pages by writing them to swap. When there is no swap, or when swap is filled, the kernel must kill some process and harvest its pages. Page swapping means that all processes can continue (till swap is filled), but at a very slow pace. Running out of swap means that processes are killed and the system behavior will seem erratic. For an end-user interactive system, the poor swap performance may make the system unusable.

In the case of hibernate, a laptop or mobile device can copy its memory image to swap, arrange to reload that image at reboot, and remove system power.

RECOMMENDATION:
All - preferentially put swap on a rotating disk if available.
Server - swap is necessary to avoid erratic behavior.
Laptop - if hibernate is needed, then an SSD swap partition is required.
Interactive PC - swap is practical, but if adequate DRAM is available, consider using no swap.

Adding this to /etc/sysctl.conf
vm.oom_kill_allocating_task = 1
causes the OOM killer to kill the process requesting extra memory (less erratic).
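
The setting can also be applied immediately, without a reboot, along these lines (the /etc/sysctl.conf entry keeps it persistent):
Code:
su -c 'sysctl -w vm.oom_kill_allocating_task=1'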



10/ Using an SSD as a general disk cache. [PERFORMANCE]

See:
http://bcache.evilpiepirate.org/
http://lwn.net/Articles/394672/
http://arstechnica.com/civis/viewtop...f=16&t=1113403

These are Linux kernel patches that allow one to use an SSD drive as a sort of 'cache' for rotating storage. This can dramatically improve the average storage I/O performance of the system. The goal is improved system performance, at the cost of substantial SSD drive wear. I have not tested these.

--end
__________________
None are more hopelessly enslaved than those who falsely believe they are free.
Johann Wolfgang von Goethe

Last edited by stevea; 1st March 2012 at 09:21 AM.
  #69  
Old 1st March 2012, 09:30 AM
stevea
Registered User

Join Date: Apr 2006
Location: Ohio, USA
Posts: 8,716
Re: SSD drives under Linux

Background Discussion of SSD drive Issues

Basic FAQ questions are answered here:
http://en.wikipedia.org/wiki/Solid-state_drive
http://en.wikipedia.org/wiki/Flash_memory

Modern flash SSD drives have several characteristics.
Code:
PRO:
  Fast read/write speed.
  Ignorably small and constant-valued seek times.
  Very low total power consumption (often <2W peak).
  Lightweight, zero noise.
  Tolerant of harsh environments (temps, shock, vibration) 
CON:
  A limited number of writes per cell.  Current high density MLC chips are rated at only 3000 write cycles !
  Cell errors will develop as the drive is used.
  An entire page (often 128KB to 1MB of storage) must be erased at one time and this is a slow process (10s of milliseconds).
  High cost per unit storage (currently ~$1.50USD/GiB)
An erased page of flash is set to a fixed state (imagine all zero bits). New data may be written anywhere in the page, even in small units. It's possible to change any "zero" bit to a "one", but once a bit is written to a "one" it cannot be returned to a "zero" except when the entire large page (128KB-1MB) is erased.

The SSD controllers must account for the limitations of high density flash chips including error management, wear leveling, and page-erase issues. Modern SSD controllers implement error detection and correction (EDAC). Some implement a RAID-5 like arrangement among separate flash chips. Because modern SSD controllers have robust error detection it seems likely these drives will fail gracefully, with ample warnings via S.M.A.R.T. parameters. There are some reports that this is not the case.

On a rotating drive the sector numbers in a SATA/PATA bus command for I/O refer to fixed physical sectors on the drive. So sector 0 (the MBR) is always the same physical location on the drive. For flash SSD drives this would cause excessive wear and failure at frequently written sectors. Instead, when a bus command causes a write, the location in flash for the data is determined by a wear-leveling algorithm that avoids excessive writes to one region of flash. In addition the SSD controller must record the 'remap' location of the sector, since it is no longer a simple physical mapping.

Drives with wear-leveling are rated in "Total Bytes Written" (TBW) limits and this figure is usually directly proportional to the drive capacity. By late 2009 some high quality drives had a TBW limit of 100 times the drive capacity (for example 6TB of total writes to a 60GB drive). In a recent (11/2011) scan many good quality SSD drives advertise a TBW limit about 600 times drive capacity (e.g. 36TB of writes to a 60GB drive), and a few over 1000 times. Some large capacity SSD drives advertise a lower TBW factor as they are composed of multiple non-redundant units, where the unit TBW defines the system TBW.

To avoid long delays while flash pages are erased - the controller must arrange to have clean erased flash pages available. So the controller must at times internally copy off any active data from a 'mostly written' flash page and then erase the page. In early SSD drives (until late 2009 or early 2010) this sort of "garbage collection", freeing up and erasure of pages was a major cause of drive performance slowing with use. In these older drives, most sectors of flash had been written, and the controller was ignorant of which sectors were no longer needed (deleted files, 'dead' data). So when a write occurred the controller needed to copy data off the mapped page, erase the page, modify the one written sector, and write the data back to the page. This is a problem called write amplification.

To solve this problem a set of TRIM bus commands was developed so that the kernel can inform the drive when data sectors become unused, 'dead' or deleted. SSD drive controllers with TRIM support can track which sectors are no longer in use, and these controllers will often identify "mostly written, mostly dead" pages and erase them in the background. Even when background erasures aren't possible under heavy write loads, the TRIM feature greatly decreases the write amplification factor. TRIM is a huge performance advantage.

---
Modern SSD controllers often have some added functionality.

Most controllers that support TRIM ensure that any TRIM'ed or deleted sector can never be re-read. Instead all zeroes are substituted for the data content after the TRIM. The data may still be present in flash, but the controller refuses to make it available. This is intended as a security feature, so when you delete/TRIM data it is never again readable. The downside is that you cannot recover deleted files once TRIM'ed. The disk scanning tools are useless in such cases.

Many SSD w/ TRIM controllers support disk encryption in hardware. On some drives an external key is provided, but more commonly (today) there is a fixed key that encrypts all the data on the flash chips. IOW the data on the flash chips is quite secure - anything erased/TRIM'ed is effectively destroyed.

Because current SSD drives are low power, and because the drives may be internally shuffling data for page erasure, etc., many are built with capacitor or 'super cap' energy storage to prevent the possibility of a power outage corrupting the drive's internal state. The ability to complete all cached write operations reduces, but does not eliminate, the need for file system journals.

Although most SSD storage today appears with a SATA II or III or other disk storage interface, a growing trend is to place the SSD/flash directly on a system bus like PCI Express. The higher speed, lack of cables and generally simpler interface are advantages.

As of this writing (1/2012) 25nm 64Gb (8GB) flash chips are in mass production and 20nm 128Gb (16GB) parts have been announced. There is good reason to believe that higher capacity and lower power SSD drives are on the horizon, however price predictions are not as optimistic.

--

Because an SSD drive is merely a circuit board, there are in excess of 100 SSD drive manufacturers today. The main distinguishing features of SSD drives are their controller chips. Be sure to understand the features of the specific controller before investing.
__________________
None are more hopelessly enslaved than those who falsely believe they are free.
Johann Wolfgang von Goethe

Last edited by stevea; 1st March 2012 at 09:48 AM.
  #70  
Old 1st March 2012, 10:07 AM
tux4me
Registered User

Join Date: Nov 2008
Location: Denmark
Age: 47
Posts: 96
Re: SSD drives under Linux

Love this thread, thanks.

I'm about to build an i7-based computer using the Z68 chipset.

Then I ran into this information:

Intel's Smart Response Technology on a Z68 motherboard implements storage I/O caching to provide users with faster response times for things like system boot and application startup.
Intel's technology recognises frequently accessed data and copies it to the SSD (SSD caching).


I'm filled with nOObish questions about this, but will spare you those, and just hope for a comment on this new use of mSATA SSDs.
  #71  
Old 1st March 2012, 11:12 AM
stevea
Registered User

Join Date: Apr 2006
Location: Ohio, USA
Posts: 8,716
Re: SSD drives under Linux

Quote:
Originally Posted by tux4me
Love this thread, thanks.

I'm about to build an i7-based computer using the Z68 chipset.

Then I ran into this information:

Intel's Smart Response Technology on a Z68 motherboard implements storage I/O caching to provide users with faster response times for things like system boot and application startup.
Intel's technology recognises frequently accessed data and copies it to the SSD (SSD caching).


I'm filled with nOObish questions about this, but will spare you those, and just hope for a comment on this new use of mSATA SSDs.
The mSATA SSDs I've seen seem to have rather slow write times. I think this speaks to a simpler controller and less on-board DRAM on the tiny form-factor boards. Aside from that they look like other SSD drives.

It appears that the Smart Response Technology (SRT) is just a software driver locked to the Z68.
http://www.anandtech.com/Show/Index/...caching-review

So the kernel patches mentioned above (item 10/) seem a more general approach. bcache and Flashcache are both under development, but neither is ready for general use (AFAIK).
__________________
None are more hopelessly enslaved than those who falsely believe they are free.
Johann Wolfgang von Goethe
  #72  
Old 1st March 2012, 12:21 PM
tux4me
Registered User

Join Date: Nov 2008
Location: Denmark
Age: 47
Posts: 96
Re: SSD drives under Linux

Thanks, and sorry for my poor research .... my mind hoped that it was hardware-based :/
Further .... seeing that it is only for the Z68 on Windows makes me blush... wasting your time.

Well, I'm getting it for the graphics, so no loss there. Thanks for clearing things up.
  #73  
Old 11th March 2012, 03:30 AM
PaulBx1
Registered User

Join Date: Mar 2012
Location: Western US
Posts: 13
Re: SSD drives under Linux - discussion thread

Thanks for this thread, very interesting. A few comments...

1) /tmp
I was running Puppy Linux for a long time which had a very small tmpfs for /tmp. It would hang when watching long youtubes. Someone discovered a fix: get out of X (ctrl-backspace), mount some unused disk partition at /tmp, then restart X. Then we could watch those long youtubes. A question I always had about it but never had the knowledge to answer was this: Is the disk partition REPLACING the small tmpfs or is it adding to it, such that the tmpfs is used first and when full, the data then goes to the disk partition? If the latter, this suggests a strategy: Have a tmpfs-based /tmp for performance and for keeping extraneous writes off the SSD, but have a second SSD-based /tmp for taking that occasional long download and generally for avoiding locks. Might be worth investigating. Of course if you know there is some giant file in /tmp that you are now done with, you can simply manually delete it, thus making /tmp accesses go back into the tmpfs again (I think).

2) Alignment
I have read this alignment stuff until I am blue in the face. There is added complication with LVM and encryption. First, is there a way, some tool, to really discover what sector the data starts on when you have LVM over LUKS? Second, I have seen some people worrying about the LVM header being 192k. Most people I have read mentioning this header say you can make it larger using the "pvcreate --dataalignment" parameter, kicking it up to 256k or 512k so your data is still aligned. But that is people who think something greater than a 4k alignment (as advocated here) is needed. 192k is actually divisible by 4096 so maybe there is nothing to worry about? And who knows what the installer does with the pvcreate command... For cryptsetup people advocate using "cryptsetup --align-payload", again apparently to get around the header shifting things off by some amount. Again, big question what the installer does. I don't know what to think of these concerns. In fact I'm still not convinced alignment is a problem at all, beyond (perhaps) a 4k cluster-based alignment (or whatever the OS minimum disk IO size is). But unless there is a way to find where the data areas actually start on disk (after being dicked with by LUKS and LVM) there is no way to see if we really have alignment or not, and to what size.

3) Swap
I have run Puppy Linux for years without swap. Swap is kinda old-fashioned tech back from when memory was expensive. Maybe people ought to buy some more memory and just turn swap off.

4) Journaling
I also ran Puppy for years on ext2. I guess journals are sometimes looked at as an excuse not to do backups! But if you have a UPS and are careful with backups, maybe you don't need one. Particularly if you also have spinning storage and can routinely and frequently do backups to it in the background.

Last edited by PaulBx1; 11th March 2012 at 03:47 AM.
  #74  
Old 11th March 2012, 04:11 AM
jpollard
Registered User

Join Date: Aug 2009
Location: Waldorf, Maryland
Posts: 6,788
Re: SSD drives under Linux - discussion thread

#1) tmpfs is memory - the more you allocate to it, the less main memory you have. And deadlocks do happen.

#3) If you suspend, you need a swap large enough to hold it. Otherwise you can't suspend.

#4) Journaling is for failures - power, static, whatever can cause an unexpected shutdown. It doesn't matter which filesystem - if a journal is available, you use it to prevent filesystem corruption, not to avoid backups.

And UPS units do fail - as do power supplies. Cables can come loose...
  #75  
Old 11th March 2012, 04:20 AM
PaulBx1
Registered User

Join Date: Mar 2012
Location: Western US
Posts: 13
Re: SSD drives under Linux - discussion thread

Wow, no kidding? tmpfs is memory? I never would have guessed.