FedoraForum.org

FedoraForum.org (http://forums.fedoraforum.org/index.php)
-   Guides & Solutions (No Questions) (http://forums.fedoraforum.org/forumdisplay.php?f=12)
-   -   SSD drives under Linux - discussion thread (http://forums.fedoraforum.org/showthread.php?t=256068)

stevea 29th June 2011 11:32 AM

Re: SSD drives under Linux
 
Zenous - normally an SSD drive will be recognized just as any other disk.
Try running gparted to create/view the partition table, or
ls -l /dev/disk/by-id
to see the recognized disks.
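
To confirm the kernel is actually treating the device as an SSD, you can also check the rotational flag in sysfs. A minimal sketch (the sysfs layout is standard Linux; the helper name and the optional sysfs-root argument are mine, added only to make it easy to test):

```shell
# is_ssd: report whether a block device is non-rotational (an SSD).
# First argument is the device name (e.g. sda); the optional second
# argument overrides the sysfs root, which is useful for testing.
is_ssd() {
    sysfs="${2:-/sys}"
    rot=$(cat "$sysfs/block/$1/queue/rotational")
    if [ "$rot" = "0" ]; then
        echo "$1: SSD (non-rotational)"
    else
        echo "$1: rotational disk"
    fi
}
```

e.g. run `is_ssd sda` on a machine whose SSD is sda.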

---------- Post added at 06:32 AM ---------- Previous post was at 06:15 AM ----------

Quote:

Originally Posted by leigh123linux (Post 1485477)
data=writeback caused issues here.

That's puzzling. I've been using:
Quote:

UUID=ffc6a6fc-1bf3-4f25-ab17-067b0515e85a / ext4 noatime,discard,data=writeback 1 2
UUID=24941fe7-6d23-4969-951e-0872d4b81b0e /home ext4 noatime,discard,data=writeback 1 2
UUID=2bbebe51-c0f8-4296-ba0a-e715bf84dc67 swap swap defaults 0 0
without issues for ~7 months. When booting the newest kernels (3.0.0-rc2 and rc3) there are some remount issues - but this is unrelated to SSD.


No synic, "discard" does not imply any "data=..." journal policy.

leigh123linux 29th June 2011 01:41 PM

Re: SSD drives under Linux
 
Quote:

Originally Posted by stevea (Post 1490321)
Zenous - normally an SSD drive will be recognized just as any other disk.
Try running gparted to create/view the partition table, or
ls -l /dev/disk/by-id
to see the recognized disks.

---------- Post added at 06:32 AM ---------- Previous post was at 06:15 AM ----------



That's puzzling. I've been using:


without issues for ~7 months. When booting the newest kernels (3.0.0-rc2 and rc3) there are some remount issues - but this is unrelated to SSD.


No synic, "discard" does not imply any "data=..." journal policy.

It was a few months back when I was using the default and 2.6.27 kernels, since then I haven't tried it.


P.S. rc5 is out ;)

Code:

[leigh@main_pc ~]$ uname -r
3.0-0.rc5.git0.1.el6.x86_64
[leigh@main_pc ~]$


stevea 29th June 2011 03:40 PM

Re: SSD drives under Linux - efficient partioning
 
Quote:

Originally Posted by synic (Post 1484144)
...

I have a thread F15 - 64g SSD - partitioning / space usage suggestion and am wondering about the best performance I can get out of a 64g SSD?

I know I need swap, / and /home but am wondering about the location of /var and /usr and /tmp?

Swap will use TRIM (discard) automatically. Swap blocks are used if your system runs low on DRAM (and then it may slow the system dramatically). The swap partition is also used for hibernation (mostly on laptops).

The PROs for swap on SSD are that it will have about the best performance possible in the bad situation when you are low on DRAM, but it will still be slow. The CONs are that you are wearing out your SSD drive. As a general philosophy I will suggest that if your (non-server) system swaps often, it's time to buy more DRAM. Swapping performance will be poor in any case, so it's reasonable to put swap on a non-SSD volume (if you have one). We probably don't care much about optimizing I/O if we are swapping memory pages.

On F15 /var/log and /tmp can be made from tmpfs with fstab lines like
Quote:

tmpfs /tmp tmpfs defaults 0 0
tmpfs /var/log tmpfs defaults 0 0
If you have another non-SSD drive then this would be a good location for the /tmp and the entire /var directory.

/usr contains libraries and binaries where you want a fast disk - so this belongs on SSD.


Quote:

Silly question maybe but can I specify to use /tmp in swap?
No - the entire swap may be used for paging. Better to put /tmp on a tmpfs.

Quote:

Or is this post over at SSDReviewForums specifying the exact issue? It looks very interesting and I think I understand the basics but I have to admit it is all starting to look double dutch to me. If I am going to specify sending /tmp to /tmpfs, and with only 2 gig ram, then am I going to want to specify a much larger swap than the 3 gig I have already partitioned? Maybe even 4 or 5 gig. I don't really care about the life span of a 64g ssd. I just want to wring as much system speed out of it as possible.
The methods suggested on that page are questionable. They push /tmp to tmpfs (good), but then they use noatime on /tmp (on tmpfs), which doesn't save any I/O - it just saves a couple of memory accesses. They use the deadline scheduler rather than noop, which seems a wasteful calculation in the scheduler.

/tmp holds many trivially small files and also acts as a temporary store for browser downloads (.pdfs, .jpgs, and even streaming vids). Altogether these rarely add up to more than 100MB (ymmv). Since tmpfs is DRAM-backed, /tmp is deleted/cleaned at every reboot. You could also use the "size=10%" tmpfs option to prevent it from getting too large. It's your choice: A/ limit the size of /tmp on tmpfs and accept possible "/tmp is full" errors; B/ let /tmp on tmpfs fill (rare), causing potential swapping and potential "out of swap" OOM kills; C/ leave /tmp on SSD and accept slower speed (than DRAM) and more SSD wear, but little chance of filling.
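
For option A/ above, the capped fstab line might look like this (the 10% figure is just the example from the text; pick a limit to suit your RAM):

```
tmpfs /tmp tmpfs defaults,size=10% 0 0
```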

If you are looking for speed you'd better buy some DRAM at the first signs of swapping.

---------- Post added at 10:40 AM ---------- Previous post was at 09:00 AM ----------

Quote:

Originally Posted by leigh123linux (Post 1490368)
It was a few months back when I was using the default and 2.6.27 kernels, since then I haven't tried it.


P.S rc5 is out ;)

Same problem applies to 3.0.0-rc5. Here is the issue.

You cannot set the journal mode (the data=... parameter) for an ext3 or ext4 file system once it is mounted, via a remount option. You actually have to umount and then mount. In 2.6.35-era kernels this prohibition was ignored if the file system was mounted read-only. On 3.0 kernels the restriction is enforced, causing the remount to fail.

So if you put a "data=writeback" in fstab for "/" then any remount (that picks up the fstab params) will fail with a 3.0 kernel.

Normally during boot the initramfs mounts root read-only and with any parameters from the kernel command line. Then after the pivot a command like:
mount -o remount,rw /
occurs. But this form of the command picks up the fstab parameters. If fstab has "data=..." for root and the kernel is 3.0 then the remount fails and root remains read-only. This causes many things to fail.

The correct solution is to specify the root file system journal mode in the /boot/grub/grub.conf file on the kernel line, by appending:
kernel .... rootflags=data=writeback
and removing any "data=..." from the fstab for the root file system only.

===

SYNOPSIS:
DO NOT use "data=..." options in /etc/fstab for the root "/" file system.
INSTEAD use "rootflags=data=..." appended to kernel line in grub.conf
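
For concreteness, the kernel line would end up looking something like this (the kernel version, UUID, and other arguments below are placeholders for illustration only, not real values):

```
# /boot/grub/grub.conf (fragment) - placeholder values
kernel /vmlinuz-3.0.0 ro root=UUID=<your-root-uuid> rootflags=data=writeback rhgb quiet
```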

zenous 30th June 2011 03:58 AM

Re: SSD drives under Linux
 
Quote:

Originally Posted by stevea (Post 1490321)
Zenous - normally an SSD drive will be recognized just as any other disk.
Try running gparted to create/view the partition table, or
ls -l /dev/disk/by-id
to see the recognized disks.

It detects my OCZ SSD, but why can't I install on the disk?

lrwxrwxrwx. 1 root root 9 Jun 30 10:49 ata-OCZ-VERTEX2_OCZ-1N916OJ8CDPH29KU -> ../../sda
lrwxrwxrwx. 1 root root 10 Jun 30 10:49 ata-OCZ-VERTEX2_OCZ-1N916OJ8CDPH29KU-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Jun 30 10:49 dm-name-live-osimg-min -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Jun 30 10:49 dm-name-live-rw -> ../../dm-0
lrwxrwxrwx. 1 root root 9 Jun 30 10:49 scsi-SATA_OCZ-VERTEX2_OCZ-1N916OJ8CDPH29KU -> ../../sda
lrwxrwxrwx. 1 root root 10 Jun 30 10:49 scsi-SATA_OCZ-VERTEX2_OCZ-1N916OJ8CDPH29KU-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 9 Jun 30 10:49 usb-_USB_FLASH_DRIVE_07B30507078257CF-0:0 -> ../../sdb
lrwxrwxrwx. 1 root root 10 Jun 30 10:49 usb-_USB_FLASH_DRIVE_07B30507078257CF-0:0-part1 -> ../../sdb1
lrwxrwxrwx. 1 root root 9 Jun 30 10:49 wwn-0x5e83a97f925f522f -> ../../sda
lrwxrwxrwx. 1 root root 10 Jun 30 10:49 wwn-0x5e83a97f925f522f-part1 -> ../../sda1

BTW, do I have to set the SATA mode to IDE? Currently it is set to AHCI.
How do I enable AHCI for the SSD?

stevea 30th June 2011 01:53 PM

Re: SSD drives under Linux
 
Quote:

Originally Posted by zenous (Post 1490628)
It detects my OCZ SSD, but why can't I install on the disk?

[...]

BTW, do I have to set the SATA mode to IDE? Currently it is set to AHCI.
How do I enable AHCI for the SSD?

AHCI is a set of advanced features, including a command queueing strategy, that must be enabled in the BIOS.

If you change from AHCI to non-AHCI, the disk contents may become inaccessible - similar to what you have shown. All SSDs I'm aware of support AHCI, and some require AHCI to perform firmware updates and other operations.

I suggest you set AHCI mode in the BIOS, then use gparted or similar to re-format, set a new partition table, etc.

zenous 1st July 2011 12:47 AM

Re: SSD drives under Linux
 
Quote:

Originally Posted by stevea (Post 1490755)
AHCI is a set of advanced features, including a command queueing strategy, that must be enabled in the BIOS.

If you change from AHCI to non-AHCI, the disk contents may become inaccessible - similar to what you have shown. All SSDs I'm aware of support AHCI, and some require AHCI to perform firmware updates and other operations.

I suggest you set AHCI mode in the BIOS, then use gparted or similar to re-format, set a new partition table, etc.

I will try with gparted today, thanks.

synic 9th July 2011 07:48 AM

Re: SSD drives under Linux - efficient partioning
 
Quote:

Originally Posted by stevea (Post 1490374)
On F15 /var/log and /tmp can be made from tmpfs with fstab lines like

Better to put /tmp on a tmpfs.

The methods suggested on that page are questionable. They push /tmp to tmpfs (good), but then they use noatime on /tmp (on tmpfs), which doesn't save any I/O - it just saves a couple of memory accesses. They use the deadline scheduler rather than noop, which seems a wasteful calculation in the scheduler.

/tmp holds many trivially small files and also acts as a temporary store for browser downloads (.pdfs, .jpgs, and even streaming vids). Altogether these rarely add up to more than 100MB (ymmv). Since tmpfs is DRAM-backed, /tmp is deleted/cleaned at every reboot. You could also use the "size=10%" tmpfs option to prevent it from getting too large. It's your choice: A/ limit the size of /tmp on tmpfs and accept possible "/tmp is full" errors; B/ let /tmp on tmpfs fill (rare), causing potential swapping and potential "out of swap" OOM kills; C/ leave /tmp on SSD and accept slower speed (than DRAM) and more SSD wear, but little chance of filling.

Should I add the line:

Code:

none /tmp tmpfs defaults 0 0
to my fstab before specifying

Code:

tmpfs /tmp tmpfs defaults 0 0
tmpfs /var/log tmpfs defaults 0 0

Or will fstab recognise that as the same thing, effectively negating the need for
Code:

none /tmp tmpfs defaults 0 0
AND

Is specifying

Code:

tmpfs /var/log tmpfs defaults 0 0
in fstab going to impact the rest of the /var directory in a negative manner? Or is that effectively telling the kernel not to write /var/log to disk, just to keep it in RAM?

jpollard 9th July 2011 03:52 PM

Re: SSD drives under Linux
 
Putting /var/log on tmpfs will use up your memory, and can cause a deadlock when it fills due to some error, or external attack (on something that logs attempted intrusions), or even bad network connections.

The next problem is that if you are forced to reboot due to something, you have no record of what it was. This can happen due to bad updates that cause X to fail - it can generate a LOT of log entries as things go south. Without the logs you won't be able to fix it.

stevea 9th July 2011 07:45 PM

Re: SSD drives under Linux
 
Please read post #5, 8, 18, 19. You have now repeatedly brought up the same off-topic comment in a sticky thread about configuration for SSDs.

Quote:

Originally Posted by jpollard (Post 1493668)
Putting /var/log on tmpfs will use up your memory, and can cause a deadlock when it fills due to some error, or external attack (on something that logs attempted intrusions), or even bad network connections.

No - filling the mountpoint for /tmp or /var/log does NOT cause a deadlock (tested). Despite your opinion, putting such directories on tmpfs has long and widely been practiced in Unix/Linux/BSD. The obvious limitations apply (loss of data at umount, use of DRAM). Example: the state below does not prevent login or cause any deadlock.

Quote:

[root@crucibulum var]# df
Filesystem     1K-blocks   Used  Available Use% Mounted on
...
none               79876  79876          0 100% /tmp
none               79876  79876          0 100% /var/log

...


Quote:

The next problem is that if you are forced to reboot due to something, you have no record of what it was. This can happen due to bad updates that cause X to fail - it can generate a LOT of log entries as things go south. Without the logs you won't be able to fix it.
No - you are not forced to reboot. You can either review the logs from a virtual console, OR you can revise fstab, reboot, and review the new logs from SSD. You ignore that noisy logs (noisy X, noisy video drivers) can write megabytes of fluff per hour. You don't want that on your SSD.

I agree that pushing data onto tmpfs is not ideal for every application, as it comes at the OBVIOUS cost of losing data at umount and using some memory. But you seem to ignore the main issue: writing to SSD wears out a very expensive resource early. Using DRAM for /tmp and /var/log (and /var/run and /var/tmp) is a not-unreasonable approach to avoiding excessive SSD writes. Google will show you that many sources (including Fedora developers and kernel developers) have addressed this, including adding SELinux contexts to tmpfs.

I'll also suggest you study the F15 mount defaults which use tmpfs for many, admittedly tiny directories ...
Quote:

[root@lycoperdon ~]# df -h | grep tmpfs
tmpfs    3.0G   11M  3.0G   1% /dev/shm
tmpfs    3.0G  684K  3.0G   1% /run
tmpfs    3.0G     0  3.0G   0% /sys/fs/cgroup
tmpfs    3.0G  684K  3.0G   1% /var/lock
tmpfs    3.0G  684K  3.0G   1% /var/run
tmpfs    3.0G     0  3.0G   0% /media

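
If you want to see how much DRAM those tmpfs mounts actually consume in total, a small sketch that sums the Used column of df's tmpfs lines (the function name is mine; pipe plain `df` output into it):

```shell
# sum_tmpfs_used: read `df` output on stdin and print the total of the
# Used column (in 1K blocks) for all tmpfs filesystems.
# Usage:  df | sum_tmpfs_used
sum_tmpfs_used() {
    awk '$1 == "tmpfs" { sum += $3 } END { print sum + 0 }'
}
```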

If you can propose a BETTER way to avoid SSD writes please post.

If you just want to raise the same tired, tangential objection for a fourth time - that tmpfs uses memory, memory use can cause swapping, swap overflow causes OOM - don't. Running out of swap is always a possibility; it's not directly relevant to SSD. Go start another thread.

jpollard 10th July 2011 12:28 AM

Re: SSD drives under Linux
 
Steve, I've seen it deadlock. Granted not every time. Guess what happens to Fedora 15 if you don't have enough memory? It deadlocks.

I've also had X totally lock to the point that you cannot change virtual terminals. Especially when the configuration is totally hosed. The only way to recover is to reboot. And if the logs are on tmpfs, they are gone.

Modeler 13th July 2011 01:28 PM

Re: SSD drives under Linux
 
Thanks for this information stevea; very useful and interesting. I just have one question (for now):

Quote:

No file-system atop an LVM block manager or a RAID block manager can support TRIM at this time. Therefore you should not use LVM nor RAID (mdadm) atop an SSD for optimal performance unless you investigate the issue.
Fedora (or rather anaconda / disk druid) puts the root file system and swap on LVM volumes by default, and many users would not alter the layout. If we install a new copy of Fedora on an SSD, should we create all our file systems directly on /dev/sda (or equivalent) for TRIM to work?

Thanks!

stevea 28th July 2011 08:15 AM

Re: SSD drives under Linux
 
Quote:

Originally Posted by Modeler (Post 1494823)
Thanks for this information stevea; very useful and interesting. I just have one question (for now):

....

Fedora (or rather anaconda / disk druid) puts the root file system and swap on LVM volumes by default and many users would not alter the layout. If we install a new copy of Fedora on an SSD, we should create all our file systems on /dev/sda (or equivalent) for TRIM to work?

Thanks!

The comment about LVM not supporting TRIM does not apply to kernels after the 2.6.37 kernel (see post #22).

It's my opinion that LVM is not a good choice for most end-users. It makes sense on a server with multiple disks.
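
To check whether a given SSD advertises TRIM support at all, hdparm's identify output can be inspected. A sketch (the grep pattern matches hdparm's usual wording; the function name is mine):

```shell
# supports_trim: read `hdparm -I /dev/sdX` output on stdin and report
# whether the drive advertises TRIM.
# Usage:  hdparm -I /dev/sda | supports_trim
supports_trim() {
    if grep -q "TRIM supported"; then
        echo "TRIM: yes"
    else
        echo "TRIM: no"
    fi
}
```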

---------- Post added at 03:15 AM ---------- Previous post was at 02:42 AM ----------

Quote:

Originally Posted by jpollard (Post 1493782)
Steve, I've seen it deadlock. Granted not every time. Guess what happens to Fedora 15 if you don't have enough memory? It deadlocks.

What you are describing would be a kernel bug. If you really believe this malarkey (that filling /tmp causes a deadlock), then any user without quotas can deadlock a system.

Report it on LKML - it is not specific to SSD drives and does not belong in this thread.


Quote:

I've also had X totally lock to the point that you cannot change virtual terminals. Especially when the configuration is totally hosed. The only way to recover is to reboot. And if the logs are on tmpfs, they are gone.
Then you have either found an X bug (please bugzilla it) or you've run out of swap and the OOM reaper is killing processes. There is never enough swap to prevent an OOM condition (except see cgroups and quotas). Beyond the casual notice that - yes, some processes need to write to disk, and yes, the system needs DRAM+swap - this is a generic fact and unrelated to SSDs.

jpollard 14th September 2011 06:24 PM

Re: SSD drives under Linux
 
Yes. And that last is also what you called a kernel bug.

The system deadlocks during boot because there ISN'T enough memory.

Adding a swap area early would usually cure it, but systemd can't seem to do that before killing the system.

It doesn't matter whether you have an SSD or not - putting logs in tmpfs is the problem. Whether it is an X bug or not, a reboot loses the logs you need to identify whether it IS an X bug.

And also yes, users without quotas can kill the system, though it can be quite tricky.

SwampKracker 19th November 2011 07:45 AM

Re: SSD drives under Linux
 
Excellent guide, stevea. If I ever do make the jump to an SSD, this is the guide I will use.

fingerhut 23rd November 2011 08:01 AM

Re: SSD drives under Linux
 
If you want to reduce I/O you can also put the Chromium/Opera cache into RAM (add this line to fstab):

Code:

cache-chromium /home/user/.cache/chromium tmpfs defaults,noatime,mode=1777 0 0

