I have three (sometimes four) regularly used systems at home - all Linux. Then there are a handful of other devices running Linux - NAS, Chumby, Zipit, routers, and so on.
So I use a 24x7 server to provide common file access (beyond the NAS) and other services. I've been using NFS, then NFSv4, for this SOHO file service for some time. However, NFS is a heavy, sluggish protocol with poor performance, though it does offer a level of security I don't need behind the firewall. Samba actually has slightly better performance - but still not great. I've played with alternative networked file systems for a while, and Ceph and Gluster sorted themselves to the top of the heap. GFS2 has great performance, but it is a monster to configure and maintain.
Recently I've been toying with Gluster again. The config tooling on F14 was very primitive, but it has improved greatly as of the 3.2 release. It seems that RHEL is moving toward Gluster for the enterprise and may eventually drop GFS2. Since Red Hat bought up the Gluster project, the rate of development has been impressive. More important, I tested Gluster back around F13 and the performance was mediocre - better than NFS but not great. Yesterday I updated (reinstalled) the server to F17 and configured Gluster in place of my normal NFS share. On some PRELIMINARY tests, Gluster performance is stunning!
So I have the newly updated F17 server and an F16 client workstation on a GigE LAN. nuttcp measures TCP traffic between the two at 940 Mbit/s, near wire rate. I then ran some crude fs speed tests on the shared directory. Working natively on the server, I was reading the local ext4 file system at just under 95 MB/s - in itself a stunning testament to the goodness of ext4. The disk involved, a WD Black 1TB, measures:
# hdparm -tT /dev/sda5
Timing cached reads: 9474 MB in 2.00 seconds = 4739.09 MB/sec
Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec
95 MB/s of any sort of filesystem throughput, on a drive that does large contiguous reads at 122.5 MB/s, says the filesystem is working really well.
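For the wire-rate figure above, nuttcp -S runs on one end and plain nuttcp on the other. The filesystem numbers came from crude sequential reads; a minimal sketch of that kind of test (the file name is just a placeholder) looks like:

nuttcp hypoxylon                                          # raw TCP throughput to the server, ~940 Mbit/s here
sync; echo 3 > /proc/sys/vm/drop_caches                   # drop the page cache so the read actually hits the disk
dd if=/home/common/some-big-file.iso of=/dev/null bs=1M   # dd reports MB/s when it finishes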
So when I did the same test on the client across the GigE link, I expected a big performance drop because the protocol overhead usually eats so much. Instead I got 86.4 MB/s, only about 10% slower than the local file system. That's terrific.
To be fair, this test is mostly reads, with a mix of file sizes dominated by larger (>3 MB) files. I still need to run bonnie++, but unless Gluster fails totally at small files, it's a huge win.
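When I get to bonnie++, the invocation will be something along these lines (the size, file count, and user are placeholders, not tuned values):

bonnie++ -d /home/common -s 8g -n 128 -u nobody   # 8 GB sequential I/O pass plus a 128x1024 small-file pass, run as nobody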
Gluster is intended as a clustered filesystem, with multiple servers and, potentially, replication and striping. Since I have a single server, I've only created the trivial "one peer" configuration.
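For contrast, and purely as an untested sketch (the server names and brick paths here are hypothetical), a replicated two-server volume would be created along these lines:

gluster volume create sohocommon replica 2 transport tcp server1:/export/brick server2:/export/brick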
I'll add some notes later - but so far this looks like an NFS killer for my needs.
The Gluster server seems to need the rpc service, though I suspect that's because there is some NFS support functionality that could be configured out.
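Assuming systemd as on F17, the rpc dependency is easy to check and, if necessary, enable:

systemctl status rpcbind.service
systemctl enable rpcbind.service   # only needed if it isn't already enabled
systemctl start rpcbind.service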
On the server ...
yum -y install glusterfs glusterfs-server
systemctl enable glusterd.service     # glusterd is the management daemon
systemctl start glusterd.service
systemctl enable glusterfsd.service   # glusterfsd serves the bricks
systemctl start glusterfsd.service
gluster volume create sohocommon transport tcp hypoxylon:/home/common
gluster volume start sohocommon
'sohocommon' is just the name of the volume; the 'brick' of shared storage behind it is hypoxylon:/home/common.
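A quick sanity check after creating and starting the volume:

gluster volume info sohocommon   # shows the brick list, transport type, and whether the volume is Started
gluster peer status              # trivial with a single server, but confirms the peer list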
The server needs firewall holes punched for port 111/tcp (rpcbind) and 24007-24011/tcp (gluster).
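A minimal sketch of those holes with plain iptables (adjust for whatever firewall tooling you actually use):

iptables -I INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind
iptables -I INPUT -p tcp --dport 24007:24011 -j ACCEPT   # gluster daemon and bricks
iptables-save > /etc/sysconfig/iptables                  # persist across reboots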
On the client it's just
yum -y install glusterfs glusterfs-fuse
echo "hypoxylon:sohocommon /home/common glusterfs _netdev,rw,noatime 0 0" >> /etc/fstab