I have a development server with several disks. One of them is an SSD (OCZ Vertex, 30GB) used as the / partition (which is 26GB); the other, larger disks are mounted on various mount points.
Here is the mount output:
/dev/sde3 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sde1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md3 on /var/storage type ext3 (rw)
/dev/md2 on /var/archive type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
gvfs-fuse-daemon on /home/xxxxx/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=xxxxx)
/dev/sde3 is the one I'm interested in. The problem is that the filesystem reports 19.8GB used and 6.5GB free (which I know is incorrect), for a total of 26.3GB.
Running Disk Usage Analyser as root, with the options set to scan only /dev/sde3 and perform a full filesystem scan, I get this result:
Note the "total filesystem capacity used", which is 19.8GB, versus the actual total of the filesystem scan on /, which is only 14.2GB and much closer to reality.
The difference is huge, so it cannot be due to decimal gigabyte (1000³) vs. binary gigabyte (1024³) conversion.
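In case it helps to diagnose this, the same numbers can be cross-checked from the command line; a minimal sketch (run as root) using the standard df/du/lsof tools:

```shell
# Block-level usage as the filesystem itself reports it
# (this is the 19.8GB "used" figure):
df -h /

# Sum of all files actually reachable from /, staying on this one
# filesystem (-x), so /var/storage and /var/archive are not counted:
du -shx /

# Deleted-but-still-open files consume space that df counts but that
# du and Disk Usage Analyser cannot see (skip if lsof is missing):
command -v lsof >/dev/null && lsof +L1 || true
```

A df figure that stays well above the du total usually points at space hidden from the scan, e.g. open deleted files or files shadowed under a mount point.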
Can anyone help me out and tell me where my space has gone?