I found this thread while trying to work out which values to use (and which to avoid). Apparently, and contrary to popular belief in some forums, the timeout values do not map onto time in a simple linear fashion. From man hdparm:
The encoding of the timeout value is somewhat peculiar. A value of zero means "timeouts are disabled": the device will not automatically enter standby mode. Values from 1 to 240 specify multiples of 5 seconds, yielding timeouts from 5 seconds to 20 minutes. Values from 241 to 251 specify from 1 to 11 units of 30 minutes, yielding timeouts from 30 minutes to 5.5 hours. A value of 252 signifies a timeout of 21 minutes. A value of 253 sets a vendor-defined timeout period between 8 and 12 hours, and the value 254 is reserved. 255 is interpreted as 21 minutes plus 15 seconds. Note that some older drives may have very different interpretations of these values.
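To turn that encoding into something you can compute with, here is a small shell function that decodes a -S value into the resulting timeout. It simply transcribes the rules quoted above; the function name is my own, and it ignores older drives that interpret the values differently:

#!/bin/sh
# Decode an hdparm -S value (0-255) into the standby timeout it sets,
# following the encoding quoted from man hdparm above.
decode_spindown() {
    v=$1
    if [ "$v" -eq 0 ]; then
        echo "timeouts disabled"
    elif [ "$v" -le 240 ]; then
        echo "$(( v * 5 )) seconds"            # 1-240: multiples of 5 seconds
    elif [ "$v" -le 251 ]; then
        echo "$(( (v - 240) * 30 )) minutes"   # 241-251: multiples of 30 minutes
    elif [ "$v" -eq 252 ]; then
        echo "21 minutes"
    elif [ "$v" -eq 253 ]; then
        echo "vendor-defined, 8 to 12 hours"
    elif [ "$v" -eq 254 ]; then
        echo "reserved"
    else
        echo "21 minutes and 15 seconds"       # 255
    fi
}

decode_spindown 242    # -> 60 minutes
decode_spindown 120    # -> 600 seconds (10 minutes)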
In order to verify it, I created a script. It simply checks the hdparm -C status of the disk(s) specified at regular, configurable intervals (default 15 seconds) and adjusts for the uptime accumulated before the script was started. It is therefore not a management tool, but rather a measurement tool. The accuracy of its estimate depends on the interval size (15 seconds or less gives a pretty accurate result).
You can find it here: https://gitorious.org/check-disk-spindown/sh
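In case that link goes down, here is a stripped-down sketch of the same polling idea. It is not the actual script: it skips the uptime adjustment and simply measures from the moment you start it, and the disk and interval defaults are just placeholders. hdparm -C normally needs root.

#!/bin/sh
# Poll hdparm -C until the disk reports standby, then print roughly how
# long it stayed active after the script was started.
DISK=${1:-/dev/sdb}
INTERVAL=${2:-15}

start=$(cut -d. -f1 /proc/uptime)    # seconds since boot when polling began
while true; do
    state=$(hdparm -C "$DISK" | awk '/drive state/ {print $NF}')
    if [ "$state" = "standby" ]; then
        now=$(cut -d. -f1 /proc/uptime)
        echo "$DISK entered standby after roughly $(( now - start )) seconds (+/- $INTERVAL s)"
        break
    fi
    sleep "$INTERVAL"
done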
Sample output for my 4-disk RAID:
/dev/sdb estimated spindown time: 0 hours, 29 minutes and 46 seconds.
/dev/sdc estimated spindown time: 0 hours, 29 minutes and 46 seconds.
/dev/sdd estimated spindown time: 0 hours, 29 minutes and 46 seconds.
/dev/sde estimated spindown time: 0 hours, 29 minutes and 46 seconds.