FedoraForum.org - Fedora Support Forums and Community
Results 1 to 3 of 3
  1. #1
    Join Date
    Dec 2012
    Location
    santa barbara, CA
    Posts
    883
    Linux (Fedora) Firefox 60.0

    iostat svctm and %busy numbers are wrong for NVME drives

    Hi guys,

    I just created a RAID0 out of two NVMe drives in a server and put a database on it.
    I've been running iostat -xd 5 and watching it for a while, and I see numbers like:
    Code:
    Device           r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
    nvme1n1          0.00    0.40      0.00      2.40     0.00     0.20   0.00  33.33    0.00    0.00   0.67     0.00     6.00 1671.00  66.84
    nvme0n1          0.00    0.60      0.00      6.40     0.00     0.00   0.00   0.00    0.00    0.00   0.92     0.00    10.67 1533.67  92.02
    md0              0.00    1.80      0.00      8.80     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     4.89   0.00   0.00
    But this cannot be right, as the database's response time is sub-millisecond. So how come the svctm shown here is 1.7 seconds on one stick and 1.5 seconds on the other?

    hmmmm
    "monsters John ... monsters from the ID..."
    "ma vule teva maar gul nol naya"

  2. #2
    Join Date
    Aug 2016
    Location
    Dallas
    Posts
    68
    Linux Chrome 67.0.3396.87

    Re: iostat svctm and %busy numbers are wrong for NVME drives

    Are you just trying to understand your disk latency? From what I understand, svctm is deprecated (from the man page):


    The average service time (in milliseconds) for I/O requests that were issued to the device. Warning! Do not trust this field any more. This field will be removed in a future sysstat version.

    I've always used await to determine I/O service time.
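
    If it helps, here's roughly how await is computed. This is a sketch assuming the usual derivation from /proc/diskstats (delta of milliseconds requests spent in flight, divided by delta of completed I/Os over the sample interval); the sample numbers are invented:

    ```python
    # Sketch of how r_await/w_await are derived: iostat samples /proc/diskstats
    # twice and divides the delta of "time spent doing I/O" (in ms) by the
    # delta of completed I/Os. All numbers below are made up for illustration.

    def await_ms(ios_before, ticks_before, ios_after, ticks_after):
        """Average milliseconds per completed I/O over the sample interval."""
        d_ios = ios_after - ios_before
        d_ticks = ticks_after - ticks_before  # ms requests spent in flight
        return d_ticks / d_ios if d_ios else 0.0

    # 500 reads completed that together spent 120 ms in flight:
    print(await_ms(10_000, 3_000, 10_500, 3_120))  # 0.24 ms per read
    ```

    Since it's a ratio of deltas over the interval, it stays meaningful for NVMe, unlike svctm, which assumes one request at a time.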

  3. #3
    Join Date
    Dec 2012
    Location
    santa barbara, CA
    Posts
    883
    Linux (Fedora) Firefox 60.0

    Re: iostat svctm and %busy numbers are wrong for NVME drives

    Quote Originally Posted by rexrf
    Are you just trying to understand your disk latency? From what I understand svctm is deprecated. (From the man page)


    The average service time (in milliseconds) for I/O requests that were issued to the device. Warning! Do not trust this field any more. This field will be removed in a future sysstat version.

    I've always used await to determine I/O service time.
    Cheers, and thanks.
    I have no latency whatsoever, but these numbers were saying I do, lol.
    I guess I will use r_await and w_await then (a combined await column isn't available here).
    But what worries me is the last column, %util
    Code:
    Device           r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
    nvme0n1          0.00    2.60      0.00     20.00     0.00     0.20   0.00   7.14    0.00    0.23   0.95     0.00     7.69 364.85  94.86
    95% lol. the thing ain't doing much.
    I guess it's time to email the sysstat dude again.
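
    For what it's worth, %util only measures the fraction of wall-clock time the device had at least one request outstanding, so a device that serves many requests in parallel (like NVMe) can show near 100% while mostly idle. A sketch with invented numbers:

    ```python
    # Sketch: %util is the fraction of wall-clock time with >= 1 request in
    # flight. For a device that handles many requests in parallel (NVMe),
    # "95% busy" says nothing about remaining capacity. Numbers are invented.

    def util_pct(busy_ms, interval_ms):
        """iostat's %util: busy time / elapsed time, as a percentage."""
        return 100.0 * busy_ms / interval_ms

    # Device had some request outstanding for 4750 ms of a 5000 ms interval:
    print(util_pct(4_750, 5_000))  # 95.0, whether queue depth was 1 or 32
    ```

    So on a drive with deep internal parallelism, 95% here doesn't mean it's near saturation.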
    "monsters John ... monsters from the ID..."
    "ma vule teva maar gul nol naya"

