Safe Haskell | Safe-Inferred
---|---
Language | Haskell2010
tick
uses the rdtsc instruction to measure the time performance of a computation. The unit of measurement, a Cycle, is one oscillation of the chip crystal as counted by the rdtsc instruction, which reads the TSC register. For reference, on a computer with a clock frequency of 2 GHz, one cycle is equivalent to 0.5 nanoseconds.
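That conversion can be sketched in a few lines; `cyclesToNanos` and the 2 GHz figure are illustrative only and not part of the library:

```haskell
import Data.Word (Word64)

-- Hypothetical helper, not part of Perf.Cycle: convert a cycle count
-- to nanoseconds, given an assumed clock frequency in GHz.
cyclesToNanos :: Double -> Word64 -> Double
cyclesToNanos ghz c = fromIntegral c / ghz

main :: IO ()
main = print (cyclesToNanos 2.0 20) -- 20 cycles at 2 GHz: 10.0 ns
```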
Synopsis
- type Cycle = Word64
- tick_ :: IO Cycle
- warmup :: Int -> IO Double
- tick :: NFData b => (a -> b) -> a -> IO (Cycle, b)
- tick' :: NFData b => (a -> b) -> a -> IO (Cycle, b)
- tickIO :: NFData a => IO a -> IO (Cycle, a)
- tickNoinline :: NFData b => (a -> b) -> a -> IO (Cycle, b)
- ticks :: NFData b => Int -> (a -> b) -> a -> IO ([Cycle], b)
- ticksIO :: NFData a => Int -> IO a -> IO ([Cycle], a)
- ns :: (a -> IO ([Cycle], b)) -> [a] -> IO ([[Cycle]], [b])
- tickWHNF :: (a -> b) -> a -> IO (Cycle, b)
- tickWHNF' :: (a -> b) -> a -> IO (Cycle, b)
- tickWHNFIO :: IO a -> IO (Cycle, a)
- ticksWHNF :: Int -> (a -> b) -> a -> IO ([Cycle], b)
- ticksWHNFIO :: Int -> IO a -> IO ([Cycle], a)
Documentation
>>> import Perf.Cycle
>>> import Control.Monad
>>> import Data.Foldable (foldl')
>>> let n = 1000
>>> let a = 1000
>>> let f x = foldl' (+) 0 [1 .. x]
tick_ measures the number of cycles it takes to read the TSC via rdtsc twice: the difference is then how long the second read took.
Below are indicative measurements using tick_:
>>> onetick <- tick_
>>> ticks' <- replicateM 10 tick_
>>> manyticks <- replicateM 1000000 tick_
>>> import qualified Control.Foldl as L
>>> let average = L.fold ((/) <$> L.sum <*> L.genericLength)
>>> let avticks = average (fromIntegral <$> manyticks)
one tick_: 78 cycles
next 10: [20,18,20,20,20,20,18,16,20,20]
average over 1m: 20.08 cycles
99.999% perc: 7,986
99.9% perc: 50.97
99th perc: 24.99
40th perc: 18.37
[min, 10th, 20th, .. 90th, max]:
12.00 16.60 17.39 17.88 18.37 18.86 19.46 20.11 20.75 23.04 5.447e5
The distribution of tick_ measurements is highly skewed, with the maximum being around 50k cycles, which is of the order of a GC. The important part of the distribution is around the 30th to 50th percentile, where you get a clean measure, usually free of GC activity and cache misses.
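A percentile of a sample of cycle counts is just an order statistic; a minimal sketch (`percentile` is an illustrative name, not a library function, and a fold-based pipeline would avoid the full sort):

```haskell
import Data.List (sort)

-- Hypothetical helper: the p-th percentile (0 <= p <= 1) of a sample,
-- taken by indexing into the sorted list.
percentile :: Double -> [Double] -> Double
percentile p xs = sort xs !! min (n - 1) (floor (p * fromIntegral n))
  where n = length xs

main :: IO ()
main = print (percentile 0.5 [1 .. 100]) -- 51.0
```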
warmup :: Int -> IO Double Source #
Warm up the register, to avoid a high first measurement. Without a warmup, one or more larger values can occur at the start of a measurement spree, and often are in the zone of an L2 miss.
>>> t <- tick_ -- first measure can be very high
>>> _ <- warmup 100
>>> t <- tick_ -- should be around 20 (3k for ghci)
tick :: NFData b => (a -> b) -> a -> IO (Cycle, b) Source #
`tick f a` strictly evaluates f and a, then deeply evaluates f a, returning a (Cycle, f a)
>>> _ <- warmup 100
>>> (cs, _) <- tick f a
Note that feeding the same computation through tick twice may kick off sharing (aka memoization aka let floating). Given the importance of sharing to GHC optimisations this is the intended behaviour. If you want to turn this off then see -fno-full-laziness (and maybe -fno-cse).
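Where sharing needs to be suppressed for a whole benchmark module, those flags can also be set with a module-level pragma; a minimal sketch (module layout is illustrative):

```haskell
{-# OPTIONS_GHC -fno-full-laziness -fno-cse #-}

-- Turn off let-floating and common-subexpression elimination for this
-- module, making it less likely that repeated measurements are shared.
module Main where

main :: IO ()
main = pure ()
```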
tick' :: NFData b => (a -> b) -> a -> IO (Cycle, b) Source #
tick where the arguments are left lazy, so the measurement may include the evaluation of thunks that make up f and/or a
tickIO :: NFData a => IO a -> IO (Cycle, a) Source #
measures and deeply evaluates an `IO a`
>>> (cs, _) <- tickIO (pure (f a))
ticks :: NFData b => Int -> (a -> b) -> a -> IO ([Cycle], b) Source #
n measurements of a tick
returns a list of Cycles and the last evaluated f a
GHC is very good at finding ways to share computation, and anything measuring a computation multiple times is a prime candidate for aggressive GHC treatment. Internally, ticks uses a noinline pragma and a noinline version of the function to help reduce the chance of memoization, but this is an inexact science in the hands of the author, at least, so interpret results with caution. The use of noinline interposes an extra function call, which can heavily skew measurements of very fast computations.
>>> let n = 1000
>>> (cs, fa) <- ticks n f a
Baseline speed can be highly sensitive to the nature of the function trimmings. Polymorphic functions can tend to be slightly slower, and functions with lambda expressions can experience dramatic slowdowns.
fMono :: Int -> Int
fMono x = foldl' (+) 0 [1 .. x]

fPoly :: (Enum b, Num b, Additive b) => b -> b
fPoly x = foldl' (+) 0 [1 .. x]

fLambda :: Int -> Int
fLambda = \x -> foldl' (+) 0 [1 .. x]
ticksIO :: NFData a => Int -> IO a -> IO ([Cycle], a) Source #
n measurements of a tickIO
returns an IO tuple: a list of Cycles and the last evaluated f a
>>> (cs, fa) <- ticksIO n (pure $ f a)
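The ns function listed in the synopsis maps a measurement over a list of inputs; its shape can be sketched as follows (`nsSketch` is an illustrative stand-in, not the library implementation):

```haskell
-- Illustrative stand-in for ns: run a measurement action over several
-- inputs, unzipping the per-input cycle lists from the results.
nsSketch :: Monad m => (a -> m ([c], b)) -> [a] -> m ([[c]], [b])
nsSketch measure as = unzip <$> traverse measure as

main :: IO ()
main = do
  -- a fake "measurement" that returns one pseudo-cycle per input
  (css, bs) <- nsSketch (\x -> pure ([x], x * 2)) [1, 2, 3 :: Int]
  print (css, bs) -- ([[1],[2],[3]],[2,4,6])
```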