NAME
----
perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
--------
[verse]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
'perf stat' report [-i file]

DESCRIPTION
-----------
This command runs a command and gathers performance counter statistics
from it.

OPTIONS
-------
<command>...::
Any command you can specify in a shell.

-e::
--event=::
Select the PMU event. Selection can be:

- a symbolic event name (use 'perf list' to list all events)

- a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
  hexadecimal event descriptor.

- a symbolic or raw PMU event followed by an optional colon
  and a list of event modifiers, e.g., cpu-cycles:p. See the
  linkperf:perf-list[1] man page for details on event modifiers.

- a symbolically formed event like 'pmu/param1=0x3,param2/' where
  param1 and param2 are defined as formats for the PMU in
  /sys/bus/event_source/devices/<pmu>/format/*

  'percore' is an event qualifier that sums up the event counts for both
  hardware threads in a core. For example:
  perf stat -A -a -e cpu/event,percore=1/,otherevent ...

- a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
  where M, N, K are numbers (in decimal, hex, octal format).
  Acceptable values for each of 'config', 'config1' and 'config2'
  parameters are defined by corresponding entries in
  /sys/bus/event_source/devices/<pmu>/format/*

Note that the last two syntaxes support prefix and glob matching in
the PMU name to simplify creation of events across multiple instances
of the same type of PMU in large systems (e.g. memory controller PMUs).
Multiple PMU instances are typical for uncore PMUs, so the prefix
'uncore_' is also ignored when performing this match.

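As an illustrative sketch, the syntaxes can be combined in one invocation
(the raw code 0x3c and the 'cpu' PMU format fields are hardware-specific
examples, not fixed values):

  # symbolic name, raw rNNN descriptor, and pmu/.../ parameter syntax
  perf stat -e cycles -e r003c -e cpu/event=0x3c,umask=0x0/ -a -- sleep 1
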
-i::
--no-inherit::
child tasks do not inherit counters

-p::
--pid=<pid>::
stat events on existing process id (comma separated list)

-t::
--tid=<tid>::
stat events on existing thread id (comma separated list)

-b::
--bpf-prog::
stat events on existing bpf program id (comma separated list),
requiring root rights. bpftool-prog could be used to find the program
id of all bpf programs in the system. For example:

  # bpftool prog | head -n 1
  17247: tracepoint  name sys_enter  tag 192d548b9d754067  gpl

  # perf stat -e cycles,instructions --bpf-prog 17247 --timeout 1000

   Performance counter stats for 'BPF program(s) 17247':

             85,967      cycles
             28,982      instructions              #    0.34  insn per cycle

       1.102235068 seconds time elapsed

--bpf-counters::
Use BPF programs to aggregate readings from perf_events. This
allows multiple perf-stat sessions that are counting the same metric (cycles,
instructions, etc.) to share hardware counters.
To use BPF programs on common events by default, use
"perf config stat.bpf-counter-events=<list_of_events>".

--bpf-attr-map::
With option "--bpf-counters", different perf-stat sessions share
information about shared BPF programs and maps via a pinned hashmap.
Use "--bpf-attr-map" to specify the path of this pinned hashmap.
The default path is /sys/fs/bpf/perf_attr_map.

--pfm-events events::
Select a PMU event using libpfm4 syntax (see http://perfmon2.sf.net)
including support for event filters. For example '--pfm-events
inst_retired:any_p:u:c=1:i'. More than one event can be passed to the
option using the comma separator. Hardware events and generic hardware
events cannot be mixed together. The latter must be used with the -e
option. The -e option and this one can be mixed and matched. Events
can be grouped using the {} notation.

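A sketch of the {} grouping (libpfm4 event names are CPU-model specific
assumptions; check 'perf list' with libpfm4 support for your machine):

  # Measure two libpfm4 events as a single group.
  perf stat --pfm-events '{inst_retired:any_p,unhalted_core_cycles}' -- ls
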
-a::
--all-cpus::
system-wide collection from all CPUs (default if no target is specified)

--no-scale::
Don't scale/normalize counter values

-d::
--detailed::
print more detailed statistics, can be specified up to 3 times

  -d:          detailed events, L1 and LLC data cache
  -d -d:       more detailed events, dTLB and iTLB events
  -d -d -d:    very detailed events, adding prefetch events

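For example, with an arbitrary workload:

  # Two levels of detail: adds L1/LLC data cache plus dTLB/iTLB events.
  perf stat -d -d -- ls /
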
-r::
--repeat=<n>::
repeat command and print average + stddev (max: 100). 0 means forever.

-B::
--big-num::
print large numbers with thousands' separators according to locale.
Enabled by default. Use "--no-big-num" to disable.
Default setting can be changed with "perf config stat.big-num=false".

-C::
--cpu=::
Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.

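For example, to count cycles only on CPUs 0-2 and CPU 5:

  perf stat -e cycles -a -C 0-2,5 -- sleep 1
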
-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

-n::
--null::
null run - Don't start any counters.

This can be useful to measure just elapsed wall-clock time - or to assess the
raw overhead of perf stat itself, without running any counters.

-v::
--verbose::
be more verbose (show counter open errors, etc)

-x SEP::
--field-separator SEP::
print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.

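For example:

  # Semicolon separator avoids clashing with the commas inside counter values.
  perf stat -x \; -e cycles,instructions -- true
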
--table:: Display time for each run (-r option), in a table format, e.g.:

  $ perf stat --null -r 5 --table perf bench sched pipe

   Performance counter stats for 'perf bench sched pipe' (5 runs):

           # Table of individual measurements:
           ...

           # Final result:
           5.483 +- 0.198 seconds time elapsed ( +- 3.62% )

-G name::
--cgroup name::
monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. If the user wants to track multiple events for a specific cgroup, the user can
use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.

If wanting to monitor, say, 'cycles' for a cgroup and also system-wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.

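A sketch of the positional pairing rule (foo and bar are placeholder
cgroup names):

  # cycles -> cgroup foo, instructions -> everywhere (empty slot),
  # branches -> cgroup bar.
  perf stat -e cycles,instructions,branches -G foo,,bar -a -- sleep 1
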
--for-each-cgroup name::
Expand event list for each cgroup in "name" (allow multiple cgroups separated
by comma). It also supports regex patterns to match multiple cgroups. This has the
same effect as repeating the -e option and -G option for each event x name. This
option cannot be used with the -G/--cgroup option.

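For example (system.slice and user.slice are merely common systemd cgroup
names, used here as placeholders):

  # Equivalent to: -e cycles -G system.slice -e cycles -G user.slice
  perf stat -a -e cycles --for-each-cgroup system.slice,user.slice -- sleep 1
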
-o file::
--output file::
Print the output into the designated file.

--append::
Append to the output file designated with the -o option. Ignored if -o is not specified.

--log-fd::
Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive
with it. --append may be used here. Examples:

  3>results perf stat --log-fd 3 -- $cmd
  3>>results perf stat --log-fd 3 --append -- $cmd

--control=fifo:ctl-fifo[,ack-fifo]::
--control=fd:ctl-fd[,ack-fd]::
ctl-fifo / ack-fifo are opened and used as ctl-fd / ack-fd as follows.
Listen on ctl-fd descriptor for command to control measurement ('enable': enable events,
'disable': disable events). Measurements can be started with events disabled using
the --delay=-1 option. Optionally send control command completion ('ack\n') to ack-fd
descriptor to synchronize with the controlling process. Example of a bash shell script
to enable and disable events during measurements:

 #!/bin/bash

 ctl_dir=/tmp/

 ctl_fifo=${ctl_dir}perf_ctl.fifo
 test -p ${ctl_fifo} && unlink ${ctl_fifo}
 mkfifo ${ctl_fifo}
 exec {ctl_fd}<>${ctl_fifo}

 ctl_ack_fifo=${ctl_dir}perf_ctl_ack.fifo
 test -p ${ctl_ack_fifo} && unlink ${ctl_ack_fifo}
 mkfifo ${ctl_ack_fifo}
 exec {ctl_fd_ack}<>${ctl_ack_fifo}

 perf stat -D -1 -e cpu-cycles -a -I 1000       \
           --control fd:${ctl_fd},${ctl_fd_ack} \
           -- sleep 30 &
 perf_pid=$!

 sleep 5  && echo 'enable' >&${ctl_fd} && read -u ${ctl_fd_ack} e1 && echo "enabled(${e1})"
 sleep 10 && echo 'disable' >&${ctl_fd} && read -u ${ctl_fd_ack} d1 && echo "disabled(${d1})"

 exec {ctl_fd_ack}>&-
 unlink ${ctl_ack_fifo}

 exec {ctl_fd}>&-
 unlink ${ctl_fifo}

 wait -n ${perf_pid}
 exit $?

--pre::
--post::
Pre and post measurement hooks, e.g.:

  perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

-I msecs::
--interval-print msecs::
Print count deltas every N milliseconds (minimum: 1ms).
The overhead percentage could be high in some cases, for instance with small, sub-100ms intervals. Use with caution.
example: 'perf stat -I 1000 -e cycles -a sleep 5'

If the metric exists, it is calculated by the counts generated in this interval and the metric is printed after #.

--interval-count times::
Print count deltas for a fixed number of times.
This option should be used together with the "-I" option.
example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'

--interval-clear::
Clear the screen before the next interval.

--timeout msecs::
Stop the 'perf stat' session and print count deltas after N milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
example: 'perf stat --timeout 2000 -e cycles -a'

--metric-only::
Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements. This
is a useful mode to detect imbalance between sockets. To enable this mode,
use --per-socket in addition to -a (system-wide). The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.

--per-die::
Aggregate counts per processor die for system-wide mode measurements. This
is a useful mode to detect imbalance between dies. To enable this mode,
use --per-die in addition to -a (system-wide). The output includes the
die number and the number of online processors on that die. This is
useful to gauge the amount of aggregation.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements. This
is a useful mode to detect imbalance between physical cores. To enable this mode,
use --per-core in addition to -a (system-wide). The output includes the
core number and the number of online logical processors on that physical processor.

--per-thread::
Aggregate counts per monitored thread, when monitoring threads (-t option)
or processes (-p option).

--per-node::
Aggregate counts per NUMA node for system-wide mode measurements. This
is a useful mode to detect imbalance between NUMA nodes. To enable this
mode, use --per-node in addition to -a (system-wide).

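For example, to spot cross-socket imbalance (the other --per-* options
work the same way):

  # One output row per socket instead of one aggregated total.
  perf stat --per-socket -a -e cycles -- sleep 5
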
-D msecs::
--delay msecs::
After starting the program, wait msecs before measuring (-1: start with events
disabled). This is useful to filter out the startup phase of the program,
which is often very different.

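For example (./my-app is a placeholder workload):

  # Ignore the first second of startup before counting.
  perf stat -D 1000 -e cycles,instructions -- ./my-app
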
-T::
--transaction::
Print statistics of transactional execution if supported.

--metric-no-group::
By default, events to compute a metric are placed in weak groups. The
group tries to enforce scheduling all or none of the events. The
--metric-no-group option places events outside of groups and may
increase the chance of the event being scheduled - leading to more
accuracy. However, as events may not be scheduled together, accuracy
for metrics like instructions per cycle can be lower - as the two events
may no longer be measured at the same time.

--metric-no-merge::
By default metric events in different weak groups can be shared if one
group contains all the events needed by another. In such cases one
group will be eliminated, reducing event multiplexing and making it so
that certain groups of metrics sum to 100%. A downside to sharing a
group is that the group may require multiplexing and so accuracy for a
small group that need not have multiplexing is lowered. This option
forbids the event merging logic from sharing events between groups and
may be used to increase accuracy in this case.

--quiet::
Don't print output. This is useful with perf stat record below to only
write data to the perf.data file.

STAT RECORD
-----------
Stores stat data into perf data file.

-o file::
--output file::
Output file name.

STAT REPORT
-----------
Reads and reports stat data from perf data file.

-i file::
--input file::
Input file name.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements.

--per-die::
Aggregate counts per processor die for system-wide mode measurements.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements.

-M::
--metrics::
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.

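For example (metric names vary by CPU; 'IPC' is assumed to be available,
see perf list output for what your machine supports):

  # Measure the events behind the IPC metric and print the computed value.
  perf stat -M IPC -- ls /
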
-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

--topdown::
Print complete top-down metrics supported by the CPU. This makes it
possible to determine bottlenecks in the CPU pipeline for CPU-bound
workloads, by breaking down the cycles consumed into frontend bound,
backend bound, bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad Speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.

This enables --metric-only, unless overridden with --no-metric-only.

The following restrictions only apply to older Intel CPUs and Atom;
on newer CPUs (IceLake and later) TopDown can be collected for any thread:

The top-down metrics are collected per core instead of per
CPU thread. Per-core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid=-1.

Topdown uses the full Performance Monitoring Unit, and needs
disabling of the NMI watchdog (as root):

  echo 0 > /proc/sys/kernel/nmi_watchdog

for best results. Otherwise the bottlenecks may be inconsistent
on workloads with changing phases.

To interpret the results it is usually necessary to know which
CPUs the workload runs on. If needed, the CPUs can be forced using
taskset.

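For example, a sketch in interval mode:

  # Sample top-down metrics once a second, system-wide
  # (see the restrictions above for older CPUs).
  perf stat --topdown -a -I 1000 -- sleep 10
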
--td-level::
Print the top-down statistics that are equal to or lower than the input level.
This allows users to print only the top-down metric levels of interest
instead of the complete set of top-down metrics.

The availability of the top-down metrics level depends on the hardware. For
example, Ice Lake only supports L1 top-down metrics. Sapphire Rapids
supports both L1 and L2 top-down metrics.

Default: 0, meaning the maximum level that the current hardware supports.
perf stat errors out if the input is higher than the supported max level.

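For example (assuming the hardware supports level-1 metrics):

  # Restrict output to level-1 top-down metrics only.
  perf stat --topdown --td-level 1 -a -- sleep 5
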
--no-merge::
Do not merge results from same PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows
the individual events and counts.

Multiple events are created from a single event specification when:

1. Prefix or glob matching is used for the PMU name.

2. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.

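A sketch (the uncore_imc PMU and its cas_count_read event are
Intel-server-specific assumptions):

  # Print one row per uncore_imc instance instead of a merged total.
  perf stat --no-merge -a -e uncore_imc/cas_count_read/ -- sleep 1
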
--smi-cost::
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured as (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for performance
oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, equal to (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.

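For example (requires the msr PMU events above and root):

  # Report the SMI cycles% metric system-wide over ten seconds.
  perf stat --smi-cost -a -- sleep 10
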
--all-kernel::
Configure all used events to run in kernel space.

--all-user::
Configure all used events to run in user space.

--percore-show-thread::
The event modifier "percore" sums up the event counts for all hardware
threads in a core and shows the counts per core.

With this option, events with the "percore" modifier still sum up the event
counts for all hardware threads in a core, but the sum counts are shown per
hardware thread. This is essentially a replacement for the any bit and
convenient for post processing.

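A sketch (the raw event code 0x3c is purely illustrative; substitute an
event from your CPU's list):

  # Sum counts across a core's hardware threads, but print the
  # per-core sum on each thread's row.
  perf stat -A -a --percore-show-thread -e cpu/event=0x3c,percore=1/ -- sleep 1
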
--summary::
Print summary for interval mode (-I).

--no-csv-summary::
Don't print 'summary' in the first column for CSV summary output.
This option must be used with -x and --summary.

This option can be enabled in perf config by setting the variable
'stat.no-csv-summary'.

  $ perf config stat.no-csv-summary=true

EXAMPLES
--------

$ perf stat -- make

   Performance counter stats for 'make':

        83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
           3,228,188      page-faults:u             #    0.039 M/sec
     229,570,665,834      cycles:u                  #    2.742 GHz
     313,163,853,778      instructions:u            #    1.36  insn per cycle
      69,704,684,856      branches:u                #  832.559 M/sec
       2,078,861,393      branch-misses:u           #    2.98% of all branches

       83.409183620 seconds time elapsed

       74.684747000 seconds user
        8.739217000 seconds sys

TIMINGS
-------
As displayed in the example above, we can display 3 types of timings.
We always display the time the counters were enabled/alive:

       83.409183620 seconds time elapsed

For workload sessions we also display the time the workloads spent in
user/system lands:

       74.684747000 seconds user
        8.739217000 seconds sys

Those times are the very same as displayed by the 'time' tool.

CSV FORMAT
----------

With -x, perf stat emits a not-quite-CSV output format: commas in the
output are not quoted with "". To make the output easy to parse, it is
recommended to use a different separator character, like -x \;

The fields are in this order:

  - optional usec time stamp in fractions of second (with -I xxx)
  - optional CPU, core, or socket identifier
  - optional number of logical CPUs aggregated
  - counter value
  - unit of the counter value or empty
  - event name
  - run time of counter
  - percentage of measurement time the counter was running
  - optional variance if multiple values are collected with -r
  - optional metric value
  - optional unit of metric

Additional metrics may be printed with all earlier fields being empty.

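A parsing sketch, assuming no -I, -r, or aggregation options so the
optional leading fields are absent:

  # Field 1 is the counter value and field 3 the event name;
  # perf stat writes to stderr, hence the 2>&1 redirection.
  perf stat -x \; -e cycles -- true 2>&1 | awk -F';' '{print $3, $1}'
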
include::intel-hybrid.txt[]

SEE ALSO
--------
linkperf:perf-top[1], linkperf:perf-list[1]