NAME
----
perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
--------
[verse]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
'perf stat' report [-i file]

DESCRIPTION
-----------
This command runs a command and gathers performance counter statistics
from it.

OPTIONS
-------
<command>...::
Any command you can specify in a shell.

-e::
--event=::
Select the PMU event. Selection can be:

- a symbolic event name (use 'perf list' to list all events)

- a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
  hexadecimal event descriptor.

- a symbolic or raw PMU event followed by an optional colon
  and a list of event modifiers, e.g., cpu-cycles:p. See the
  linkperf:perf-list[1] man page for details on event modifiers.

- a symbolically formed event like 'pmu/param1=0x3,param2/' where
  param1 and param2 are defined as formats for the PMU in
  /sys/bus/event_source/devices/<pmu>/format/*

  'percore' is an event qualifier that sums up the event counts for both
  hardware threads in a core. For example:
  perf stat -A -a -e cpu/event,percore=1/,otherevent ...

- a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
  where M, N, K are numbers (in decimal, hex, octal format).
  Acceptable values for each of 'config', 'config1' and 'config2'
  parameters are defined by corresponding entries in
  /sys/bus/event_source/devices/<pmu>/format/*

Note that the last two syntaxes support prefix and glob matching in
the PMU name to simplify creation of events across multiple instances
of the same type of PMU in large systems (e.g. memory controller PMUs).
Multiple PMU instances are typical for uncore PMUs, so the prefix
'uncore_' is also ignored when performing this match. A combined
example follows this list.
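
For illustration, several of these forms can be combined on one command
line. This is only a sketch: the raw code r1a8 and the config value 0x3c
below are made-up encodings that stand in for CPU-specific ones (consult
'perf list' and the PMU's format directory for real values):

  # a symbolic event with a modifier, a raw event, and a pmu/.../ event
  perf stat -e cycles:u,r1a8,cpu/config=0x3c/ -- sleep 1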

-i::
--no-inherit::
child tasks do not inherit counters

-p::
--pid=<pid>::
stat events on existing process id (comma separated list)

-t::
--tid=<tid>::
stat events on existing thread id (comma separated list)

-b::
--bpf-prog::
stat events on existing bpf program id (comma separated list),
requiring root rights. bpftool-prog could be used to find the program
ids of all bpf programs in the system. For example:

  # bpftool prog | head -n 1
  17247: tracepoint  name sys_enter  tag 192d548b9d754067  gpl

  # perf stat -e cycles,instructions --bpf-prog 17247 --timeout 1000

   Performance counter stats for 'BPF program(s) 17247':

             85,967      cycles
             28,982      instructions              #    0.34  insn per cycle

        1.102235068 seconds time elapsed

--pfm-events events::
Select a PMU event using libpfm4 syntax (see http://perfmon2.sf.net)
including support for event filters. For example '--pfm-events
inst_retired:any_p:u:c=1:i'. More than one event can be passed to the
option using the comma separator. Hardware events and generic hardware
events cannot be mixed together. The latter must be used with the -e
option. The -e option and this one can be mixed and matched. Events
can be grouped using the {} notation.
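
As a sketch of the {} grouping, reusing the event from above and assuming
an Intel CPU on which libpfm4 also defines 'unhalted_core_cycles' (event
names are CPU-specific; libpfm4 ships its own event lists):

  perf stat --pfm-events '{inst_retired:any_p,unhalted_core_cycles}' -- sleep 1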

-a::
--all-cpus::
system-wide collection from all CPUs (default if no target is specified)

--no-scale::
Don't scale/normalize counter values

-d::
--detailed::
print more detailed statistics, can be specified up to 3 times

  -d:       detailed events, L1 and LLC data cache
  -d -d:    more detailed events, dTLB and iTLB events
  -d -d -d: very detailed events, adding prefetch events

-r::
--repeat=<n>::
repeat command and print average + stddev (max: 100). 0 means forever.

-B::
--big-num::
print large numbers with thousands' separators according to locale.
Enabled by default. Use "--no-big-num" to disable.
Default setting can be changed with "perf config stat.big-num=false".

-C::
--cpu=::
Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

-n::
--null::
null run - don't start any counters

-v::
--verbose::
be more verbose (show counter open errors, etc)

-x SEP::
--field-separator SEP::
print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.

--table:: Display time for each run (-r option), in a table format, e.g.:

  $ perf stat --null -r 5 --table perf bench sched pipe

   Performance counter stats for 'perf bench sched pipe' (5 runs):

             # Table of individual measurements:
             5.189 (-0.293) #
             5.189 (-0.294) #
             5.186 (-0.296) #
             5.663 (+0.181) ##
             6.186 (+0.703) ####

             # Final result:
             5.483 +- 0.198 seconds time elapsed ( +- 3.62% )

-G name::
--cgroup name::
monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. If the user wants to track multiple events for a specific cgroup, the user can
use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.

If wanting to monitor, say, 'cycles' for a cgroup and also for system wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.

--for-each-cgroup name::
Expand event list for each cgroup in "name" (allow multiple cgroups separated
by comma). It also supports regex patterns to match multiple cgroups. This has
the same effect as repeating the -e option and -G option for each event x name.
This option cannot be used with the -G/--cgroup option.
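
A minimal sketch, assuming two existing cgroups whose (hypothetical) names
are 'foo' and 'bar':

  # counts 'cycles' separately for cgroup foo and for cgroup bar
  perf stat -a -e cycles --for-each-cgroup foo,bar -- sleep 1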

-o file::
--output file::
Print the output into the designated file.

--append::
Append to the output file designated with the -o option. Ignored if -o is not specified.

--log-fd::
Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive
with it. --append may be used here. Examples:

  3>results perf stat --log-fd 3 -- $cmd
  3>>results perf stat --log-fd 3 --append -- $cmd

--control=fifo:ctl-fifo[,ack-fifo]::
--control=fd:ctl-fd[,ack-fd]::
ctl-fifo / ack-fifo are opened and used as ctl-fd / ack-fd as follows.
Listen on ctl-fd descriptor for command to control measurement ('enable': enable events,
'disable': disable events). Measurements can be started with events disabled using
--delay=-1 option. Optionally send control command completion ('ack\n') to ack-fd descriptor
to synchronize with the controlling process. Example of bash shell script to enable and
disable events during measurements:

  #!/bin/bash

  ctl_dir=/tmp/

  ctl_fifo=${ctl_dir}perf_ctl.fifo
  test -p ${ctl_fifo} && unlink ${ctl_fifo}
  mkfifo ${ctl_fifo}
  exec {ctl_fd}<>${ctl_fifo}

  ctl_ack_fifo=${ctl_dir}perf_ctl_ack.fifo
  test -p ${ctl_ack_fifo} && unlink ${ctl_ack_fifo}
  mkfifo ${ctl_ack_fifo}
  exec {ctl_fd_ack}<>${ctl_ack_fifo}

  perf stat -D -1 -e cpu-cycles -a -I 1000       \
            --control fd:${ctl_fd},${ctl_fd_ack} \
            -- sleep 30 &
  perf_pid=$!

  sleep 5  && echo 'enable' >&${ctl_fd} && read -u ${ctl_fd_ack} e1 && echo "enabled(${e1})"
  sleep 10 && echo 'disable' >&${ctl_fd} && read -u ${ctl_fd_ack} d1 && echo "disabled(${d1})"

  exec {ctl_fd_ack}>&-
  unlink ${ctl_ack_fifo}

  exec {ctl_fd}>&-
  unlink ${ctl_fifo}

  wait -n ${perf_pid}
  exit $?
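
The fifo: variant needs no descriptor plumbing, since perf opens the fifos
itself. A minimal sketch (the fifo paths are illustrative):

  mkfifo /tmp/perf_ctl.fifo /tmp/perf_ctl_ack.fifo
  perf stat -D -1 -e cpu-cycles -a -I 1000 \
            --control fifo:/tmp/perf_ctl.fifo,/tmp/perf_ctl_ack.fifo \
            -- sleep 30 &
  sleep 5 && echo 'enable' > /tmp/perf_ctl.fifo && read -r ack < /tmp/perf_ctl_ack.fifo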

--pre::
--post::
Pre and post measurement hooks, e.g.:

  perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

-I msecs::
--interval-print msecs::
Print count deltas every N milliseconds (minimum: 1ms).
The overhead percentage could be high in some cases, for instance with small, sub-100ms intervals. Use with caution.
example: 'perf stat -I 1000 -e cycles -a sleep 5'
If the metric exists, it is calculated by the counts generated in this interval and the metric is printed after #.

--interval-count times::
Print count deltas for fixed number of times.
This option should be used together with the "-I" option.
example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'

--interval-clear::
Clear the screen before next interval.

--timeout msecs::
Stop the 'perf stat' session and print count deltas after N milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
example: 'perf stat --timeout 2000 -e cycles -a'

--metric-only::
Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements. This
is a useful mode to detect imbalance between sockets. To enable this mode,
use --per-socket in addition to -a (system-wide). The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.

--per-die::
Aggregate counts per processor die for system-wide mode measurements. This
is a useful mode to detect imbalance between dies. To enable this mode,
use --per-die in addition to -a (system-wide). The output includes the
die number and the number of online processors on that die. This is
useful to gauge the amount of aggregation.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements. This
is a useful mode to detect imbalance between physical cores. To enable this mode,
use --per-core in addition to -a (system-wide). The output includes the
core number and the number of online logical processors on that physical processor.

--per-thread::
Aggregate counts per monitored thread, when monitoring threads (-t option)
or processes (-p option).

--per-node::
Aggregate counts per NUMA node for system-wide mode measurements. This
is a useful mode to detect imbalance between NUMA nodes. To enable this
mode, use --per-node in addition to -a (system-wide).
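
For example, to see per-core aggregated counts system-wide (the event and
workload are arbitrary illustrations; --per-socket, --per-die, --per-thread
and --per-node work the same way):

  perf stat -a --per-core -e cycles -- sleep 1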

-D msecs::
--delay msecs::
After starting the program, wait msecs before measuring (-1: start with events
disabled). This is useful to filter out the startup phase of the program,
which is often very different.
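
A sketch, with './my-workload' as a stand-in for a real program whose
startup phase should be skipped:

  # wait 400ms after program start before counting
  perf stat -D 400 -e cycles -- ./my-workload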

-T::
--transaction::
Print statistics of transactional execution if supported.

--metric-no-group::
By default, events to compute a metric are placed in weak groups. The
group tries to enforce scheduling all or none of the events. The
--metric-no-group option places events outside of groups and may
increase the chance of the event being scheduled - leading to more
accuracy. However, as events may not be scheduled together, accuracy
for metrics like instructions per cycle can be lower - as the events
they depend on may no longer be measured at the same time.

--metric-no-merge::
By default metric events in different weak groups can be shared if one
group contains all the events needed by another. In such cases one
group will be eliminated, reducing event multiplexing and making it so
that certain groups of metrics sum to 100%. A downside to sharing a
group is that the group may require multiplexing, so accuracy for a
small group that would not otherwise need multiplexing is lowered. This
option forbids the event merging logic from sharing events between
groups and may be used to increase accuracy in this case.
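
As an illustration (metric names vary by CPU; 'perf list' shows which
metrics and metricgroups exist on a given machine):

  # compute the IPC metric without placing its events in a weak group
  perf stat --metric-no-group -M IPC -- sleep 1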

--quiet::
Don't print output. This is useful with perf stat record below to only
write data to the perf.data file.

STAT RECORD
-----------
Stores stat data into perf data file.

-o file::
--output file::
Output file name.

STAT REPORT
-----------
Reads and reports stat data from perf data file.

-i file::
--input file::
Input file name.
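
A minimal round trip (the file name is illustrative; perf.data is the
default):

  perf stat record -o stat.data -- sleep 1
  perf stat report -i stat.data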

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements.

--per-die::
Aggregate counts per processor die for system-wide mode measurements.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements.

-M::
--metrics::
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

--topdown::
Print top down level 1 metrics if supported by the CPU. This allows one
to determine bottlenecks in the CPU pipeline for CPU bound workloads,
by breaking the cycles consumed down into frontend bound, backend bound,
bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad Speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.

This enables --metric-only, unless overridden with --no-metric-only.

The following restrictions only apply to older Intel CPUs and Atom;
on newer CPUs (IceLake and later) TopDown can be collected for any thread:

The top down metrics are collected per core instead of per
CPU thread. Per core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid=-1.

Topdown uses the full Performance Monitoring Unit, and needs
disabling of the NMI watchdog (as root):

  echo 0 > /proc/sys/kernel/nmi_watchdog

for best results. Otherwise the bottlenecks may be inconsistent
on workloads with changing phases.

To interpret the results it is usually necessary to know which
CPUs the workload runs on. If needed, the CPUs can be forced using
taskset.
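
For example, on a CPU that supports these metrics and with privileges
sufficient for -a:

  perf stat -a --topdown -I 1000 -- sleep 10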

--no-merge::
Do not merge results from the same PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows
the individual events and counts.

Multiple events are created from a single event specification when:

1. Prefix or glob matching is used for the PMU name.

2. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.
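
As a sketch, assuming an Intel server whose memory controllers expose
several uncore_imc PMU instances (the event alias is hardware-specific):

  # one row per uncore_imc instance instead of a merged total
  perf stat -a --no-merge -e uncore_imc/cas_count_read/ -- sleep 1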

--smi-cost::
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured by (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for performance
oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, which equals (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.
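
For example, as root on a CPU whose msr PMU exposes the aperf and smi
events:

  perf stat -a --smi-cost -- sleep 10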

--all-kernel::
Configure all used events to run in kernel space.

--all-user::
Configure all used events to run in user space.

--percore-show-thread::
The event modifier "percore" sums up the event counts for all hardware
threads in a core and shows the counts per core.

This option, used together with the "percore" event modifier, also sums
up the event counts for all hardware threads in a core, but shows the
summed counts per hardware thread. This is essentially a replacement for
the any bit and is convenient for post processing.
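
A sketch, assuming a core PMU that accepts the percore parameter:

  perf stat -e cpu/cycles,percore=1/ -a -A --percore-show-thread -- sleep 1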

--summary::
Print summary for interval mode (-I).

EXAMPLES
--------

$ perf stat -- make

   Performance counter stats for 'make':

        83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
           3,228,188      page-faults:u             #    0.039 M/sec
     229,570,665,834      cycles:u                  #    2.742 GHz
     313,163,853,778      instructions:u            #    1.36  insn per cycle
      69,704,684,856      branches:u                #  832.559 M/sec
       2,078,861,393      branch-misses:u           #    2.98% of all branches

        83.409183620 seconds time elapsed

        74.684747000 seconds user
         8.739217000 seconds sys

TIMINGS
-------
As displayed in the example above, perf stat can display three types of
timings. It always displays the time the counters were enabled/alive:

        83.409183620 seconds time elapsed

For workload sessions it also displays the time the workload spent in
user and system lands:

        74.684747000 seconds user
         8.739217000 seconds sys

Those times are the very same as displayed by the 'time' tool.

CSV FORMAT
----------

With -x, perf stat is able to output a not-quite-CSV format output.
Commas in the output are not put into "". To make it easy to parse,
it is recommended to use a different character like -x \;

The fields are in this order:

- optional usec time stamp in fractions of second (with -I xxx)
- optional CPU, core, or socket identifier
- optional number of logical CPUs aggregated
- counter value
- unit of the counter value or empty
- event name
- run time of counter
- percentage of measurement time the counter was running
- optional variance if multiple values are collected with -r
- optional metric value
- optional unit of metric

Additional metrics may be printed with all earlier fields being empty.
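
As an illustration of consuming this format (perf stat writes to stderr,
hence the redirection; the field positions assume no -I and no
aggregation columns):

  # print counter value and event name from ;-separated output
  perf stat -x\; -e cycles,instructions -- sleep 1 2>&1 | awk -F';' '{print $1, $3}'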

SEE ALSO
--------
linkperf:perf-top[1], linkperf:perf-list[1]