:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device controller
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. Miscellaneous cgroup Interface Files
       5-9-2. Migration and Ownership
     5-10. Others
       5-10-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
100 "cgroup" stands for "control group" and is never capitalized. The
101 singular form is used to designate the whole feature and also as a
102 qualifier as in "cgroup controllers". When explicitly referring to
103 multiple individual control groups, the plural form "cgroups" is used.

What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy,
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.

Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled, too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to settle
on the hierarchies and controller associations before putting the
controllers to use after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
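
For example, either of the following boot parameters (controller list
illustrative) keeps the named controllers, or all of them, out of
v1's reach::

  cgroup_no_v1=memory,cpu
  cgroup_no_v1=all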

cgroup v2 currently supports the following mount options.

  nsdelegate
    Consider cgroup namespaces as delegation boundaries. This
    option is system wide and can only be set on mount or modified
    through remount from the init namespace. The mount option is
    ignored on non-init namespace mounts. Please refer to the
    Delegation section for details.

  favordynmods
    Reduce the latencies of dynamic cgroup modifications such as
    task migrations and controller on/offs at the cost of making
    hot path operations such as forks and exits more expensive.
    The static usage pattern of creating a cgroup, enabling
    controllers, and then seeding it with CLONE_INTO_CGROUP is
    not affected by this option.

  memory_localevents
    Only populate memory.events with data for the current cgroup,
    and not any subtrees. This is legacy behaviour; the default
    behaviour without this option is to include subtree counts.
    This option is system wide and can only be set on mount or
    modified through remount from the init namespace. The mount
    option is ignored on non-init namespace mounts.

  memory_recursiveprot
    Recursively apply memory.min and memory.low protection to
    entire subtrees, without requiring explicit downward
    propagation into leaf cgroups. This allows protecting entire
    subtrees from one another, while retaining free competition
    within those subtrees. This should have been the default
    behavior but is a mount option to avoid regressing setups
    relying on the original semantics (e.g. specifying bogusly
    high 'bypass' protection values at higher tree levels).

Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
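
For example, a process can be moved into a child cgroup with a plain
write (PID and path illustrative)::

  # echo 1234 > $MOUNT_POINT/test-cgroup/cgroup.procs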

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)

Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.
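
A minimal sketch of growing a threaded subtree under the root (names
illustrative)::

  # mkdir grp
  # echo threaded > grp/cgroup.type
  # mkdir grp/sub
  # echo threaded > grp/sub/cgroup.type

After the first write, the root serves as the threaded domain for
"grp"; "grp/sub" starts out as "domain (invalid)" until it too is
made threaded.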

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or when threaded controllers are enabled in
the "cgroup.subtree_control" file while there are processes in the
cgroup. A threaded domain reverts to a normal domain when the
conditions clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup::

  - cpu
  - cpuset
  - perf_event
  - pids

[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's is 0. After the
one process in C exits, B and C's "populated" fields would flip to "0"
and file modified events will be generated on the "cgroup.events"
files of both cgroups.
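
For example, a clean-up agent could block on the events file and act
once the subtree empties out (sketch only; "inotifywait" comes from
the inotify-tools package)::

  # inotifywait -e modify test-cgroup/cgroup.events
  # grep populated test-cgroup/cgroup.events
  populated 0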

Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.

Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.
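
For example, enabling "io" two levels down requires enabling it at
each level from the top (paths illustrative)::

  # echo +io > A/cgroup.subtree_control
  # echo +io > A/B/cgroup.subtree_control

Writing "+io" to "A/B/cgroup.subtree_control" before the first write
would fail because "io" hasn't been distributed to B yet.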

No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
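
A minimal sketch of the resulting "leaf node" pattern for a populated
cgroup A (names illustrative)::

  # mkdir A/leaf
  # for pid in $(cat A/cgroup.procs); do echo $pid > A/leaf/cgroup.procs; done
  # echo +memory > A/cgroup.subtree_control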

Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.
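
A minimal sketch of the first method, handing the sub-hierarchy
"delegated" to user u0 (user and path illustrative)::

  # chown u0 delegated
  # chown u0 delegated/cgroup.procs
  # chown u0 delegated/cgroup.threads
  # chown u0 delegated/cgroup.subtree_control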

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.

Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.

Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.

Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and '_'s
but never begins with an '_' so it can be used as the prefix character
for collision avoidance. Also, interface file names won't start or
end with terms which are often used in categorizing workloads such as
job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.

Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.

Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
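
For example, if siblings A and B are both active with "cpu.weight" of
200 and 100 respectively, A receives 200/300 and B 100/300 of the
parent's CPU cycles (values illustrative)::

  # echo 200 > A/cpu.weight
  # echo 100 > B/cpu.weight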

.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.
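
For example, the following caps reads from device 8:16 at 2M bytes
per second (device number and value illustrative)::

  # echo "8:16 rbps=2097152" > io.max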

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.

Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.

Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

    VAL0\n
    VAL1\n
    ...

  Space separated values
  (when read-only or multiple values can be written at once)

    VAL0 VAL1 ...\n

  Flat keyed

    KEY0 VAL0\n
    KEY1 VAL1\n
    ...

  Nested keyed

    KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
    KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
    ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.

Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or the following::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, file modified event should be
  generated on the file.

Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
    A read-write single value file which exists on non-root
    cgroups.

    When read, it indicates the current type of the cgroup, which
    can be one of the following values.

    - "domain" : A normal valid domain cgroup.

    - "domain threaded" : A threaded domain cgroup which is
      serving as the root of a threaded subtree.

    - "domain invalid" : A cgroup which is in an invalid state.
      It can't be populated or have controllers enabled. It may
      be allowed to become a threaded cgroup.

    - "threaded" : A threaded cgroup which is a member of a
      threaded subtree.

    A cgroup can be turned into a threaded cgroup by writing
    "threaded" to this file.

  cgroup.procs
    A read-write new-line separated values file which exists on
    all cgroups.

    When read, it lists the PIDs of all processes which belong to
    the cgroup one-per-line. The PIDs are not ordered and the
    same PID may show up more than once if the process got moved
    to another cgroup and then back or the PID got recycled while
    reading.

    A PID can be written to migrate the process associated with
    the PID to the cgroup. The writer should match all of the
    following conditions.

    - It must have write access to the "cgroup.procs" file.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

    In a threaded cgroup, reading this file fails with EOPNOTSUPP
    as all the processes belong to the thread root. Writing is
    supported and moves every thread of the process to the cgroup.

  cgroup.threads
    A read-write new-line separated values file which exists on
    all cgroups.

    When read, it lists the TIDs of all threads which belong to
    the cgroup one-per-line. The TIDs are not ordered and the
    same TID may show up more than once if the thread got moved to
    another cgroup and then back or the TID got recycled while
    reading.

    A TID can be written to migrate the thread associated with the
    TID to the cgroup. The writer should match all of the
    following conditions.

    - It must have write access to the "cgroup.threads" file.

    - The cgroup that the thread is currently in must be in the
      same resource domain as the destination cgroup.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

  cgroup.controllers
    A read-only space separated values file which exists on all
    cgroups.

    It shows space separated list of all controllers available to
    the cgroup. The controllers are not ordered.

  cgroup.subtree_control
    A read-write space separated values file which exists on all
    cgroups. Starts out empty.

    When read, it shows space separated list of the controllers
    which are enabled to control resource distribution from the
    cgroup to its children.

    Space separated list of controllers prefixed with '+' or '-'
    can be written to enable or disable controllers. A controller
    name prefixed with '+' enables the controller and '-'
    disables. If a controller appears more than once on the list,
    the last one is effective. When multiple enable and disable
    operations are specified, either all succeed or all fail.

  cgroup.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    populated
      1 if the cgroup or its descendants contains any live
      processes; otherwise, 0.

    frozen
      1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
    A read-write single value file. The default is "max".

    Maximum allowed number of descendant cgroups.
    If the actual number of descendants is equal or larger,
    an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
    A read-write single value file. The default is "max".

    Maximum allowed descent depth below the current cgroup.
    If the actual descent depth is equal or larger,
    an attempt to create a new child cgroup will fail.
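
    For example, the following (value illustrative) would allow at
    most two further levels of cgroups below the current one::

      # echo 2 > cgroup.max.depth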

  cgroup.stat
    A read-only flat-keyed file with the following entries:

    nr_descendants
      Total number of visible descendant cgroups.

    nr_dying_descendants
      Total number of dying descendant cgroups. A cgroup becomes
      dying after being deleted by a user. The cgroup will remain
      in dying state for some undefined time (which can depend
      on system load) before being completely destroyed.

      A process can't enter a dying cgroup under any circumstances,
      and a dying cgroup can't revive.

      A dying cgroup can consume system resources not exceeding
      limits, which were active at the moment of cgroup deletion.

  cgroup.freeze
    A read-write single value file which exists on non-root
    cgroups. Allowed values are "0" and "1". The default is "0".

    Writing "1" to the file causes freezing of the cgroup and all
    descendant cgroups. This means that all belonging processes will
    be stopped and will not run until the cgroup is explicitly
    unfrozen. Freezing of the cgroup may take some time; when this
    action is completed, the "frozen" value in the cgroup.events
    control file will be updated to "1" and the corresponding
    notification will be issued.

    A cgroup can be frozen either by its own settings, or by settings
    of any ancestor cgroups. If any of the ancestor cgroups is
    frozen, the cgroup will remain frozen.

    Processes in the frozen cgroup can be killed by a fatal signal.
    They also can enter and leave a frozen cgroup: either by an
    explicit move by a user, or if freezing of the cgroup races with
    fork(). If a process is moved to a frozen cgroup, it stops. If a
    process is moved out of a frozen cgroup, it becomes running.

    Frozen status of a cgroup doesn't affect any cgroup tree
    operations: it's possible to delete a frozen (and empty) cgroup,
    as well as create new sub-cgroups.
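
    A minimal sketch of freezing a cgroup and waiting for the state
    change to complete ("inotifywait" comes from the inotify-tools
    package; path illustrative)::

      # echo 1 > test-cgroup/cgroup.freeze
      # inotifywait -e modify test-cgroup/cgroup.events
      # grep frozen test-cgroup/cgroup.events
      frozen 1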

  cgroup.kill
    A write-only single value file which exists on non-root
    cgroups. The only allowed value is "1".

    Writing "1" to the file causes the cgroup and all descendant
    cgroups to be killed. This means that all processes located in
    the affected cgroup tree will be killed via SIGKILL.

    Killing a cgroup tree will deal with concurrent forks
    appropriately and is protected against migrations.

    In a threaded cgroup, writing this file fails with EOPNOTSUPP as
    killing cgroups is a process directed operation, i.e. it affects
    the whole thread-group.

  cgroup.pressure
    A read-write single value file. Allowed values are "0" and "1".
    The default is "1".

    Writing "0" to the file will disable the cgroup PSI accounting.
    Writing "1" to the file will re-enable the cgroup PSI accounting.

    This control attribute is not hierarchical: disabling or enabling
    PSI accounting in a cgroup does not affect PSI accounting in its
    descendants, and the setting does not need to be enabled anywhere
    up the hierarchy for it to take effect.

    The reason this control attribute exists is that PSI accounts
    stalls for each cgroup separately and aggregates it at each level
    of the hierarchy. This may cause non-negligible overhead for some
    workloads deep in the hierarchy, in which case this control
    attribute can be used to disable PSI accounting in the non-leaf
    cgroups.

  irq.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for IRQ/SOFTIRQ. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

CPU
---

The "cpu" controller regulates the distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal base and it does not account for the frequency at which tasks
are executed. The (optional) utilization clamping support allows
hinting the schedutil cpufreq governor about the minimum desired
frequency which should always be provided by a CPU, as well as the
maximum desired frequency, which should not be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup. Be aware that system management software may already
have placed RT processes into nonroot cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.

CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
    A read-only flat-keyed file.
    This file exists whether the controller is enabled or not.

    It always reports the following three stats:

    - usage_usec
    - user_usec
    - system_usec

    and the following five when the controller is enabled:

    - nr_periods
    - nr_throttled
    - throttled_usec
    - nr_bursts
    - burst_usec

  cpu.weight
    A read-write single value file which exists on non-root
    cgroups. The default is "100".

    The weight in the range [1, 10000].

  cpu.weight.nice
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    The nice value is in the range [-20, 19].

    This interface file is an alternative interface for
    "cpu.weight" and allows reading and setting weight using the
    same values used by nice(2). Because the range is smaller and
    granularity is coarser for the nice values, the read value is
    the closest approximation of the current weight.

  cpu.max
    A read-write two value file which exists on non-root cgroups.
    The default is "max 100000".

    The maximum bandwidth limit. It's in the following format::

      $MAX $PERIOD

    which indicates that the group may consume up to $MAX in each
    $PERIOD duration. "max" for $MAX indicates no limit. If only
    one number is written, $MAX is updated.
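
    For example, the following (values illustrative) limits the
    cgroup to half a CPU's worth of runtime per 100ms period::

      # echo "50000 100000" > cpu.max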

  cpu.max.burst
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    The burst in the range [0, $MAX].

  cpu.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for CPU. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
    A read-write single value file which exists on non-root
    cgroups. The default is "0", i.e. no utilization boosting.

    The requested minimum utilization (protection) as a percentage
    rational number, e.g. 12.34 for 12.34%.

    This interface allows reading and setting minimum utilization
    clamp values similar to the sched_setattr(2). This minimum
    utilization value is used to clamp the task specific minimum
    utilization clamp.

    The requested minimum utilization (protection) is always capped
    by the current value for the maximum utilization (limit), i.e.
    "cpu.uclamp.max".

  cpu.uclamp.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max", i.e. no utilization capping.

    The requested maximum utilization (limit) as a percentage
    rational number, e.g. 98.76 for 98.76%.

    This interface allows reading and setting maximum utilization
    clamp values similar to the sched_setattr(2). This maximum
    utilization value is used to clamp the task specific maximum
    utilization clamp.

Memory
------

The "memory" controller regulates the distribution of memory. Memory
is stateful and implements both limit and protection models. Due to
the intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.

Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory currently being used by the cgroup
    and its descendants.

  memory.min
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    Hard memory protection. If the memory usage of a cgroup
    is within its effective min boundary, the cgroup's memory
    won't be reclaimed under any conditions. If there is no
    unprotected reclaimable memory available, OOM killer
    is invoked. Above the effective min boundary (or
    effective low boundary if it is higher), pages are reclaimed
    proportionally to the overage, reducing reclaim pressure for
    smaller overages.

    Effective min boundary is limited by memory.min values of
    all ancestor cgroups. If there is memory.min overcommitment
    (child cgroup or cgroups are requiring more protected memory
    than parent will allow), then each child cgroup will get
    the part of parent's protection proportional to its
    actual memory usage below memory.min.

    Putting more memory than generally available under this
    protection is discouraged and may lead to constant OOMs.

    If a memory cgroup is not populated with processes,
    its memory.min is ignored.

  memory.low
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    Best-effort memory protection. If the memory usage of a
    cgroup is within its effective low boundary, the cgroup's
    memory won't be reclaimed unless there is no reclaimable
    memory available in unprotected cgroups.
    Above the effective low boundary (or
    effective min boundary if it is higher), pages are reclaimed
    proportionally to the overage, reducing reclaim pressure for
    smaller overages.

    Effective low boundary is limited by memory.low values of
    all ancestor cgroups. If there is memory.low overcommitment
    (child cgroup or cgroups are requiring more protected memory
    than parent will allow), then each child cgroup will get
    the part of parent's protection proportional to its
    actual memory usage below memory.low.

    Putting more memory than generally available under this
    protection is discouraged.

  memory.high
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Memory usage throttle limit. If a cgroup's usage goes
    over the high boundary, the processes of the cgroup are
    throttled and put under heavy reclaim pressure.

    Going over the high limit never invokes the OOM killer and
    under extreme conditions the limit may be breached. The high
    limit should be used in scenarios where an external process
    monitors the limited cgroup to alleviate heavy reclaim
    pressure.

  memory.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Memory usage hard limit. This is the main mechanism to limit
    memory usage of a cgroup. If a cgroup's memory usage reaches
    this limit and can't be reduced, the OOM killer is invoked in
    the cgroup. Under certain circumstances, the usage may go
    over the limit temporarily.

    In the default configuration, regular 0-order allocations
    always succeed unless the OOM killer chooses the current task
    as a victim.

    Some kinds of allocations don't invoke the OOM killer.
    The caller could retry them differently, return -ENOMEM to
    userspace, or silently ignore the failure in cases like disk
    readahead.

  memory.reclaim
    A write-only nested-keyed file which exists for all cgroups.

    This is a simple interface to trigger memory reclaim in the
    target cgroup.

    This file accepts a single key, the number of bytes to reclaim.
    No nested keys are currently supported.

    Example::

      echo "1G" > memory.reclaim

    The interface can be later extended with nested keys to
    configure the reclaim behavior. For example, specify the
    type of memory to reclaim from (anon, file, ..).

    Please note that the kernel can over or under reclaim from
    the target cgroup. If fewer bytes are reclaimed than the
    specified amount, -EAGAIN is returned.

    Please note that the proactive reclaim (triggered by this
    interface) is not meant to indicate memory pressure on the
    memory cgroup. Therefore socket memory balancing triggered by
    the memory reclaim normally is not exercised in this case.
    This means that the networking layer will not adapt based on
    reclaim induced by memory.reclaim.

  memory.peak
    A read-only single value file which exists on non-root
    cgroups.

    The max memory usage recorded for the cgroup and its
    descendants since the creation of the cgroup.

  memory.oom.group
    A read-write single value file which exists on non-root
    cgroups. The default value is "0".

    Determines whether the cgroup should be treated as
    an indivisible workload by the OOM killer. If set,
    all tasks belonging to the cgroup or to its descendants
    (if the memory cgroup is not a leaf cgroup) are killed
    together or not at all. This can be used to avoid
    partial kills to guarantee workload integrity.

    Tasks with the OOM protection (oom_score_adj set to -1000)
    are treated as an exception and are never killed.

    If the OOM killer is invoked in a cgroup, it's not going
    to kill any tasks outside of this cgroup, regardless of
    the memory.oom.group values of ancestor cgroups.

  memory.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    Note that all fields in this file are hierarchical and the
    file modified event can be generated due to an event down the
    hierarchy. For the local events at the cgroup level see
    memory.events.local.

    low
      The number of times the cgroup is reclaimed due to
      high memory pressure even though its usage is under
      the low boundary. This usually indicates that the low
      boundary is over-committed.

    high
      The number of times processes of the cgroup are
      throttled and routed to perform direct memory reclaim
      because the high memory boundary was exceeded. For a
      cgroup whose memory usage is capped by the high limit
      rather than global memory pressure, this event's
      occurrences are expected.

    max
      The number of times the cgroup's memory usage was
      about to go over the max boundary. If direct reclaim
      fails to bring it down, the cgroup goes to OOM state.

    oom
      The number of times the cgroup's memory usage reached
      the limit and allocation was about to fail.

      This event is not raised if the OOM killer is not
      considered as an option, e.g. for failed high-order
      allocations or if the caller asked to not retry attempts.

    oom_kill
      The number of processes belonging to this cgroup
      killed by any kind of OOM killer.

    oom_group_kill
      The number of times a group OOM has occurred.

  memory.events.local
    Similar to memory.events but the fields in the file are local
    to the cgroup i.e. not hierarchical. The file modified event
    generated on this file reflects only the local events.

  memory.stat
    A read-only flat-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    on the state and past events of the memory management system.

    All memory amounts are in bytes.

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    If an entry has no per-node counter (and hence does not show up
    in memory.numa_stat), it is tagged with 'npn' (non-per-node) to
    indicate that it will not show in memory.numa_stat.

    anon
      Amount of memory used in anonymous mappings such as
      brk(), sbrk(), and mmap(MAP_ANONYMOUS)

    file
      Amount of memory used to cache filesystem data,
      including tmpfs and shared memory.

    kernel (npn)
      Amount of total kernel memory, including
      (kernel_stack, pagetables, percpu, vmalloc, slab) in
      addition to other kernel memory use cases.

    kernel_stack
      Amount of memory allocated to kernel stacks.

    pagetables
      Amount of memory allocated for page tables.

    sec_pagetables
      Amount of memory allocated for secondary page tables,
      this currently includes KVM mmu allocations on x86
      and arm64.

    percpu (npn)
      Amount of memory used for storing per-cpu kernel
      data structures.

    sock (npn)
      Amount of memory used in network transmission buffers

    vmalloc (npn)
      Amount of memory used for vmap backed memory.

    shmem
      Amount of cached filesystem data that is swap-backed,
      such as tmpfs, shm segments, shared anonymous mmap()s

    zswap
      Amount of memory consumed by the zswap compression backend.

    zswapped
      Amount of application memory swapped out to zswap.

    file_mapped
      Amount of cached filesystem data mapped with mmap()

    file_dirty
      Amount of cached filesystem data that was modified but
      not yet written back to disk

    file_writeback
      Amount of cached filesystem data that was modified and
      is currently being written back to disk

    swapcached
      Amount of swap cached in memory. The swapcache is accounted
      against both memory and swap usage.

    anon_thp
      Amount of memory used in anonymous mappings backed by
      transparent hugepages

    file_thp
      Amount of cached filesystem data backed by transparent
      hugepages

    shmem_thp
      Amount of shm, tmpfs, shared anonymous mmap()s backed by
      transparent hugepages

    inactive_anon, active_anon, inactive_file, active_file, unevictable
      Amount of memory, swap-backed and filesystem-backed,
      on the internal memory management lists used by the
      page reclaim algorithm.

      As these represent internal list state (eg. shmem pages are
      on anon memory management lists), inactive_foo + active_foo
      may not be equal to the value for the foo counter, since the
      foo counter is type-based, not list-based.

    slab_reclaimable
      Part of "slab" that might be reclaimed, such as
      dentries and inodes.

    slab_unreclaimable
      Part of "slab" that cannot be reclaimed on memory
      pressure.

    slab (npn)
      Amount of memory used for storing in-kernel data
      structures.

    workingset_refault_anon
      Number of refaults of previously evicted anonymous pages.

    workingset_refault_file
      Number of refaults of previously evicted file pages.

    workingset_activate_anon
      Number of refaulted anonymous pages that were immediately
      activated.

    workingset_activate_file
      Number of refaulted file pages that were immediately
      activated.

    workingset_restore_anon
      Number of restored anonymous pages which have been detected
      as an active workingset before they got reclaimed.

    workingset_restore_file
      Number of restored file pages which have been detected as an
      active workingset before they got reclaimed.

    workingset_nodereclaim
      Number of times a shadow node has been reclaimed

    pgscan (npn)
      Amount of scanned pages (in an inactive LRU list)

    pgsteal (npn)
      Amount of reclaimed pages

    pgscan_kswapd (npn)
      Amount of scanned pages by kswapd (in an inactive LRU list)

    pgscan_direct (npn)
      Amount of scanned pages directly (in an inactive LRU list)

    pgscan_khugepaged (npn)
      Amount of scanned pages by khugepaged (in an inactive LRU
      list)

    pgsteal_kswapd (npn)
      Amount of reclaimed pages by kswapd

    pgsteal_direct (npn)
      Amount of reclaimed pages directly

    pgsteal_khugepaged (npn)
      Amount of reclaimed pages by khugepaged

    pgfault (npn)
      Total number of page faults incurred

    pgmajfault (npn)
      Number of major page faults incurred

    pgrefill (npn)
      Amount of scanned pages (in an active LRU list)

    pgactivate (npn)
      Amount of pages moved to the active LRU list

    pgdeactivate (npn)
      Amount of pages moved to the inactive LRU list

    pglazyfree (npn)
      Amount of pages postponed to be freed under memory pressure

    pglazyfreed (npn)
      Amount of reclaimed lazyfree pages

    thp_fault_alloc (npn)
      Number of transparent hugepages which were allocated to
      satisfy a page fault. This counter is not present when
      CONFIG_TRANSPARENT_HUGEPAGE is not set.

    thp_collapse_alloc (npn)
      Number of transparent hugepages which were allocated to allow
      collapsing an existing range of pages. This counter is not
      present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.numa_stat
    A read-only nested-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    per node on the state of the memory management system.

    This is useful for providing visibility into the NUMA locality
    information within a memcg since the pages are allowed to be
    allocated from any physical node. One of the use cases is
    evaluating application performance by combining this information
    with the application's CPU allocation.

    All memory amounts are in bytes.

    The output format of memory.numa_stat is::

      type N0=<bytes in node 0> N1=<bytes in node 1> ...

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    For the semantics of each entry, refer to memory.stat.

  memory.swap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of swap currently being used by the cgroup
    and its descendants.

  memory.swap.high
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Swap usage throttle limit. If a cgroup's swap usage exceeds
    this limit, all its further allocations will be throttled to
    allow userspace to implement custom out-of-memory procedures.

    This limit marks a point of no return for the cgroup. It is NOT
    designed to manage the amount of swapping a workload does
    during regular operation. Compare to memory.swap.max, which
    prohibits swapping past a set amount, but lets the cgroup
    continue unimpeded as long as other memory can be reclaimed.

    Healthy workloads are not expected to reach this limit.

  memory.swap.peak
    A read-only single value file which exists on non-root
    cgroups.

    The max swap usage recorded for the cgroup and its
    descendants since the creation of the cgroup.

  memory.swap.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Swap usage hard limit. If a cgroup's swap usage reaches this
    limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    high
      The number of times the cgroup's swap usage was over
      the high threshold.

    max
      The number of times the cgroup's swap usage was about
      to go over the max boundary and swap allocation
      failed.

    fail
      The number of times swap allocation failed either
      because of running out of swap system-wide or max
      limit.

    When reduced under the current usage, the existing swap
    entries are reclaimed gradually and the swap usage may stay
    higher than the limit for an extended period of time. This
    reduces the impact on the workload and memory management.
1629 memory.zswap.current
1630 A read-only single value file which exists on non-root
1633 The total amount of memory consumed by the zswap compression
1637 A read-write single value file which exists on non-root
1638 cgroups. The default is "max".
1640 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1641 limit, it will refuse to take any more stores before existing
1642 entries fault back in or are written out to disk.
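For instance, the zswap pool could be limited to 512 MiB as in this
sketch (cgroup path illustrative)::

  # echo 512M > /sys/fs/cgroup/workload/memory.zswap.max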
memory.pressure
A read-only nested-keyed file.
1647 Shows pressure stall information for memory. See
1648 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1654 "memory.high" is the main mechanism to control memory usage.
1655 Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
1657 usage is a viable strategy.
1659 Because breach of the high limit doesn't trigger the OOM killer but
1660 throttles the offending cgroup, a management agent has ample
1661 opportunities to monitor and take appropriate actions such as granting
1662 more memory or terminating the workload.
1664 Determining whether a cgroup has enough memory is not trivial as
1665 memory usage doesn't indicate whether the workload can benefit from
1666 more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also operate
just as well with a small amount of memory. A measure of memory
1669 pressure - how much the workload is being impacted due to lack of
1670 memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.
Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
1679 charged to the cgroup until the area is released. Migrating a process
1680 to a different cgroup doesn't move the memory usages that it
1681 instantiated while in the previous cgroup to the new cgroup.
1683 A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is indeterminate; however,
1685 over time, the memory area is likely to end up in a cgroup which has
1686 enough memory allowance to avoid high reclaim pressure.
1688 If a cgroup sweeps a considerable amount of memory which is expected
1689 to be accessed repeatedly by other cgroups, it may make sense to use
1690 POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1691 belonging to the affected files to ensure correct memory ownership.
1697 The "io" controller regulates the distribution of IO resources. This
1698 controller implements both weight based and absolute bandwidth or IOPS
1699 limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.
IO Interface Files
~~~~~~~~~~~~~~~~~~

io.stat
A read-only nested-keyed file.
1710 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1711 The following nested keys are defined.
====== =====================
rbytes Bytes read
wbytes Bytes written
1716 rios Number of read IOs
1717 wios Number of write IOs
1718 dbytes Bytes discarded
1719 dios Number of discard IOs
1720 ====== =====================
1722 An example read output follows::
1724 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1725 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
io.cost.qos
A read-write nested-keyed file which exists only on the root
cgroup.
1731 This file configures the Quality of Service of the IO cost
1732 model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1733 currently implements "io.weight" proportional control. Lines
1734 are keyed by $MAJ:$MIN device numbers and not ordered. The
1735 line for a given device is populated on the first write for
1736 the device on "io.cost.qos" or "io.cost.model". The following
1737 nested keys are defined.
1739 ====== =====================================
1740 enable Weight-based control enable
1741 ctrl "auto" or "user"
1742 rpct Read latency percentile [0, 100]
1743 rlat Read latency threshold
1744 wpct Write latency percentile [0, 100]
1745 wlat Write latency threshold
1746 min Minimum scaling percentage [1, 10000]
1747 max Maximum scaling percentage [1, 10000]
1748 ====== =====================================
1750 The controller is disabled by default and can be enabled by
1751 setting "enable" to 1. "rpct" and "wpct" parameters default
1752 to zero and the controller uses internal device saturation
1753 state to adjust the overall IO rate between "min" and "max".
1755 When a better control quality is needed, latency QoS
1756 parameters can be configured. For example::
8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1760 shows that on sdb, the controller is enabled, will consider
1761 the device saturated if the 95th percentile of read completion
1762 latencies is above 75ms or write 150ms, and adjust the overall
1763 IO issue rate between 50% and 150% accordingly.
1765 The lower the saturation point, the better the latency QoS at
1766 the cost of aggregate bandwidth. The narrower the allowed
1767 adjustment range between "min" and "max", the more conformant
1768 to the cost model the IO behavior. Note that the IO issue
1769 base rate may be far off from 100% and setting "min" and "max"
1770 blindly can lead to a significant loss of device capacity or
1771 control quality. "min" and "max" are useful for regulating
devices which show wide temporary behavior changes - e.g. an
1773 ssd which accepts writes at the line speed for a while and
1774 then completely stalls for multiple seconds.
1776 When "ctrl" is "auto", the parameters are controlled by the
1777 kernel and may change automatically. Setting "ctrl" to "user"
1778 or setting any of the percentile and latency parameters puts
1779 it into "user" mode and disables the automatic changes. The
1780 automatic mode can be restored by setting "ctrl" to "auto".
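As a minimal sketch, assuming sdb is 8:16, the controller could be
enabled with all other parameters left in automatic mode::

  # echo "8:16 enable=1" > /sys/fs/cgroup/io.cost.qos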
io.cost.model
A read-write nested-keyed file which exists only on the root
cgroup.
1786 This file configures the cost model of the IO cost model based
1787 controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1788 implements "io.weight" proportional control. Lines are keyed
1789 by $MAJ:$MIN device numbers and not ordered. The line for a
1790 given device is populated on the first write for the device on
1791 "io.cost.qos" or "io.cost.model". The following nested keys
1794 ===== ================================
1795 ctrl "auto" or "user"
1796 model The cost model in use - "linear"
1797 ===== ================================
1799 When "ctrl" is "auto", the kernel may change all parameters
1800 dynamically. When "ctrl" is set to "user" or any other
parameters are written to, "ctrl" becomes "user" and the
1802 automatic changes are disabled.
1804 When "model" is "linear", the following model parameters are
1807 ============= ========================================
1808 [r|w]bps The maximum sequential IO throughput
1809 [r|w]seqiops The maximum 4k sequential IOs per second
1810 [r|w]randiops The maximum 4k random IOs per second
1811 ============= ========================================
1813 From the above, the builtin linear model determines the base
1814 costs of a sequential and random IO and the cost coefficient
1815 for the IO size. While simple, this model can cover most
1816 common device classes acceptably.
The IO cost model isn't expected to be accurate in an absolute
sense and is scaled to the device behavior dynamically.
1821 If needed, tools/cgroup/iocost_coef_gen.py can be used to
1822 generate device-specific coefficients.
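For example, a sketch that installs coefficients for 8:0, e.g. as
produced by the script above (all numbers here are illustrative only)::

  # echo "8:0 ctrl=user model=linear rbps=2706339840 rseqiops=89698 rrandiops=110036 wbps=1063126016 wseqiops=135560 wrandiops=130734" > /sys/fs/cgroup/io.cost.model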
io.weight
A read-write flat-keyed file which exists on non-root cgroups.
1826 The default is "default 100".
1828 The first line is the default weight applied to devices
1829 without specific override. The rest are overrides keyed by
1830 $MAJ:$MIN device numbers and not ordered. The weights are in
the range [1, 10000] and specify the relative amount of IO time
the cgroup can use in relation to its siblings.
1834 The default weight can be updated by writing either "default
1835 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
1836 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
An example read output follows::

  default 100
  8:16 200
  8:0 50
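Weights might be adjusted as in this sketch (device numbers
illustrative)::

  # echo 200 > io.weight
  # echo "8:16 50" > io.weight
  # echo "8:16 default" > io.weight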
io.max
A read-write nested-keyed file which exists on non-root
cgroups.
1848 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
device numbers and not ordered. The following nested keys are
defined.
1852 ===== ==================================
1853 rbps Max read bytes per second
1854 wbps Max write bytes per second
1855 riops Max read IO operations per second
1856 wiops Max write IO operations per second
1857 ===== ==================================
1859 When writing, any number of nested key-value pairs can be
1860 specified in any order. "max" can be specified as the value
1861 to remove a specific limit. If the same key is specified
1862 multiple times, the outcome is undefined.
1864 BPS and IOPS are measured in each IO direction and IOs are
1865 delayed if limit is reached. Temporary bursts are allowed.
1867 Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1869 echo "8:16 rbps=2097152 wiops=120" > io.max
1871 Reading returns the following::
1873 8:16 rbps=2097152 wbps=max riops=max wiops=120
1875 Write IOPS limit can be removed by writing the following::
1877 echo "8:16 wiops=max" > io.max
1879 Reading now returns the following::
1881 8:16 rbps=2097152 wbps=max riops=max wiops=max
io.pressure
A read-only nested-keyed file.
1886 Shows pressure stall information for IO. See
1887 :ref:`Documentation/accounting/psi.rst <psi>` for details.
Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
1894 written asynchronously to the backing filesystem by the writeback
1895 mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.
1899 The io controller, in conjunction with the memory controller,
1900 implements control of page cache writeback IOs. The memory controller
1901 defines the memory domain that dirty memory ratio is calculated and
1902 maintained for and the io controller defines the io domain which
1903 writes out dirty pages for the memory domain. Both system-wide and
1904 per-cgroup dirty memory states are examined and the more restrictive
1905 of the two is enforced.
1907 cgroup writeback requires explicit support from the underlying
1908 filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
1909 btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
1910 attributed to the root cgroup.
There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of writeback, an
1915 inode is assigned to a cgroup and all IO requests to write dirty pages
1916 from the inode are attributed to that cgroup.
1918 As cgroup ownership for memory is tracked per page, there can be pages
1919 which are associated with different cgroups than the one the inode is
1920 associated with. These are called foreign pages. The writeback
1921 constantly keeps track of foreign pages and, if a particular foreign
1922 cgroup becomes the majority over a certain period of time, switches
1923 the ownership of the inode to that cgroup.
1925 While this model is enough for most use cases where a given inode is
1926 mostly dirtied by a single cgroup even when the main writing cgroup
1927 changes over time, use cases where multiple cgroups write to a single
1928 inode simultaneously are not supported well. In such circumstances, a
1929 significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.
1936 The sysctl knobs which affect writeback behavior are applied to cgroup
1937 writeback as follows.
1939 vm.dirty_background_ratio, vm.dirty_ratio
1940 These ratios apply the same to cgroup writeback with the
1941 amount of available memory capped by limits imposed by the
1942 memory controller and system-wide clean memory.
1944 vm.dirty_background_bytes, vm.dirty_bytes
1945 For cgroup writeback, this is calculated into ratio against
1946 total available memory and applied the same way as
1947 vm.dirty[_background]_ratio.
IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide a group
1954 with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a lower latency target than the
victimized group.
1958 The limits are only applied at the peer level in the hierarchy. This means that
1959 in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other. Group G will influence nobody::

                        [root]
                /          |            \
              [A]         [B]          [C]
             /  \          |
           [D]  [F]       [G]
1969 So the ideal way to configure this is to set io.latency in groups A, B, and C.
1970 Generally you do not want to set a value lower than the latency your device
1971 supports. Experiment to find the value that works best for your workload.
1972 Start at higher than the expected latency for your device and watch the
1973 avg_lat value in io.stat for your workload group to get an idea of the
1974 latency you see during normal operation. Use the avg_lat value as a basis for
1975 your real setting, setting at 10-15% higher than the value in io.stat.
1977 How IO Latency Throttling Works
1978 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1980 io.latency is work conserving; so as long as everybody is meeting their latency
1981 target the controller doesn't do anything. Once a group starts missing its
1982 target it begins throttling any peer group that has a higher target than itself.
1983 This throttling takes 2 forms:
- Queue depth throttling. This is the number of outstanding IOs a group is
1986 allowed to have. We will clamp down relatively quickly, starting at no limit
1987 and going all the way down to 1 IO at a time.
1989 - Artificial delay induction. There are certain types of IO that cannot be
1990 throttled without possibly adversely affecting higher priority groups. This
1991 includes swapping and metadata IO. These types of IO are allowed to occur
1992 normally, however they are "charged" to the originating group. If the
1993 originating group is being throttled you will see the use_delay and delay
fields in io.stat increase. The delay value is the number of microseconds
being added to any process that runs in this group. Because this number can
1996 grow quite large if there is a lot of swapping or metadata IO occurring we
1997 limit the individual delay events to 1 second at a time.
1999 Once the victimized group starts meeting its latency target again it will start
2000 unthrottling any peer groups that were throttled previously. If the victimized
2001 group simply stops doing IO the global counter will unthrottle appropriately.
2003 IO Latency Interface Files
2004 ~~~~~~~~~~~~~~~~~~~~~~~~~~
io.latency
This takes a similar format as the other controllers.
2009 "MAJOR:MINOR target=<target time in microseconds>"
io.stat
If the controller is enabled you will see extra stats in io.stat in
2013 addition to the normal ones.
depth
This is the current queue depth for the group.
avg_lat
This is an exponential moving average with a decay rate of 1/exp
2020 bound by the sampling interval. The decay rate interval can be
2021 calculated by multiplying the win value in io.stat by the
2022 corresponding number of samples based on the win value.
win
The sampling window size in milliseconds. This is the minimum
2026 duration of time between evaluation events. Windows only elapse
2027 with IO activity. Idle periods extend the most recent window.
IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup policy,
namely the blkio.prio.class attribute. The following values are accepted for
this attribute:
no-change
Do not modify the I/O priority class.
promote-to-rt
For requests that have a non-RT I/O priority class, change it into RT.
2041 Also change the priority level of these requests to 4. Do not modify
2042 the I/O priority of requests that have priority class RT.
restrict-to-be
For requests that do not have an I/O priority class or that have I/O
2046 priority class RT, change it into BE. Also change the priority level
2047 of these requests to 0. Do not modify the I/O priority class of
2048 requests that have priority class IDLE.
idle
Change the I/O priority class of all requests into IDLE, the lowest
I/O priority class.
none-to-rt
Deprecated. Just an alias for promote-to-rt.
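As an illustrative sketch, promoting all IO issued from a cgroup to the
RT class::

  # echo promote-to-rt > io.prio.class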
2057 The following numerical values are associated with the I/O priority policies:
+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+
2067 The numerical value that corresponds to each I/O priority class is as follows:
2069 +-------------------------------+---+
2070 | IOPRIO_CLASS_NONE | 0 |
2071 +-------------------------------+---+
2072 | IOPRIO_CLASS_RT (real-time) | 1 |
2073 +-------------------------------+---+
2074 | IOPRIO_CLASS_BE (best effort) | 2 |
2075 +-------------------------------+---+
2076 | IOPRIO_CLASS_IDLE | 3 |
2077 +-------------------------------+---+
2079 The algorithm to set the I/O priority class for a request is as follows:
- If the I/O priority class policy is promote-to-rt, change the request I/O
  priority class to IOPRIO_CLASS_RT and change the request I/O priority
  level to 4.
- If the I/O priority class policy is not promote-to-rt, translate the I/O
  priority class policy into a number, then change the request I/O priority
  class into the maximum of the I/O priority class policy number and the
  numerical I/O priority class.
PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.
2096 The number of tasks in a cgroup can be exhausted in ways which other
2097 controllers cannot prevent, thus warranting its own controller. For
2098 example, a fork bomb is likely to exhaust the number of tasks before
2099 hitting memory restrictions.
Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.

PID Interface Files
~~~~~~~~~~~~~~~~~~~

pids.max
2109 A read-write single value file which exists on non-root
2110 cgroups. The default is "max".
2112 Hard limit of number of processes.
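For example, a sketch capping a delegated cgroup at 100 tasks::

  # echo 100 > pids.max
  # cat pids.max
  100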
pids.current
A read-only single value file which exists on all cgroups.
The number of processes currently in the cgroup and its
descendants.
2120 Organisational operations are not blocked by cgroup policies, so it is
2121 possible to have pids.current > pids.max. This can be done by either
2122 setting the limit to be smaller than pids.current, or attaching enough
2123 processes to the cgroup such that pids.current is larger than
2124 pids.max. However, it is not possible to violate a cgroup PID policy
2125 through fork() or clone(). These will return -EAGAIN if the creation
2126 of a new process would cause a cgroup policy to be violated.
2132 The "cpuset" controller provides a mechanism for constraining
2133 the CPU and memory node placement of tasks to only the resources
2134 specified in the cpuset interface files in a task's current cgroup.
2135 This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
2137 memory placement to reduce cross-node memory access and contention
2138 can improve overall system performance.
2140 The "cpuset" controller is hierarchical. That means the controller
2141 cannot use CPUs or memory nodes not allowed in its parent.
2144 Cpuset Interface Files
2145 ~~~~~~~~~~~~~~~~~~~~~~
cpuset.cpus
A read-write multiple values file which exists on non-root
2149 cpuset-enabled cgroups.
2151 It lists the requested CPUs to be used by tasks within this
2152 cgroup. The actual list of CPUs to be granted, however, is
2153 subjected to constraints imposed by its parent and can differ
2154 from the requested CPUs.
The CPU numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.cpus
  0-4,6,8-10
2162 An empty value indicates that the cgroup is using the same
2163 setting as the nearest cgroup ancestor with a non-empty
2164 "cpuset.cpus" or all the available CPUs if none is found.
2166 The value of "cpuset.cpus" stays constant until the next update
2167 and won't be affected by any CPU hotplug events.
2169 cpuset.cpus.effective
2170 A read-only multiple values file which exists on all
2171 cpuset-enabled cgroups.
2173 It lists the onlined CPUs that are actually granted to this
2174 cgroup by its parent. These CPUs are allowed to be used by
2175 tasks within the current cgroup.
2177 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2178 all the CPUs from the parent cgroup that can be available to
2179 be used by this cgroup. Otherwise, it should be a subset of
2180 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2181 can be granted. In this case, it will be treated just like an
2182 empty "cpuset.cpus".
2184 Its value will be affected by CPU hotplug events.
cpuset.mems
A read-write multiple values file which exists on non-root
2188 cpuset-enabled cgroups.
2190 It lists the requested memory nodes to be used by tasks within
2191 this cgroup. The actual list of memory nodes granted, however,
2192 is subjected to constraints imposed by its parent and can differ
2193 from the requested memory nodes.
The memory node numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.mems
  0-1,3
2201 An empty value indicates that the cgroup is using the same
2202 setting as the nearest cgroup ancestor with a non-empty
2203 "cpuset.mems" or all the available memory nodes if none
2206 The value of "cpuset.mems" stays constant until the next update
2207 and won't be affected by any memory nodes hotplug events.
2209 Setting a non-empty value to "cpuset.mems" causes memory of
2210 tasks within the cgroup to be migrated to the designated nodes if
2211 they are currently using memory outside of the designated nodes.
2213 There is a cost for this memory migration. The migration
2214 may not be complete and some memory pages may be left behind.
2215 So it is recommended that "cpuset.mems" should be set properly
2216 before spawning new tasks into the cpuset. Even if there is
a need to change "cpuset.mems" with active tasks, it shouldn't
be done frequently.
2220 cpuset.mems.effective
2221 A read-only multiple values file which exists on all
2222 cpuset-enabled cgroups.
2224 It lists the onlined memory nodes that are actually granted to
2225 this cgroup by its parent. These memory nodes are allowed to
2226 be used by tasks within the current cgroup.
2228 If "cpuset.mems" is empty, it shows all the memory nodes from the
2229 parent cgroup that will be available to be used by this cgroup.
2230 Otherwise, it should be a subset of "cpuset.mems" unless none of
2231 the memory nodes listed in "cpuset.mems" can be granted. In this
2232 case, it will be treated just like an empty "cpuset.mems".
2234 Its value will be affected by memory nodes hotplug events.
2236 cpuset.cpus.exclusive
2237 A read-write multiple values file which exists on non-root
2238 cpuset-enabled cgroups.
2240 It lists all the exclusive CPUs that are allowed to be used
2241 to create a new cpuset partition. Its value is not used
2242 unless the cgroup becomes a valid partition root. See the
2243 "cpuset.cpus.partition" section below for a description of what
2244 a cpuset partition is.
2246 When the cgroup becomes a partition root, the actual exclusive
2247 CPUs that are allocated to that partition are listed in
2248 "cpuset.cpus.exclusive.effective" which may be different
2249 from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
2250 has previously been set, "cpuset.cpus.exclusive.effective"
2251 is always a subset of it.
2253 Users can manually set it to a value that is different from
2254 "cpuset.cpus". The only constraint in setting it is that the
2255 list of CPUs must be exclusive with respect to its sibling.
2257 For a parent cgroup, any one of its exclusive CPUs can only
2258 be distributed to at most one of its child cgroups. Having an
2259 exclusive CPU appearing in two or more of its child cgroups is
2260 not allowed (the exclusivity rule). A value that violates the
2261 exclusivity rule will be rejected with a write error.
2263 The root cgroup is a partition root and all its available CPUs
2264 are in its exclusive CPU set.
2266 cpuset.cpus.exclusive.effective
2267 A read-only multiple values file which exists on all non-root
2268 cpuset-enabled cgroups.
2270 This file shows the effective set of exclusive CPUs that
2271 can be used to create a partition root. The content of this
2272 file will always be a subset of "cpuset.cpus" and its parent's
2273 "cpuset.cpus.exclusive.effective" if its parent is not the root
2274 cgroup. It will also be a subset of "cpuset.cpus.exclusive"
2275 if it is set. If "cpuset.cpus.exclusive" is not set, it is
treated to have an implicit value of "cpuset.cpus" in the
formation of a local partition.
2279 cpuset.cpus.partition
2280 A read-write single value file which exists on non-root
2281 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2282 and is not delegatable.
2284 It accepts only the following input values when written to.
2286 ========== =====================================
2287 "member" Non-root member of a partition
2288 "root" Partition root
2289 "isolated" Partition root without load balancing
2290 ========== =====================================
2292 A cpuset partition is a collection of cpuset-enabled cgroups with
2293 a partition root at the top of the hierarchy and its descendants
2294 except those that are separate partition roots themselves and
2295 their descendants. A partition has exclusive access to the
2296 set of exclusive CPUs allocated to it. Other cgroups outside
2297 of that partition cannot use any CPUs in that set.
2299 There are two types of partitions - local and remote. A local
2300 partition is one whose parent cgroup is also a valid partition
2301 root. A remote partition is one whose parent cgroup is not a
2302 valid partition root itself. Writing to "cpuset.cpus.exclusive"
2303 is optional for the creation of a local partition as its
2304 "cpuset.cpus.exclusive" file will assume an implicit value that
2305 is the same as "cpuset.cpus" if it is not set. Writing the
2306 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2307 before the target partition root is mandatory for the creation
2308 of a remote partition.
2310 Currently, a remote partition cannot be created under a local
2311 partition. All the ancestors of a remote partition root except
2312 the root cgroup cannot be a partition root.
2314 The root cgroup is always a partition root and its state cannot
2315 be changed. All other non-root cgroups start out as "member".
2317 When set to "root", the current cgroup is the root of a new
2318 partition or scheduling domain. The set of exclusive CPUs is
2319 determined by the value of its "cpuset.cpus.exclusive.effective".
2321 When set to "isolated", the CPUs in that partition will
2322 be in an isolated state without any load balancing from the
2323 scheduler. Tasks placed in such a partition with multiple
2324 CPUs should be carefully distributed and bound to each of the
2325 individual CPUs for optimal performance.
2327 A partition root ("root" or "isolated") can be in one of the
2328 two possible states - valid or invalid. An invalid partition
2329 root is in a degraded state where some state information may
2330 be retained, but behaves more like a "member".
2332 All possible state transitions among "member", "root" and
2333 "isolated" are allowed.
2335 On read, the "cpuset.cpus.partition" file can show the following
2338 ============================= =====================================
2339 "member" Non-root member of a partition
2340 "root" Partition root
2341 "isolated" Partition root without load balancing
2342 "root invalid (<reason>)" Invalid partition root
2343 "isolated invalid (<reason>)" Invalid isolated partition root
2344 ============================= =====================================
2346 In the case of an invalid partition root, a descriptive string on
2347 why the partition is invalid is included within parentheses.
For a local partition root to be valid, the following conditions
must be met:
2352 1) The parent cgroup is a valid partition root.
2353 2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2354 though it may contain offline CPUs.
2355 3) The "cpuset.cpus.effective" cannot be empty unless there is
2356 no task associated with this partition.
2358 For a remote partition root to be valid, all the above conditions
2359 except the first one must be met.
2361 External events like hotplug or changes to "cpuset.cpus" or
2362 "cpuset.cpus.exclusive" can cause a valid partition root to
2363 become invalid and vice versa. Note that a task cannot be
2364 moved to a cgroup with empty "cpuset.cpus.effective".
2366 A valid non-root parent partition may distribute out all its CPUs
to its child local partitions when there is no task associated
with it.
2370 Care must be taken to change a valid partition root to "member"
2371 as all its child local partitions, if present, will become
2372 invalid causing disruption to tasks running in those child
2373 partitions. These inactivated partitions could be recovered if
2374 their parent is switched back to a partition root with a proper
2375 value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2377 Poll and inotify events are triggered whenever the state of
2378 "cpuset.cpus.partition" changes. That includes changes caused
2379 by write to "cpuset.cpus.partition", cpu hotplug or other
2380 changes that modify the validity status of the partition.
2381 This will allow user space agents to monitor unexpected changes
2382 to "cpuset.cpus.partition" without the need to do continuous
2385 A user can pre-configure certain CPUs to an isolated state
2386 with load balancing disabled at boot time with the "isolcpus"
2387 kernel boot command line option. If those CPUs are to be put
2388 into a partition, they have to be used in an isolated partition.
Device controller
-----------------

Device controller manages access to device files. It includes both
2395 creation of new device files (using mknod), and access to the
2396 existing device files.
2398 Cgroup v2 device controller has no interface files and is implemented
2399 on top of cgroup BPF. To control access to device files, a user may
2400 create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2401 them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2402 device file, corresponding BPF programs will be executed, and depending
2403 on the return value the attempt will succeed or fail with -EPERM.
2405 A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2406 bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2407 access type (mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise it
succeeds.
2411 An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2412 tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
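As an illustrative sketch (the object file and pin paths are
hypothetical), such a program could be loaded and attached with
bpftool::

  # bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/dev_cgroup type cgroup/dev
  # bpftool cgroup attach /sys/fs/cgroup/mygrp device pinned /sys/fs/bpf/dev_cgroup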
2418 The "rdma" controller regulates the distribution and accounting of
2421 RDMA Interface Files
2422 ~~~~~~~~~~~~~~~~~~~~
rdma.max
A read-write nested-keyed file that exists for all the cgroups
2426 except root that describes current configured resource limit
2427 for a RDMA/IB device.
2429 Lines are keyed by device name and are not ordered.
2430 Each line contains space separated resource name and its configured
2431 limit that can be distributed.
2433 The following nested keys are defined.
2435 ========== =============================
2436 hca_handle Maximum number of HCA Handles
2437 hca_object Maximum number of HCA Objects
2438 ========== =============================
2440 An example for mlx4 and ocrdma device follows::
2442 mlx4_0 hca_handle=2 hca_object=2000
2443 ocrdma1 hca_handle=3 hca_object=max
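Limits could be configured with writes in the same nested-key format,
as in this sketch (device name and values illustrative)::

  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max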
rdma.current
A read-only file that describes current resource usage.
It exists for all the cgroups except root.
2449 An example for mlx4 and ocrdma device follows::
2451 mlx4_0 hca_handle=1 hca_object=20
2452 ocrdma1 hca_handle=1 hca_object=23
HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control group
and enforces the controller limit during page fault.
2460 HugeTLB Interface Files
2461 ~~~~~~~~~~~~~~~~~~~~~~~
2463 hugetlb.<hugepagesize>.current
Show current usage for "hugepagesize" hugetlb. It exists for all
the cgroups except root.
2467 hugetlb.<hugepagesize>.max
2468 Set/show the hard limit of "hugepagesize" hugetlb usage.
The default value is "max". It exists for all the cgroups except root.
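For instance, a sketch capping 2MB huge page usage at 1 GiB (the
available <hugepagesize> values depend on the platform)::

  # echo 1G > hugetlb.2MB.max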
2471 hugetlb.<hugepagesize>.events
2472 A read-only flat-keyed file which exists on non-root cgroups.
max
The number of allocation failures due to the HugeTLB limit
2477 hugetlb.<hugepagesize>.events.local
2478 Similar to hugetlb.<hugepagesize>.events but the fields in the file
2479 are local to the cgroup i.e. not hierarchical. The file modified event
2480 generated on this file reflects only the local events.
2482 hugetlb.<hugepagesize>.numa_stat
2483 Similar to memory.numa_stat, it shows the numa information of the
2484 hugetlb pages of <hugepagesize> in this cgroup. Only active in
2485 use hugetlb pages are included. The per-node values are in bytes.
Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
2491 mechanism for the scalar resources which cannot be abstracted like the other
cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC config
option.
2495 A resource can be added to the controller via enum misc_res_type{} in the
2496 include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
2497 in the kernel/cgroup/misc.c file. Provider of the resource must set its
2498 capacity prior to using the resource by calling misc_cg_set_capacity().
2500 Once a capacity is set then the resource usage can be updated using charge and
2501 uncharge APIs. All of the APIs to interact with misc controller are in
2502 include/linux/misc_cgroup.h.
2504 Misc Interface Files
2505 ~~~~~~~~~~~~~~~~~~~~
The miscellaneous controller provides 3 interface files. If two misc
resources (res_a and res_b) are registered then:
misc.capacity
A read-only flat-keyed file shown only in the root cgroup. It shows
miscellaneous scalar resources available on the platform along with
their quantities::

  $ cat misc.capacity
  res_a 50
  res_b 10
misc.current
A read-only flat-keyed file shown in all cgroups. It shows
the current usage of the resources in the cgroup and its children::

  $ cat misc.current
  res_a 3
  res_b 0
misc.max
A read-write flat-keyed file shown in the non root cgroups. Allowed
maximum usage of the resources in the cgroup and its children::

  $ cat misc.max
  res_a max
  res_b 4
2534 Limit can be set by::
2536 # echo res_a 1 > misc.max
2538 Limit can be set to max by::
2540 # echo res_a max > misc.max
Limits can be set higher than the capacity value in the misc.capacity
file.
misc.events
A read-only flat-keyed file which exists on non-root cgroups. The
2547 following entries are defined. Unless specified otherwise, a value
2548 change in this file generates a file modified event. All fields in
2549 this file are hierarchical.
max
The number of times the cgroup's resource usage was
2553 about to go over the max boundary.
2555 Migration and Ownership
2556 ~~~~~~~~~~~~~~~~~~~~~~~
2558 A miscellaneous scalar resource is charged to the cgroup in which it is used
2559 first, and stays charged to that cgroup until that resource is freed. Migrating
2560 a process to a different cgroup does not move the charge to the destination
2561 cgroup where the process has moved.
Others
------

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
2570 automatically enabled on the v2 hierarchy so that perf events can
2571 always be filtered by cgroup v2 path. The controller can still be
2572 moved to a legacy hierarchy after v2 hierarchy is populated.
2575 Non-normative information
2576 -------------------------
2578 This section contains information that isn't considered to be a part of
2579 the stable kernel API and so is subject to change.
2582 CPU controller root cgroup process behaviour
2583 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2585 When distributing CPU cycles in the root cgroup each thread in this
2586 cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup. This child cgroup weight is dependent on its thread nice
level.
2590 For details of this mapping see sched_prio_to_weight array in
2591 kernel/sched/core.c file (values from this array should be scaled
2592 appropriately so the neutral - nice 0 - value is 100 instead of 1024).
2595 IO controller root cgroup process behaviour
2596 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2598 Root cgroup processes are hosted in an implicit leaf child node.
2599 When distributing IO resources this implicit child node is taken into
2600 account as if it was a normal child cgroup of the root cgroup with a
2601 weight value of 200.
Namespaces
==========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
2611 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
2612 flag can be used with clone(2) and unshare(2) to create a new cgroup
2613 namespace. The process running inside the cgroup namespace will have
2614 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
2615 cgroupns root is the cgroup of the process at the time of creation of
2616 the cgroup namespace.
2618 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2619 complete path of the cgroup of a process. In a container setup where
2620 a set of cgroups and namespaces are intended to isolate processes the
2621 "/proc/$PID/cgroup" file may leak potential system level information
2622 to the isolated processes. For example::
2624 # cat /proc/self/cgroup
2625 0::/batchjobs/container_id1
2627 The path '/batchjobs/container_id1' can be considered as system-data
2628 and undesirable to expose to the isolated processes. cgroup namespace
2629 can be used to restrict visibility of this path. For example, before
2630 creating a cgroup namespace, one would see::
2632 # ls -l /proc/self/ns/cgroup
2633 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2634 # cat /proc/self/cgroup
2635 0::/batchjobs/container_id1
2637 After unsharing a new namespace, the view changes::
2639 # ls -l /proc/self/ns/cgroup
2640 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
# cat /proc/self/cgroup
0::/
2644 When some thread from a multi-threaded process unshares its cgroup
2645 namespace, the new cgroupns gets applied to the entire process (all
2646 the threads). This is natural for the v2 hierarchy; however, for the
2647 legacy hierarchies, this may be unexpected.
2649 A cgroup namespace is alive as long as there are processes inside or
2650 mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain, though.

The Root and Views
------------------
2658 The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2659 process calling unshare(2) is running. For example, if a process in
2660 /batchjobs/container_id1 cgroup calls unshare, cgroup
2661 /batchjobs/container_id1 becomes the cgroupns root. For the
2662 init_cgroup_ns, this is the real root ('/') cgroup.
2664 The cgroupns root cgroup does not change even if the namespace creator
2665 process later moves to a different cgroup::
# ~/unshare -c # unshare cgroupns in some cgroup
# cat /proc/self/cgroup
0::/
# mkdir sub_cgrp_1
# echo 0 > sub_cgrp_1/cgroup.procs
# cat /proc/self/cgroup
0::/
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
2677 Processes running inside the cgroup namespace will be able to see
2678 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
From the initial cgroup namespace, the real cgroup path will be
visible::
2690 $ cat /proc/7353/cgroup
2691 0::/batchjobs/container_id1/sub_cgrp_1
2693 From a sibling cgroup namespace (that is, a namespace rooted at a
2694 different cgroup), the cgroup path relative to its own cgroup
2695 namespace root will be shown. For instance, if PID 7353's cgroup
2696 namespace root is at '/batchjobs/container_id2', then it will see::
2698 # cat /proc/7353/cgroup
2699 0::/../container_id2/sub_cgrp_1
2701 Note that the relative path always starts with '/' to indicate that
it's relative to the cgroup namespace root of the caller.
2705 Migration and setns(2)
2706 ----------------------
2708 Processes inside a cgroup namespace can move into and out of the
2709 namespace root if they have proper access to external cgroups. For
2710 example, from inside a namespace with cgroupns root at
2711 /batchjobs/container_id1, and assuming that the global hierarchy is
2712 still accessible inside cgroupns::
# cat /proc/7353/cgroup
0::/sub_cgrp_1
2716 # echo 7353 > batchjobs/container_id2/cgroup.procs
2717 # cat /proc/7353/cgroup
2718 0::/../container_id2
2720 Note that this kind of setup is not encouraged. A task inside cgroup
2721 namespace should only be exposed to its own cgroupns hierarchy.
2723 setns(2) to another cgroup namespace is allowed when:
2725 (a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns
2729 No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching
2731 process under the target cgroup namespace root.
2734 Interaction with Other Namespaces
2735 ---------------------------------
2737 Namespace specific cgroup hierarchy can be mounted by a process
2738 running inside a non-init cgroup namespace::
2740 # mount -t cgroup2 none $MOUNT_POINT
2742 This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root. The process needs CAP_SYS_ADMIN against its user and
mount namespaces.
2746 The virtualization of /proc/self/cgroup file combined with restricting
2747 the view of cgroup hierarchy by namespace-private cgroupfs mount
2748 provides a properly isolated cgroup view inside the container.
2751 Information on Kernel Programming
2752 =================================
2754 This section contains kernel programming information in the areas
2755 where interacting with cgroup is necessary. cgroup core and
2756 controllers are not covered.
2759 Filesystem Support for Writeback
2760 --------------------------------
2762 A filesystem can support cgroup writeback by updating
2763 address_space_operations->writepage[s]() to annotate bio's using the
2764 following two functions.
2766 wbc_init_bio(@wbc, @bio)
2767 Should be called for each bio carrying writeback data and
2768 associates the bio with the inode's owner cgroup and the
2769 corresponding request queue. This must be called after
a queue (device) has been associated with the bio and
before submission.
2773 wbc_account_cgroup_owner(@wbc, @page, @bytes)
2774 Should be called for each data segment being written out.
2775 While this function doesn't care exactly when it's called
2776 during the writeback session, it's the easiest and most
2777 natural to call it as data segments are added to a bio.
2779 With writeback bio's annotated, cgroup support can be enabled per
2780 super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
2781 selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.
2785 wbc_init_bio() binds the specified bio to its cgroup. Depending on
2786 the configuration, the bio may be executed at a lower priority and if
2787 the writeback session is holding shared resources, e.g. a journal
2788 entry, may lead to priority inversion. There is no one easy solution
2789 for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
2794 Deprecated v1 Core Features
2795 ===========================
2797 - Multiple hierarchies including named ones are not supported.
- None of the v1 mount options is supported.
2801 - The "tasks" file is removed and "cgroup.procs" is not sorted.
2803 - "cgroup.clone_children" is removed.
2805 - /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file
2806 at the root instead.
2809 Issues with v1 and Rationales for v2
2810 ====================================
2812 Multiple Hierarchies
2813 --------------------
2815 cgroup v1 allowed an arbitrary number of hierarchies and each
2816 hierarchy could host any number of controllers. While this seemed to
2817 provide a high level of flexibility, it wasn't useful in practice.
2819 For example, as there is only one instance of each controller, utility
2820 type controllers such as freezer which can be useful in all
2821 hierarchies could only be used in one. The issue is exacerbated by
2822 the fact that controllers couldn't be moved to another hierarchy once
2823 hierarchies were populated. Another issue was that all controllers
2824 bound to a hierarchy were forced to have exactly the same view of the
2825 hierarchy. It wasn't possible to vary the granularity depending on
2826 the specific controller.
2828 In practice, these issues heavily limited which controllers could be
2829 put on the same hierarchy and most configurations resorted to putting
2830 each controller on its own hierarchy. Only closely related ones, such
2831 as the cpu and cpuacct controllers, made sense to be put on the same
2832 hierarchy. This often meant that userland ended up managing multiple
2833 similar hierarchies repeating the same steps on each hierarchy
2834 whenever a hierarchy management operation was necessary.
2836 Furthermore, support for multiple hierarchies came at a steep cost.
2837 It greatly complicated cgroup core implementation but more importantly
2838 the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
2841 There was no limit on how many hierarchies there might be, which meant
2842 that a thread's cgroup membership couldn't be described in finite
2843 length. The key might contain any number of entries and was unlimited
2844 in length, which made it highly awkward to manipulate and led to
2845 addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.
2849 Also, as a controller couldn't have any expectation regarding the
2850 topologies of hierarchies other controllers might be on, each
2851 controller had to assume that all other controllers were attached to
2852 completely orthogonal hierarchies. This made it impossible, or at
2853 least very cumbersome, for controllers to cooperate with each other.
2855 In most use cases, putting controllers on hierarchies which are
2856 completely orthogonal to each other isn't necessary. What usually is
2857 called for is the ability to have differing levels of granularity
2858 depending on the specific controller. In other words, hierarchy may
2859 be collapsed from leaf towards root when viewed from specific
2860 controllers. For example, a given configuration might not care about
2861 how memory is distributed beyond a certain level while still wanting
2862 to control how CPU cycles are distributed.
Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
2869 This didn't make sense for some controllers and those controllers
2870 ended up implementing different ways to ignore such situations but
2871 much more importantly it blurred the line between API exposed to
2872 individual applications and system management interface.
2874 Generally, in-process knowledge is available only to the process
2875 itself; thus, unlike service-level organization of processes,
2876 categorizing threads of a process requires active participation from
2877 the application which owns the target process.
2879 cgroup v1 had an ambiguously defined delegation model which got abused
2880 in combination with thread granularity. cgroups were delegated to
2881 individual applications so that they can create and manage their own
2882 sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.
2886 First of all, cgroup has a fundamentally inadequate interface to be
2887 exposed this way. For a process to access its own knobs, it has to
2888 extract the path on the target hierarchy from /proc/self/cgroup,
2889 construct the path by appending the name of the knob to the path, open
2890 and then read and/or write to it. This is not only extremely clunky
2891 and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can guarantee
2893 that the process would actually be operating on its own sub-hierarchy.
2895 cgroup controllers implemented a number of knobs which would never be
2896 accepted as public APIs because they were just adding control knobs to
2897 system-management pseudo filesystem. cgroup ended up with interface
2898 knobs which were not properly abstracted or refined and directly
2899 revealed kernel internal details. These knobs got exposed to
2900 individual applications through the ill-defined delegation mechanism
2901 effectively abusing cgroup as a shortcut to implementing public APIs
2902 without going through the required scrutiny.
2904 This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.
2909 Competition Between Inner Nodes and Threads
2910 -------------------------------------------
2912 cgroup v1 allowed threads to be in any cgroups which created an
2913 interesting problem where threads belonging to a parent cgroup and its
2914 children cgroups competed for resources. This was nasty as two
2915 different types of entities competed and there was no obvious way to
2916 settle it. Different controllers did different things.
2918 The cpu controller considered threads and cgroups as equivalents and
2919 mapped nice levels to cgroup weights. This worked for some cases but
2920 fell flat when children wanted to be allocated specific ratios of CPU
2921 cycles and the number of internal threads fluctuated - the ratios
2922 constantly changed as the number of competing entities fluctuated.
2923 There also were other issues. The mapping from nice level to weight
2924 wasn't obvious or universal, and there were various other knobs which
2925 simply weren't available for threads.
2927 The io controller implicitly created a hidden leaf node for each
2928 cgroup to host the threads. The hidden leaf had its own copies of all
2929 the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
2931 always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.
2935 The memory controller didn't have a way to control what happened
2936 between internal tasks and child cgroups and the behavior was not
2937 clearly defined. There were attempts to add ad-hoc behaviors and
2938 knobs to tailor the behavior to specific workloads which would have
2939 led to problems extremely difficult to resolve in the long term.
2941 Multiple controllers struggled with internal tasks and came up with
2942 different ways to deal with it; unfortunately, all the approaches were
2943 severely flawed and, furthermore, the widely different behaviors
2944 made cgroup as a whole highly inconsistent.
This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.
2950 Other Interface Issues
2951 ----------------------
2953 cgroup v1 grew without oversight and developed a large number of
2954 idiosyncrasies and inconsistencies. One issue on the cgroup core side
2955 was how an empty cgroup was notified - a userland helper binary was
2956 forked and executed for each event. The event delivery wasn't
2957 recursive or delegatable. The limitations of the mechanism also led
to in-kernel event delivery filtering mechanism further complicating
the interface.
2961 Controller interfaces were problematic too. An extreme example is
2962 controllers completely ignoring hierarchical organization and treating
2963 all cgroups as if they were all located directly under the root
2964 cgroup. Some controllers exposed a large amount of inconsistent
2965 implementation details to userland.
2967 There also was no consistency across controllers. When a new cgroup
2968 was created, some controllers defaulted to not imposing extra
2969 restrictions while others disallowed any resource usage until
2970 explicitly configured. Configuration knobs for the same type of
2971 control used widely differing naming schemes and formats. Statistics
2972 and information knobs were named arbitrarily and used different
2973 formats and units even in the same controller.
2975 cgroup v2 establishes common conventions where appropriate and updates
2976 controllers so that they expose minimal and consistent interfaces.
2979 Controller Issues and Remedies
2980 ------------------------------
Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
2986 that is per default unset. As a result, the set of cgroups that
2987 global reclaim prefers is opt-in, rather than opt-out. The costs for
2988 optimizing these mostly negative lookups are so high that the
2989 implementation, despite its enormous size, does not even provide the
2990 basic desirable behavior. First off, the soft limit has no
2991 hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are located
2993 in the hierarchy. This makes subtree delegation impossible. Second,
2994 the soft limit reclaim pass is so aggressive that it not just
2995 introduces high allocation latencies into the system, but also impacts
2996 system performance due to overreclaim, to the point where the feature
2997 becomes self-defeating.
2999 The memory.low boundary on the other hand is a top-down allocated
3000 reserve. A cgroup enjoys reclaim protection when it's within its
3001 effective low, which makes delegation of subtrees possible. It also
3002 enjoys having reclaim pressure proportional to its overage when
3003 above its effective low.
3005 The original high boundary, the hard limit, is defined as a strict
3006 limit that can not budge, even if the OOM killer has to be called.
3007 But this generally goes against the goal of making the most out of the
3008 available memory. The memory consumption of workloads varies during
3009 runtime, and that requires users to overcommit. But doing that with a
3010 strict upper limit requires either a fairly accurate prediction of the
3011 working set size or adding slack to the limit. Since working set size
3012 estimation is hard and error prone, and getting it wrong results in
3013 OOM kills, most users tend to err on the side of a looser limit and
3014 end up wasting precious resources.
3016 The memory.high boundary on the other hand can be set much more
3017 conservatively. When hit, it throttles allocations by forcing them
3018 into direct reclaim to work off the excess, but it never invokes the
3019 OOM killer. As a result, a high boundary that is chosen too
3020 aggressively will not terminate the processes, but instead it will
3021 lead to gradual performance degradation. The user can monitor this
3022 and make corrections until the minimal memory footprint that still
3023 gives acceptable performance is found.
3025 In extreme cases, with many concurrent allocations and a complete
3026 breakdown of reclaim progress within the group, the high boundary can
3027 be exceeded. But even then it's mostly better to satisfy the
3028 allocation from the slack available in other groups or the rest of the
3029 system than killing the group. Otherwise, memory.max is there to
3030 limit this type of spillover and ultimately contain buggy or even
3031 malicious applications.
3033 Setting the original memory.limit_in_bytes below the current usage was
3034 subject to a race condition, where concurrent charges could cause the
3035 limit setting to fail. memory.max on the other hand will first set the
3036 limit to prevent new charges, and then reclaim and OOM kill until the
3037 new limit is met - or the task writing to memory.max is killed.
3039 The combined memory+swap accounting and limiting is replaced by real
3040 control over swap space.
3042 The main argument for a combined memory+swap facility in the original
3043 cgroup design was that global or parental pressure would always be
3044 able to swap all anonymous memory of a child group, regardless of the
3045 child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin can not assume full
3048 swappability when overcommitting untrusted jobs.
3050 For trusted jobs, on the other hand, a combined counter is not an
3051 intuitive userspace interface, and it flies in the face of the idea
3052 that cgroup controllers should account and limit specific physical
3053 resources. Swap space is a resource like all others in the system,
3054 and that's why unified hierarchy allows distributing it separately.