8 :Author: Tejun Heo <tj@kernel.org>
10 This is the authoritative documentation on the design, interface and
11 conventions of cgroup v2. It describes all userland-visible aspects
12 of cgroup including core and specific controller behaviors. All
13 future changes must be reflected in this document. Documentation for
14 v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
23 2-2. Organizing Processes and Threads
26 2-3. [Un]populated Notification
27 2-4. Controlling Controllers
28 2-4-1. Enabling and Disabling
29 2-4-2. Top-down Constraint
30 2-4-3. No Internal Process Constraint
32 2-5-1. Model of Delegation
33 2-5-2. Delegation Containment
35 2-6-1. Organize Once and Control
36 2-6-2. Avoid Name Collisions
37 3. Resource Distribution Models
45 4-3. Core Interface Files
48 5-1-1. CPU Interface Files
50 5-2-1. Memory Interface Files
51 5-2-2. Usage Guidelines
52 5-2-3. Memory Ownership
54 5-3-1. IO Interface Files
57 5-3-3-1. How IO Latency Throttling Works
58 5-3-3-2. IO Latency Interface Files
61 5-4-1. PID Interface Files
5-5-1. Cpuset Interface Files
5-7-1. RDMA Interface Files
5-8-1. HugeTLB Interface Files
5-9-1. Miscellaneous cgroup Interface Files
5-9-2. Migration and Ownership
74 5-N. Non-normative information
75 5-N-1. CPU controller root cgroup process behaviour
76 5-N-2. IO controller root cgroup process behaviour
79 6-2. The Root and Views
80 6-3. Migration and setns(2)
81 6-4. Interaction with Other Namespaces
82 P. Information on Kernel Programming
83 P-1. Filesystem Support for Writeback
84 D. Deprecated v1 Core Features
85 R. Issues with v1 and Rationales for v2
86 R-1. Multiple Hierarchies
87 R-2. Thread Granularity
88 R-3. Competition Between Inner Nodes and Threads
89 R-4. Other Interface Issues
90 R-5. Controller Issues and Remedies
100 "cgroup" stands for "control group" and is never capitalized. The
101 singular form is used to designate the whole feature and also as a
102 qualifier as in "cgroup controllers". When explicitly referring to
103 multiple individual control groups, the plural form "cgroups" is used.
cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.
113 cgroup is largely composed of two parts - the core and controllers.
114 cgroup core is primarily responsible for hierarchically organizing
115 processes. A cgroup controller is usually responsible for
116 distributing a specific type of system resource along the hierarchy
117 although there are utility controllers which serve purposes other than
118 resource distribution.
120 cgroups form a tree structure and every process in the system belongs
121 to one and only one cgroup. All threads of a process belong to the
122 same cgroup. On creation, all processes are put in the cgroup that
123 the parent process belongs to at the time. A process can be migrated
124 to another cgroup. Migration of a process doesn't affect already
125 existing descendant processes.
127 Following certain structural constraints, controllers may be enabled or
128 disabled selectively on a cgroup. All controller behaviors are
129 hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
131 sub-hierarchy of the cgroup. When a controller is enabled on a nested
132 cgroup, it always restricts the resource distribution further. The
133 restrictions set closer to the root in the hierarchy can not be
134 overridden from further away.
Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
144 hierarchy can be mounted with the following mount command::
146 # mount -t cgroup2 none $MOUNT_POINT
148 cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
149 controllers which support v2 and are not bound to a v1 hierarchy are
150 automatically bound to the v2 hierarchy and show up at the root.
151 Controllers which are not in active use in the v2 hierarchy can be
152 bound to other hierarchies. This allows mixing v2 hierarchy with the
153 legacy v1 multiple hierarchies in a fully backward compatible way.
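For example, a system can keep one controller on a legacy v1
hierarchy while cgroup2 picks up the rest. A minimal sketch with
illustrative mount points::

# mount -t cgroup -o cpu none /sys/fs/cgroup/cpu
# mount -t cgroup2 none /sys/fs/cgroup/unified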
155 A controller can be moved across hierarchies only after the controller
156 is no longer referenced in its current hierarchy. Because per-cgroup
157 controller states are destroyed asynchronously and controllers may
158 have lingering references, a controller may not show up immediately on
159 the v2 hierarchy after the final umount of the previous hierarchy.
160 Similarly, a controller should be fully disabled to be moved out of
161 the unified hierarchy and it may take some time for the disabled
162 controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.
166 While useful for development and manual configurations, moving
167 controllers dynamically between the v2 and other hierarchies is
168 strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
170 controllers after system boot.
172 During transition to v2, system management software might still
173 automount the v1 cgroup filesystem and so hijack all controllers
174 during boot, before manual intervention is possible. To make testing
175 and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
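For example, either of the following could be appended to the kernel
command line; the exact boot loader configuration varies by
distribution::

cgroup_no_v1=memory,io
cgroup_no_v1=all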
178 cgroup v2 currently supports the following mount options.
nsdelegate
Consider cgroup namespaces as delegation boundaries. This
182 option is system wide and can only be set on mount or modified
183 through remount from the init namespace. The mount option is
184 ignored on non-init namespace mounts. Please refer to the
185 Delegation section for details.
favordynmods
Reduce the latencies of dynamic cgroup modifications such as
189 task migrations and controller on/offs at the cost of making
190 hot path operations such as forks and exits more expensive.
191 The static usage pattern of creating a cgroup, enabling
192 controllers, and then seeding it with CLONE_INTO_CGROUP is
193 not affected by this option.
memory_localevents
Only populate memory.events with data for the current cgroup,
and not any subtrees. This is legacy behaviour; the default
behaviour without this option is to include subtree counts.
199 This option is system wide and can only be set on mount or
200 modified through remount from the init namespace. The mount
201 option is ignored on non-init namespace mounts.
memory_recursiveprot
Recursively apply memory.min and memory.low protection to
205 entire subtrees, without requiring explicit downward
206 propagation into leaf cgroups. This allows protecting entire
207 subtrees from one another, while retaining free competition
208 within those subtrees. This should have been the default
209 behavior but is a mount-option to avoid regressing setups
210 relying on the original semantics (e.g. specifying bogusly
211 high 'bypass' protection values at higher tree levels).
213 memory_hugetlb_accounting
214 Count HugeTLB memory usage towards the cgroup's overall
215 memory usage for the memory controller (for the purpose of
statistics reporting and memory protection). This is a new
217 behavior that could regress existing setups, so it must be
218 explicitly opted in with this mount option.
220 A few caveats to keep in mind:
222 * There is no HugeTLB pool management involved in the memory
223 controller. The pre-allocated pool does not belong to anyone.
224 Specifically, when a new HugeTLB folio is allocated to
225 the pool, it is not accounted for from the perspective of the
226 memory controller. It is only charged to a cgroup when it is
actually used (e.g. at page fault time). Host memory
228 overcommit management has to consider this when configuring
229 hard limits. In general, HugeTLB pool management should be
230 done via other mechanisms (such as the HugeTLB controller).
231 * Failure to charge a HugeTLB folio to the memory controller
232 results in SIGBUS. This could happen even if the HugeTLB pool
233 still has pages available (but the cgroup limit is hit and
234 reclaim attempt fails).
235 * Charging HugeTLB memory towards the memory controller affects
236 memory protection and reclaim dynamics. Any userspace tuning
(e.g. of the low and min limits) needs to take this into account.
238 * HugeTLB pages utilized while this option is not selected
239 will not be tracked by the memory controller (even if cgroup
240 v2 is remounted later on).
243 Organizing Processes and Threads
244 --------------------------------
249 Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

# mkdir $CGROUP_NAME
254 A given cgroup may have multiple child cgroups forming a tree
255 structure. Each cgroup has a read-writable interface file
256 "cgroup.procs". When read, it lists the PIDs of all processes which
257 belong to the cgroup one-per-line. The PIDs are not ordered and the
258 same PID may show up more than once if the process got moved to
259 another cgroup and then back or the PID got recycled while reading.
261 A process can be migrated into a cgroup by writing its PID to the
262 target cgroup's "cgroup.procs" file. Only one process can be migrated
263 on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
267 When a process forks a child process, the new process is born into the
268 cgroup that the forking process belongs to at the time of the
269 operation. After exit, a process stays associated with the cgroup
270 that it belonged to at the time of exit until it's reaped; however, a
271 zombie process does not appear in "cgroup.procs" and thus can't be
272 moved to another cgroup.
274 A cgroup which doesn't have any children or live processes can be
275 destroyed by removing the directory. Note that a cgroup which doesn't
276 have any children and is associated only with zombie processes is
considered empty and can be removed::

# rmdir $CGROUP_NAME
281 "/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
282 cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::
286 # cat /proc/842/cgroup
288 0::/test-cgroup/test-cgroup-nested
290 If the process becomes a zombie and the cgroup it was associated with
291 is removed subsequently, " (deleted)" is appended to the path::
293 # cat /proc/842/cgroup
295 0::/test-cgroup/test-cgroup-nested (deleted)
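Putting the above together, a minimal session might create a cgroup,
migrate the current shell into it and verify the membership; paths
and names are illustrative::

# mkdir /sys/fs/cgroup/test-cgroup
# echo $$ > /sys/fs/cgroup/test-cgroup/cgroup.procs
# cat /proc/self/cgroup
0::/test-cgroup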
301 cgroup v2 supports thread granularity for a subset of controllers to
302 support use cases requiring hierarchical resource distribution across
303 the threads of a group of processes. By default, all threads of a
304 process belong to the same cgroup, which also serves as the resource
305 domain to host resource consumptions which are not specific to a
306 process or thread. The thread mode allows threads to be spread across
307 a subtree while still maintaining the common resource domain for them.
309 Controllers which support thread mode are called threaded controllers.
310 The ones which don't are called domain controllers.
312 Marking a cgroup threaded makes it join the resource domain of its
313 parent as a threaded cgroup. The parent may be another threaded
314 cgroup whose resource domain is further up in the hierarchy. The root
315 of a threaded subtree, that is, the nearest ancestor which is not
316 threaded, is called threaded domain or thread root interchangeably and
317 serves as the resource domain for the entire subtree.
319 Inside a threaded subtree, threads of a process can be put in
320 different cgroups and are not subject to the no internal process
321 constraint - threaded controllers can be enabled on non-leaf cgroups
322 whether they have threads in them or not.
324 As the threaded domain cgroup hosts all the domain resource
325 consumptions of the subtree, it is considered to have internal
326 resource consumptions whether there are processes in it or not and
327 can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it can
329 serve both as a threaded domain and a parent to domain cgroups.
331 The current operation mode or type of the cgroup is shown in the
332 "cgroup.type" file which indicates whether the cgroup is a normal
333 domain, a domain which is serving as the domain of a threaded subtree,
334 or a threaded cgroup.
336 On creation, a cgroup is always a domain cgroup and can be made
337 threaded by writing "threaded" to the "cgroup.type" file. The
operation is one way::
340 # echo threaded > cgroup.type
342 Once threaded, the cgroup can't be made a domain again. To enable the
343 thread mode, the following conditions must be met.
- As the cgroup will join the parent's resource domain, the parent
must either be a valid (threaded) domain or a threaded cgroup.
348 - When the parent is an unthreaded domain, it must not have any domain
349 controllers enabled or populated domain children. The root is
350 exempt from this requirement.
352 Topology-wise, a cgroup can be in an invalid state. Please consider
353 the following topology::
355 A (threaded domain) - B (threaded) - C (domain, just created)
357 C is created as a domain but isn't connected to a parent which can
358 host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)" in
360 these cases. Operations which fail due to invalid topology use
361 EOPNOTSUPP as the errno.
A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.
369 When read, "cgroup.threads" contains the list of the thread IDs of all
370 threads in the cgroup. Except that the operations are per-thread
371 instead of per-process, "cgroup.threads" has the same format and
372 behaves the same way as "cgroup.procs". While "cgroup.threads" can be
373 written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.
377 The threaded domain cgroup serves as the resource domain for the whole
378 subtree, and, while the threads can be scattered across the subtree,
379 all the processes are considered to be in the threaded domain cgroup.
380 "cgroup.procs" in a threaded domain cgroup contains the PIDs of all
381 processes in the subtree and is not readable in the subtree proper.
382 However, "cgroup.procs" can be written to from anywhere in the subtree
383 to migrate all threads of the matching process to the cgroup.
385 Only threaded controllers can be enabled in a threaded subtree. When
386 a threaded controller is enabled inside a threaded subtree, it only
387 accounts for and controls resource consumptions associated with the
388 threads in the cgroup and its descendants. All consumptions which
389 aren't tied to a specific thread belong to the threaded domain cgroup.
391 Because a threaded subtree is exempt from no internal process
392 constraint, a threaded controller must be able to handle competition
393 between threads in a non-leaf cgroup and its child cgroups. Each
394 threaded controller defines how such competitions are handled.
396 Currently, the following controllers are threaded and can be enabled
in a threaded cgroup::

- cpu
- cpuset
- perf_event
- pids
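A hypothetical setup of a threaded subtree, where $PID is a
multi-threaded process and $TID one of its threads, might look like
the following sketch::

# mkdir -p workers/highprio workers/lowprio
# echo threaded > workers/highprio/cgroup.type
# echo threaded > workers/lowprio/cgroup.type
# echo $PID > workers/cgroup.procs
# echo "+cpu" > workers/cgroup.subtree_control
# echo $TID > workers/highprio/cgroup.threads

Marking the first child threaded turns "workers" into a threaded
domain; as "cpu" is a threaded controller, it can then be enabled
even though the domain itself hosts processes.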
404 [Un]populated Notification
405 --------------------------
407 Each non-root cgroup has a "cgroup.events" file which contains
408 "populated" field indicating whether the cgroup's sub-hierarchy has
409 live processes in it. Its value is 0 if there is no live process in
410 the cgroup and its descendants; otherwise, 1. poll and [id]notify
411 events are triggered when the value changes. This can be used, for
412 example, to start a clean-up operation after all processes of a given
413 sub-hierarchy have exited. The populated state updates and
414 notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

A(4) - B(0) - C(1)
            \ D(0)
421 A, B and C's "populated" fields would be 1 while D's 0. After the one
422 process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
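This allows waiting for a sub-hierarchy to empty without polling. A
sketch using inotifywait from the inotify-tools package (an
assumption; any inotify or poll client works)::

# inotifywait -e modify /sys/fs/cgroup/test-cgroup/cgroup.events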
427 Controlling Controllers
428 -----------------------
430 Enabling and Disabling
431 ~~~~~~~~~~~~~~~~~~~~~~
433 Each cgroup has a "cgroup.controllers" file which lists all
434 controllers available for the cgroup to enable::
# cat cgroup.controllers
cpu io memory
439 No controller is enabled by default. Controllers can be enabled and
440 disabled by writing to the "cgroup.subtree_control" file::
442 # echo "+cpu +memory -io" > cgroup.subtree_control
444 Only controllers which are listed in "cgroup.controllers" can be
445 enabled. When multiple operations are specified as above, either they
all succeed or they all fail. If multiple operations on the same controller
447 are specified, the last one is effective.
449 Enabling a controller in a cgroup indicates that the distribution of
450 the target resource across its immediate children will be controlled.
451 Consider the following sub-hierarchy. The enabled controllers are
452 listed in parentheses::
A(cpu,memory) - B(memory) - C()
                          \ D()
457 As A has "cpu" and "memory" enabled, A will control the distribution
458 of CPU cycles and memory to its children, in this case, B. As B has
459 "memory" enabled but not "CPU", C and D will compete freely on CPU
460 cycles but their division of memory available to B will be controlled.
462 As a controller regulates the distribution of the target resource to
463 the cgroup's children, enabling it creates the controller's interface
464 files in the child cgroups. In the above example, enabling "cpu" on B
465 would create the "cpu." prefixed controller interface files in C and
466 D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
476 a resource only if the resource has been distributed to it from the
477 parent. This means that all non-root "cgroup.subtree_control" files
478 can only contain controllers which are enabled in the parent's
479 "cgroup.subtree_control" file. A controller can be enabled only if
480 the parent has the controller enabled and a controller can't be
481 disabled if one or more children have it enabled.
484 No Internal Process Constraint
485 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
487 Non-root cgroups can distribute domain resources to their children
488 only when they don't have any processes of their own. In other words,
489 only domain cgroups which don't contain any processes can have domain
490 controllers enabled in their "cgroup.subtree_control" files.
492 This guarantees that, when a domain controller is looking at the part
493 of the hierarchy which has it enabled, processes are always only on
494 the leaves. This rules out situations where child cgroups compete
495 against internal processes of the parent.
497 The root cgroup is exempt from this restriction. Root contains
498 processes and anonymous resource consumption which can't be associated
499 with any other cgroups and requires special treatment from most
500 controllers. How resource consumption in the root cgroup is governed
501 is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).
505 Note that the restriction doesn't get in the way if there is no
506 enabled controller in the cgroup's "cgroup.subtree_control". This is
507 important as otherwise it wouldn't be possible to create children of a
508 populated cgroup. To control resource distribution of a cgroup, the
509 cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
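A simplified sketch of that pattern follows; a real manager would
also have to handle processes forked while the migration is in
flight::

# mkdir leaf
# for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
# echo "+memory" > cgroup.subtree_control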
Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
521 user by granting write access of the directory and its "cgroup.procs",
522 "cgroup.threads" and "cgroup.subtree_control" files to the user.
523 Second, if the "nsdelegate" mount option is set, automatically to a
524 cgroup namespace on namespace creation.
526 Because the resource control interface files in a given directory
527 control the distribution of the parent's resources, the delegatee
528 shouldn't be allowed to write to them. For the first method, this is
529 achieved by not granting access to these files. For the second, the
530 kernel rejects writes to all files other than "cgroup.procs" and
531 "cgroup.subtree_control" on a namespace root from inside the
534 The end results are equivalent for both delegation types. Once
535 delegated, the user can build sub-hierarchy under the directory,
536 organize processes inside it as it sees fit and further distribute the
537 resources it received from the parent. The limits and other settings
538 of all resource controllers are hierarchical and regardless of what
539 happens in the delegated sub-hierarchy, nothing can escape the
540 resource restrictions imposed by the parent.
542 Currently, cgroup doesn't impose any restrictions on the number of
543 cgroups in or nesting depth of a delegated sub-hierarchy; however,
544 this may be limited explicitly in the future.
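For the first method, granting write access usually comes down to
chown'ing the directory and the three files to the delegatee, e.g.
for a hypothetical user U0 and delegation point::

# chown U0 /sys/fs/cgroup/delegated
# chown U0 /sys/fs/cgroup/delegated/cgroup.procs
# chown U0 /sys/fs/cgroup/delegated/cgroup.threads
# chown U0 /sys/fs/cgroup/delegated/cgroup.subtree_control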
547 Delegation Containment
548 ~~~~~~~~~~~~~~~~~~~~~~
550 A delegated sub-hierarchy is contained in the sense that processes
551 can't be moved into or out of the sub-hierarchy by the delegatee.
553 For delegations to a less privileged user, this is achieved by
554 requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.
558 - The writer must have write access to the "cgroup.procs" file.
560 - The writer must have write access to the "cgroup.procs" file of the
561 common ancestor of the source and destination cgroups.
563 The above two constraints ensure that while a delegatee may migrate
564 processes around freely in the delegated sub-hierarchy it can't pull
565 in from or push out to outside the sub-hierarchy.
567 For an example, let's assume cgroups C0 and C1 have been delegated to
568 user U0 who created C00, C01 under C0 and C10 under C1 as follows and
569 all processes under C0 and C1 belong to U0::
~~~~~~~~~~~~~ - C0 - C00
~ cgroup    ~      \ C01
~ hierarchy ~
~~~~~~~~~~~~~ - C1 - C10
576 Let's also say U0 wants to write the PID of a process which is
577 currently in C10 into "C00/cgroup.procs". U0 has write access to the
578 file; however, the common ancestor of the source cgroup C10 and the
579 destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
581 will be denied with -EACCES.
583 For delegations to namespaces, containment is achieved by requiring
584 that both the source and destination cgroups are reachable from the
585 namespace of the process which is attempting the migration. If either
586 is not reachable, the migration is rejected with -ENOENT.
Guidelines
----------

Organize Once and Control
593 ~~~~~~~~~~~~~~~~~~~~~~~~~
595 Migrating a process across cgroups is a relatively expensive operation
596 and stateful resources such as memory are not moved together with the
597 process. This is an explicit design decision as there often exist
598 inherent trade-offs between migration and various hot paths in terms
599 of synchronization cost.
601 As such, migrating processes across cgroups frequently as a means to
602 apply different resource restrictions is discouraged. A workload
603 should be assigned to a cgroup according to the system's logical and
604 resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.
609 Avoid Name Collisions
610 ~~~~~~~~~~~~~~~~~~~~~
612 Interface files for a cgroup and its children cgroups occupy the same
613 directory and it is possible to create children cgroups which collide
614 with interface files.
616 All cgroup core interface files are prefixed with "cgroup." and each
617 controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lower case letters and
619 '_'s but never begins with an '_' so it can be used as the prefix
620 character for collision avoidance. Also, interface file names won't
621 start or end with terms which are often used in categorizing workloads
622 such as job, service, slice, unit or workload.
624 cgroup doesn't do anything to prevent name collisions and it's the
625 user's responsibility to avoid them.
628 Resource Distribution Models
629 ============================
631 cgroup controllers implement several resource distribution schemes
632 depending on the resource type and expected use cases. This section
633 describes major schemes in use along with their expected behaviors.
Weights
-------

A parent's resource is distributed by adding up the weights of all
640 active children and giving each the fraction matching the ratio of its
641 weight against the sum. As only children which can make use of the
642 resource at the moment participate in the distribution, this is
643 work-conserving. Due to the dynamic nature, this model is usually
644 used for stateless resources.
646 All weights are in the range [1, 10000] with the default at 100. This
647 allows symmetric multiplicative biases in both directions at fine
648 enough granularity while staying in the intuitive range.
650 As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
654 "cpu.weight" proportionally distributes CPU cycles to active children
655 and is an example of this type.
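For example, assuming "cpu" is enabled in the parent, the following
gives one child twice the weight of its sibling, i.e. roughly two
thirds of the contended CPU cycles while both are runnable; the
cgroup names are illustrative::

# echo 200 > fast/cpu.weight
# echo 100 > slow/cpu.weight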
.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
664 Limits can be over-committed - the sum of the limits of children can
665 exceed the amount of resource available to the parent.
Limits are in the range [0, max] and default to "max", which is a no-op.
669 As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
673 "io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
674 on an IO device and is an example of this type.
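As a sketch, the device numbers and values below are illustrative;
the line caps reads at 2MB/s and writes at 120 IOPS on a single
device::

# echo "8:16 rbps=2097152 wiops=120" > io.max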
.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
682 as long as the usages of all its ancestors are under their
683 protected levels. Protections can be hard guarantees or best effort
684 soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.
Protections are in the range [0, max] and default to 0, which is
no protection.
691 As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.
695 "memory.low" implements best-effort memory protection and is an
696 example of this type.
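For example, granting a workload cgroup best-effort protection is a
single write (the value is illustrative); sibling cgroups without
protection will be reclaimed from first::

# echo "512M" > workload/memory.low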
Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
703 resource. Allocations can't be over-committed - the sum of the
704 allocations of children can not exceed the amount of resource
705 available to the parent.
Allocations are in the range [0, max] and default to 0, which is no
resource.
710 As allocations can't be over-committed, some configuration
711 combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.
715 "cpu.rt.max" hard-allocates realtime slices and is an example of this
Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::
New-line separated values
(when only one value can be written at once)

  VAL0\n
  VAL1\n
  ...

Space separated values
(when read-only or multiple values can be written at once)

  VAL0 VAL1 ...\n

Flat keyed

  KEY0 VAL0\n
  KEY1 VAL1\n
  ...

Nested keyed

  KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...\n
  KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...\n
  ...
752 For a writable file, the format for writing should generally match
753 reading; however, controllers may allow omitting later fields or
754 implement restricted shortcuts for most common use cases.
756 For both flat and nested keyed files, only the values for a single key
757 can be written at a time. For nested keyed files, the sub key pairs
758 may be specified in any order and not all pairs have to be specified.
Conventions
-----------

- Settings for a single feature should be contained in a single file.
766 - The root cgroup should be exempt from resource control and thus
767 shouldn't have resource control interface files.
769 - The default time unit is microseconds. If a different unit is ever
770 used, an explicit unit suffix must be present.
772 - A parts-per quantity should use a percentage decimal with at least
two-digit fractional part - e.g. 13.40.
775 - If a controller implements weight based resource distribution, its
776 interface file should be named "weight" and have the range [1,
777 10000] with 100 as the default. The values are chosen to allow
778 enough and symmetric bias in both directions while keeping it
779 intuitive (the default is 100%).
781 - If a controller implements an absolute resource guarantee and/or
782 limit, the interface files should be named "min" and "max"
783 respectively. If a controller implements best effort resource
784 guarantee and/or limit, the interface files should be named "low"
785 and "high" respectively.
787 In the above four control files, the special token "max" should be
788 used to represent upward infinity for both reading and writing.
790 - If a setting has a configurable default value and keyed specific
791 overrides, the default entry should be keyed with "default" and
792 appear as the first entry in the file.
The default value can be updated by writing either "default $VAL" or
"$VAL".
797 When writing to update a specific override, "default" can be used as
798 the value to indicate removal of the override. Override entries
799 with "default" as the value must not appear when read.
801 For example, a setting which is keyed by major:minor device numbers
802 with integer values may look like the following::
# cat cgroup-example-interface-file
default 150
8:0 300
808 The default value can be updated by::
810 # echo 125 > cgroup-example-interface-file
814 # echo "default 125" > cgroup-example-interface-file
816 An override can be set by::
818 # echo "8:16 170" > cgroup-example-interface-file
822 # echo "8:0 default" > cgroup-example-interface-file
823 # cat cgroup-example-interface-file
827 - For events which are not very high frequency, an interface file
828 "events" should be created which lists event key value pairs.
829 Whenever a notifiable event happens, file modified event should be
830 generated on the file.
Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."
cgroup.type
A read-write single value file which exists on non-root
cgroups.
842 When read, it indicates the current type of the cgroup, which
843 can be one of the following values.
845 - "domain" : A normal valid domain cgroup.
847 - "domain threaded" : A threaded domain cgroup which is
848 serving as the root of a threaded subtree.
850 - "domain invalid" : A cgroup which is in an invalid state.
851 It can't be populated or have controllers enabled. It may
852 be allowed to become a threaded cgroup.
854 - "threaded" : A threaded cgroup which is a member of a
857 A cgroup can be turned into a threaded cgroup by writing
858 "threaded" to this file.
cgroup.procs
A read-write new-line separated values file which exists on
all cgroups.
864 When read, it lists the PIDs of all processes which belong to
865 the cgroup one-per-line. The PIDs are not ordered and the
866 same PID may show up more than once if the process got moved
to another cgroup and then back or the PID got recycled while
reading.
870 A PID can be written to migrate the process associated with
871 the PID to the cgroup. The writer should match all of the
872 following conditions.
874 - It must have write access to the "cgroup.procs" file.
876 - It must have write access to the "cgroup.procs" file of the
877 common ancestor of the source and destination cgroups.
879 When delegating a sub-hierarchy, write access to this file
880 should be granted along with the containing directory.
882 In a threaded cgroup, reading this file fails with EOPNOTSUPP
883 as all the processes belong to the thread root. Writing is
884 supported and moves every thread of the process to the cgroup.
cgroup.threads
A read-write new-line separated values file which exists on
all cgroups.
890 When read, it lists the TIDs of all threads which belong to
891 the cgroup one-per-line. The TIDs are not ordered and the
892 same TID may show up more than once if the thread got moved to
another cgroup and then back or the TID got recycled while
reading.
896 A TID can be written to migrate the thread associated with the
897 TID to the cgroup. The writer should match all of the
898 following conditions.
900 - It must have write access to the "cgroup.threads" file.
902 - The cgroup that the thread is currently in must be in the
903 same resource domain as the destination cgroup.
905 - It must have write access to the "cgroup.procs" file of the
906 common ancestor of the source and destination cgroups.
908 When delegating a sub-hierarchy, write access to this file
909 should be granted along with the containing directory.
cgroup.controllers
A read-only space separated values file which exists on all
cgroups.
915 It shows space separated list of all controllers available to
916 the cgroup. The controllers are not ordered.
918 cgroup.subtree_control
919 A read-write space separated values file which exists on all
920 cgroups. Starts out empty.
922 When read, it shows space separated list of the controllers
923 which are enabled to control resource distribution from the
924 cgroup to its children.
926 Space separated list of controllers prefixed with '+' or '-'
927 can be written to enable or disable controllers. A controller
928 name prefixed with '+' enables the controller and '-'
929 disables. If a controller appears more than once on the list,
930 the last one is effective. When multiple enable and disable
931 operations are specified, either all succeed or all fail.
cgroup.events
A read-only flat-keyed file which exists on non-root cgroups.
935 The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.

populated
1 if the cgroup or its descendants contain any live
processes; otherwise, 0.
frozen
1 if the cgroup is frozen; otherwise, 0.
945 cgroup.max.descendants
A read-write single value file. The default is "max".

Maximum allowed number of descendant cgroups.
949 If the actual number of descendants is equal or larger,
950 an attempt to create a new cgroup in the hierarchy will fail.
cgroup.max.depth
A read-write single value file. The default is "max".
955 Maximum allowed descent depth below the current cgroup.
956 If the actual descent depth is equal or larger,
957 an attempt to create a new child cgroup will fail.
cgroup.stat
A read-only flat-keyed file with the following entries:

nr_descendants
Total number of visible descendant cgroups.

nr_dying_descendants
Total number of dying descendant cgroups. A cgroup becomes
dying after being deleted by a user. The cgroup will remain
in dying state for some undefined time (which can depend
969 on system load) before being completely destroyed.
971 A process can't enter a dying cgroup under any circumstances,
and a dying cgroup can't revive.
974 A dying cgroup can consume system resources not exceeding
975 limits, which were active at the moment of cgroup deletion.
cgroup.freeze
A read-write single value file which exists on non-root cgroups.
979 Allowed values are "0" and "1". The default is "0".
981 Writing "1" to the file causes freezing of the cgroup and all
982 descendant cgroups. This means that all belonging processes will
be stopped and will not run until the cgroup is explicitly
984 unfrozen. Freezing of the cgroup may take some time; when this action
985 is completed, the "frozen" value in the cgroup.events control file
will be updated to "1" and the corresponding notification will be
issued.
989 A cgroup can be frozen either by its own settings, or by settings
990 of any ancestor cgroups. If any of ancestor cgroups is frozen, the
991 cgroup will remain frozen.
993 Processes in the frozen cgroup can be killed by a fatal signal.
994 They also can enter and leave a frozen cgroup: either by an explicit
995 move by a user, or if freezing of the cgroup races with fork().
996 If a process is moved to a frozen cgroup, it stops. If a process is
997 moved out of a frozen cgroup, it becomes running.
999 Frozen status of a cgroup doesn't affect any cgroup tree operations:
1000 it's possible to delete a frozen (and empty) cgroup, as well as
1001 create new sub-cgroups.
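A short freeze/thaw cycle might look like the following, with
"frozen" in "cgroup.events" flipping to 1 once freezing completes;
the cgroup name is illustrative::

# echo 1 > test-cgroup/cgroup.freeze
# cat test-cgroup/cgroup.events
populated 1
frozen 1
# echo 0 > test-cgroup/cgroup.freeze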
cgroup.kill
A write-only single value file which exists in non-root cgroups.
1005 The only allowed value is "1".
1007 Writing "1" to the file causes the cgroup and all descendant cgroups to
1008 be killed. This means that all processes located in the affected cgroup
1009 tree will be killed via SIGKILL.
1011 Killing a cgroup tree will deal with concurrent forks appropriately and
1012 is protected against migrations.
1014 In a threaded cgroup, writing this file fails with EOPNOTSUPP as
1015 killing cgroups is a process directed operation, i.e. it affects
1016 the whole thread-group.
cgroup.pressure
A read-write single value file. The allowed values are "0" and "1".
The default is "1".
1022 Writing "0" to the file will disable the cgroup PSI accounting.
1023 Writing "1" to the file will re-enable the cgroup PSI accounting.
This control attribute is not hierarchical, so disabling or enabling
PSI accounting in a cgroup does not affect PSI accounting in
descendants and doesn't require passing enablement down from the root.
1029 The reason this control attribute exists is that PSI accounts stalls for
1030 each cgroup separately and aggregates it at each level of the hierarchy.
1031 This may cause non-negligible overhead for some workloads when under
a deep level of the hierarchy, in which case this control attribute can
1033 be used to disable PSI accounting in the non-leaf cgroups.
irq.pressure
A read-write nested-keyed file.
1038 Shows pressure stall information for IRQ/SOFTIRQ. See
1039 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1049 The "cpu" controllers regulates distribution of CPU cycles. This
1050 controller implements weight and absolute bandwidth limit models for
1051 normal scheduling policy and absolute bandwidth allocation model for
1052 realtime scheduling policy.
1054 In all the above models, cycles distribution is defined only on a temporal
1055 base and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
1057 cpufreq governor about the minimum desired frequency which should always be
1058 provided by a CPU, as well as the maximum desired frequency, which should not
1059 be exceeded by a CPU.
1061 WARNING: cgroup2 doesn't yet support control of realtime processes and
1062 the cpu controller can only be enabled when all RT processes are in
1063 the root cgroup. Be aware that system management software may already
1064 have placed RT processes into nonroot cgroups during the system boot
1065 process, and these processes may need to be moved to the root cgroup
1066 before the cpu controller can be enabled.
CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.
cpu.stat
A read-only flat-keyed file.
This file exists whether the controller is enabled or not.

It always reports the following three stats:

- usage_usec
- user_usec
- system_usec

and the following five when the controller is enabled:

- nr_periods
- nr_throttled
- throttled_usec
- nr_bursts
- burst_usec
cpu.weight
A read-write single value file which exists on non-root
1094 cgroups. The default is "100".
For non-idle groups (cpu.idle = 0), the weight is in the
range [1, 10000].
1099 If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
1100 then the weight will show as a 0.
cpu.weight.nice
A read-write single value file which exists on non-root
1104 cgroups. The default is "0".
1106 The nice value is in the range [-20, 19].
1108 This interface file is an alternative interface for
1109 "cpu.weight" and allows reading and setting weight using the
1110 same values used by nice(2). Because the range is smaller and
1111 granularity is coarser for the nice values, the read value is
1112 the closest approximation of the current weight.
cpu.max
A read-write two value file which exists on non-root cgroups.
1116 The default is "max 100000".
The maximum bandwidth limit. It's in the following format::

$MAX $PERIOD
1122 which indicates that the group may consume up to $MAX in each
1123 $PERIOD duration. "max" for $MAX indicates no limit. If only
1124 one number is written, $MAX is updated.
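For example, with the default 100000us period the first write below
limits the group to half a CPU, the second removes the limit, and
the last updates only $MAX::

# echo "50000 100000" > cpu.max
# echo "max 100000" > cpu.max
# echo 200000 > cpu.max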
cpu.max.burst
A read-write single value file which exists on non-root
1128 cgroups. The default is "0".
1130 The burst in the range [0, $MAX].
cpu.pressure
A read-write nested-keyed file.
1135 Shows pressure stall information for CPU. See
1136 :ref:`Documentation/accounting/psi.rst <psi>` for details.
cpu.uclamp.min
A read-write single value file which exists on non-root cgroups.
1140 The default is "0", i.e. no utilization boosting.
1142 The requested minimum utilization (protection) as a percentage
1143 rational number, e.g. 12.34 for 12.34%.
1145 This interface allows reading and setting minimum utilization clamp
1146 values similar to the sched_setattr(2). This minimum utilization
1147 value is used to clamp the task specific minimum utilization clamp.
1149 The requested minimum utilization (protection) is always capped by
the current value for the maximum utilization (limit), i.e.
`cpu.uclamp.max`.
cpu.uclamp.max
A read-write single value file which exists on non-root cgroups.
The default is "max", i.e. no utilization capping.
1157 The requested maximum utilization (limit) as a percentage rational
1158 number, e.g. 98.76 for 98.76%.
1160 This interface allows reading and setting maximum utilization clamp
1161 values similar to the sched_setattr(2). This maximum utilization
1162 value is used to clamp the task specific maximum utilization clamp.
cpu.idle
A read-write single value file which exists on non-root cgroups.
The default is 0.
1168 This is the cgroup analog of the per-task SCHED_IDLE sched policy.
1169 Setting this value to a 1 will make the scheduling policy of the
1170 cgroup SCHED_IDLE. The threads inside the cgroup will retain their
1171 own relative priorities, but the cgroup itself will be treated as
1172 very low priority relative to its peers.
1179 The "memory" controller regulates distribution of memory. Memory is
1180 stateful and implements both limit and protection models. Due to the
1181 intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.
1185 While not completely water-tight, all major memory usages by a given
1186 cgroup are tracked so that the total memory consumption can be
1187 accounted and controlled to a reasonable extent. Currently, the
1188 following types of memory usages are tracked.
1190 - Userland memory - page cache and anonymous memory.
1192 - Kernel data structures such as dentries and inodes.
1194 - TCP socket buffers.
1196 The above list may expand in the future for better coverage.
1199 Memory Interface Files
1200 ~~~~~~~~~~~~~~~~~~~~~~
1202 All memory amounts are in bytes. If a value which is not aligned to
1203 PAGE_SIZE is written, the value may be rounded up to the closest
1204 PAGE_SIZE multiple when read back.
memory.current
A read-only single value file which exists on non-root
cgroups.
1210 The total amount of memory currently being used by the cgroup
1211 and its descendants.
memory.min
A read-write single value file which exists on non-root
1215 cgroups. The default is "0".
1217 Hard memory protection. If the memory usage of a cgroup
1218 is within its effective min boundary, the cgroup's memory
1219 won't be reclaimed under any conditions. If there is no
1220 unprotected reclaimable memory available, OOM killer
1221 is invoked. Above the effective min boundary (or
1222 effective low boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
1226 Effective min boundary is limited by memory.min values of
1227 all ancestor cgroups. If there is memory.min overcommitment
1228 (child cgroup or cgroups are requiring more protected memory
1229 than parent will allow), then each child cgroup will get
1230 the part of parent's protection proportional to its
1231 actual memory usage below memory.min.
1233 Putting more memory than generally available under this
1234 protection is discouraged and may lead to constant OOMs.
1236 If a memory cgroup is not populated with processes,
1237 its memory.min is ignored.
memory.low
A read-write single value file which exists on non-root
1241 cgroups. The default is "0".
1243 Best-effort memory protection. If the memory usage of a
1244 cgroup is within its effective low boundary, the cgroup's
1245 memory won't be reclaimed unless there is no reclaimable
1246 memory available in unprotected cgroups.
1247 Above the effective low boundary (or
1248 effective min boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
1252 Effective low boundary is limited by memory.low values of
1253 all ancestor cgroups. If there is memory.low overcommitment
1254 (child cgroup or cgroups are requiring more protected memory
1255 than parent will allow), then each child cgroup will get
1256 the part of parent's protection proportional to its
1257 actual memory usage below memory.low.
1259 Putting more memory than generally available under this
1260 protection is discouraged.
memory.high
A read-write single value file which exists on non-root
1264 cgroups. The default is "max".
1266 Memory usage throttle limit. If a cgroup's usage goes
1267 over the high boundary, the processes of the cgroup are
1268 throttled and put under heavy reclaim pressure.
1270 Going over the high limit never invokes the OOM killer and
1271 under extreme conditions the limit may be breached. The high
1272 limit should be used in scenarios where an external process
monitors the limited cgroup to alleviate heavy reclaim
pressure.
memory.max
A read-write single value file which exists on non-root
1278 cgroups. The default is "max".
1280 Memory usage hard limit. This is the main mechanism to limit
1281 memory usage of a cgroup. If a cgroup's memory usage reaches
1282 this limit and can't be reduced, the OOM killer is invoked in
1283 the cgroup. Under certain circumstances, the usage may go
1284 over the limit temporarily.
In the default configuration, regular 0-order allocations always
succeed unless the OOM killer chooses the current task as a victim.
Some kinds of allocations don't invoke the OOM killer.
The caller could retry them differently, return -ENOMEM to
userspace, or silently ignore the failure in cases like disk readahead.
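A common pattern combines "memory.high" and "memory.max": a
throttling ceiling under a hard backstop. The cgroup name and sizes
are illustrative::

# echo "8G" > app/memory.high
# echo "10G" > app/memory.max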
memory.reclaim
A write-only nested-keyed file which exists for all cgroups.
This is a simple interface to trigger memory reclaim in the
target cgroup.
1299 This file accepts a single key, the number of bytes to reclaim.
1300 No nested keys are currently supported.
1304 echo "1G" > memory.reclaim
1306 The interface can be later extended with nested keys to
1307 configure the reclaim behavior. For example, specify the
1308 type of memory to reclaim from (anon, file, ..).
1310 Please note that the kernel can over or under reclaim from
the target cgroup. If fewer bytes are reclaimed than the
1312 specified amount, -EAGAIN is returned.
1314 Please note that the proactive reclaim (triggered by this
1315 interface) is not meant to indicate memory pressure on the
1316 memory cgroup. Therefore socket memory balancing triggered by
1317 the memory reclaim normally is not exercised in this case.
1318 This means that the networking layer will not adapt based on
1319 reclaim induced by memory.reclaim.
memory.peak
A read-only single value file which exists on non-root
cgroups.
1325 The max memory usage recorded for the cgroup and its
1326 descendants since the creation of the cgroup.
memory.oom.group
A read-write single value file which exists on non-root
1330 cgroups. The default value is "0".
1332 Determines whether the cgroup should be treated as
1333 an indivisible workload by the OOM killer. If set,
1334 all tasks belonging to the cgroup or to its descendants
1335 (if the memory cgroup is not a leaf cgroup) are killed
1336 together or not at all. This can be used to avoid
1337 partial kills to guarantee workload integrity.
1339 Tasks with the OOM protection (oom_score_adj set to -1000)
1340 are treated as an exception and are never killed.
1342 If the OOM killer is invoked in a cgroup, it's not going
to kill any tasks outside of this cgroup, regardless of the
memory.oom.group values of ancestor cgroups.
memory.events
A read-only flat-keyed file which exists on non-root cgroups.
1348 The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.
1352 Note that all fields in this file are hierarchical and the
1353 file modified event can be generated due to an event down the
1354 hierarchy. For the local events at the cgroup level see
1355 memory.events.local.
low
The number of times the cgroup is reclaimed due to
1359 high memory pressure even though its usage is under
1360 the low boundary. This usually indicates that the low
1361 boundary is over-committed.
high
The number of times processes of the cgroup are
1365 throttled and routed to perform direct memory reclaim
1366 because the high memory boundary was exceeded. For a
1367 cgroup whose memory usage is capped by the high limit
1368 rather than global memory pressure, this event's
1369 occurrences are expected.
max
The number of times the cgroup's memory usage was
1373 about to go over the max boundary. If direct reclaim
1374 fails to bring it down, the cgroup goes to OOM state.
oom
The number of times the cgroup's memory usage
reached the limit and allocation was about to fail.
1380 This event is not raised if the OOM killer is not
1381 considered as an option, e.g. for failed high-order
allocations or if the caller asked not to retry attempts.
oom_kill
The number of processes belonging to this cgroup
1386 killed by any kind of OOM killer.
oom_group_kill
The number of times a group OOM has occurred.
memory.events.local
Similar to memory.events but the fields in the file are local
1393 to the cgroup i.e. not hierarchical. The file modified event
1394 generated on this file reflects only the local events.
memory.stat
A read-only flat-keyed file which exists on non-root cgroups.
1399 This breaks down the cgroup's memory footprint into different
1400 types of memory, type-specific details, and other information
1401 on the state and past events of the memory management system.
1403 All memory amounts are in bytes.
1405 The entries are ordered to be human readable, and new entries
1406 can show up in the middle. Don't rely on items remaining in a
1407 fixed position; use the keys to look up specific values!
Entries which have no per-node counter (and thus do not show
up in memory.numa_stat) are tagged with 'npn' (non-per-node).
anon
Amount of memory used in anonymous mappings such as
1415 brk(), sbrk(), and mmap(MAP_ANONYMOUS)
file
Amount of memory used to cache filesystem data,
1419 including tmpfs and shared memory.
kernel (npn)
Amount of total kernel memory, including
1423 (kernel_stack, pagetables, percpu, vmalloc, slab) in
1424 addition to other kernel memory use cases.
kernel_stack
Amount of memory allocated to kernel stacks.
pagetables
Amount of memory allocated for page tables.
sec_pagetables
Amount of memory allocated for secondary page tables,
this currently includes KVM mmu allocations on x86
and arm64.
percpu (npn)
Amount of memory used for storing per-cpu kernel
data structures.
sock (npn)
Amount of memory used in network transmission buffers
vmalloc (npn)
Amount of memory used for vmap backed memory.
shmem
Amount of cached filesystem data that is swap-backed,
1449 such as tmpfs, shm segments, shared anonymous mmap()s
zswap
Amount of memory consumed by the zswap compression backend.
zswapped
Amount of application memory swapped out to zswap.
file_mapped
Amount of cached filesystem data mapped with mmap()
file_dirty
Amount of cached filesystem data that was modified but
1462 not yet written back to disk
file_writeback
Amount of cached filesystem data that was modified and
1466 is currently being written back to disk
swapcached
Amount of swap cached in memory. The swapcache is accounted
1470 against both memory and swap usage.
anon_thp
Amount of memory used in anonymous mappings backed by
1474 transparent hugepages
file_thp
Amount of cached filesystem data backed by transparent
hugepages
shmem_thp
Amount of shm, tmpfs, shared anonymous mmap()s backed by
1482 transparent hugepages
1484 inactive_anon, active_anon, inactive_file, active_file, unevictable
1485 Amount of memory, swap-backed and filesystem-backed,
1486 on the internal memory management lists used by the
1487 page reclaim algorithm.
1489 As these represent internal list state (eg. shmem pages are on anon
1490 memory management lists), inactive_foo + active_foo may not be equal to
the value for the foo counter, since the foo counter is type-based, not
list-based.
slab_reclaimable
Part of "slab" that might be reclaimed, such as
1496 dentries and inodes.
slab_unreclaimable
Part of "slab" that cannot be reclaimed on memory
pressure.
slab (npn)
Amount of memory used for storing in-kernel data
structures.
1506 workingset_refault_anon
1507 Number of refaults of previously evicted anonymous pages.
1509 workingset_refault_file
1510 Number of refaults of previously evicted file pages.
1512 workingset_activate_anon
Number of refaulted anonymous pages that were immediately
activated.
1516 workingset_activate_file
1517 Number of refaulted file pages that were immediately activated.
1519 workingset_restore_anon
1520 Number of restored anonymous pages which have been detected as
1521 an active workingset before they got reclaimed.
1523 workingset_restore_file
1524 Number of restored file pages which have been detected as an
1525 active workingset before they got reclaimed.
1527 workingset_nodereclaim
1528 Number of times a shadow node has been reclaimed
pgscan (npn)
Amount of scanned pages (in an inactive LRU list)
pgsteal (npn)
Amount of reclaimed pages
pgscan_kswapd (npn)
Amount of scanned pages by kswapd (in an inactive LRU list)
pgscan_direct (npn)
Amount of scanned pages directly (in an inactive LRU list)
1542 pgscan_khugepaged (npn)
1543 Amount of scanned pages by khugepaged (in an inactive LRU list)
1545 pgsteal_kswapd (npn)
1546 Amount of reclaimed pages by kswapd
1548 pgsteal_direct (npn)
1549 Amount of reclaimed pages directly
1551 pgsteal_khugepaged (npn)
1552 Amount of reclaimed pages by khugepaged
pgfault (npn)
Total number of page faults incurred
pgmajfault (npn)
Number of major page faults incurred
pgrefill (npn)
Amount of scanned pages (in an active LRU list)
pgactivate (npn)
Amount of pages moved to the active LRU list
pgdeactivate (npn)
Amount of pages moved to the inactive LRU list
pglazyfree (npn)
Amount of pages postponed to be freed under memory pressure
pglazyfreed (npn)
Amount of reclaimed lazyfree pages
1575 thp_fault_alloc (npn)
1576 Number of transparent hugepages which were allocated to satisfy
a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
is not set.
1580 thp_collapse_alloc (npn)
1581 Number of transparent hugepages which were allocated to allow
1582 collapsing an existing range of pages. This counter is not
1583 present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
thp_swpout (npn)
Number of transparent hugepages which were swapped out in one piece
without splitting
1589 thp_swpout_fallback (npn)
Number of transparent hugepages which were split before swapout,
usually because of failure to allocate some contiguous swap space
for the huge page.
memory.numa_stat
A read-only nested-keyed file which exists on non-root cgroups.
1597 This breaks down the cgroup's memory footprint into different
1598 types of memory, type-specific details, and other information
1599 per node on the state of the memory management system.
This is useful for providing visibility into the NUMA locality
information within a memcg since the pages are allowed to be
allocated from any physical node. One use case is evaluating
application performance by combining this information with the
application's CPU allocation.
1607 All memory amounts are in bytes.
1609 The output format of memory.numa_stat is::
1611 type N0=<bytes in node 0> N1=<bytes in node 1> ...
1613 The entries are ordered to be human readable, and new entries
1614 can show up in the middle. Don't rely on items remaining in a
1615 fixed position; use the keys to look up specific values!
The entries have the same meanings as those in memory.stat.
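As an illustration, a two-node machine might show lines like the
following (the amounts here are made up)::

  anon N0=71016448 N1=16384
  file N0=981184512 N1=123078656
  kernel_stack N0=259072 N1=0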
memory.swap.current
A read-only single value file which exists on non-root
cgroups.
1623 The total amount of swap currently being used by the cgroup
1624 and its descendants.
memory.swap.high
A read-write single value file which exists on non-root
1628 cgroups. The default is "max".
1630 Swap usage throttle limit. If a cgroup's swap usage exceeds
1631 this limit, all its further allocations will be throttled to
1632 allow userspace to implement custom out-of-memory procedures.
1634 This limit marks a point of no return for the cgroup. It is NOT
1635 designed to manage the amount of swapping a workload does
1636 during regular operation. Compare to memory.swap.max, which
1637 prohibits swapping past a set amount, but lets the cgroup
1638 continue unimpeded as long as other memory can be reclaimed.
1640 Healthy workloads are not expected to reach this limit.
memory.swap.peak
A read-only single value file which exists on non-root
cgroups.
1646 The max swap usage recorded for the cgroup and its
1647 descendants since the creation of the cgroup.
memory.swap.max
A read-write single value file which exists on non-root
1651 cgroups. The default is "max".
1653 Swap usage hard limit. If a cgroup's swap usage reaches this
1654 limit, anonymous memory of the cgroup will not be swapped out.
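For example, a management agent might cap a cgroup at one gigabyte
of swap and later lift the cap again (values illustrative)::

  # echo 1G > memory.swap.max
  # echo max > memory.swap.max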
memory.swap.events
A read-only flat-keyed file which exists on non-root cgroups.
The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.

high
The number of times the cgroup's swap usage was over
the high threshold.

max
The number of times the cgroup's swap usage was about
to go over the max boundary and swap allocation
failed.

fail
The number of times swap allocation failed either
because of running out of swap system-wide or max
limit.
When "memory.swap.max" is reduced below the current usage, the
existing swap entries are reclaimed gradually and the swap usage may
stay higher than the limit for an extended period of time. This
reduces the impact on the workload and memory management.
1681 memory.zswap.current
A read-only single value file which exists on non-root
cgroups.

The total amount of memory consumed by the zswap compression
backend.
memory.zswap.max
A read-write single value file which exists on non-root
1690 cgroups. The default is "max".
1692 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1693 limit, it will refuse to take any more stores before existing
1694 entries fault back in or are written out to disk.
1696 memory.zswap.writeback
1697 A read-write single value file. The default value is "1". The
1698 initial value of the root cgroup is 1, and when a new cgroup is
1699 created, it inherits the current value of its parent.
When this is set to 0, all swapping attempts to swapping devices
are disabled. This includes both zswap writebacks, and swapping due
to zswap store failures. If the zswap store failures are recurring
(e.g. if the pages are incompressible), users can observe
reclaim inefficiency after disabling writeback (because the same
pages might be rejected again and again).
1708 Note that this is subtly different from setting memory.swap.max to
1709 0, as it still allows for pages to be written to the zswap pool.
memory.pressure
A read-only nested-keyed file.
1714 Shows pressure stall information for memory. See
1715 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1721 "memory.high" is the main mechanism to control memory usage.
1722 Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
1724 usage is a viable strategy.
1726 Because breach of the high limit doesn't trigger the OOM killer but
1727 throttles the offending cgroup, a management agent has ample
1728 opportunities to monitor and take appropriate actions such as granting
1729 more memory or terminating the workload.
1731 Determining whether a cgroup has enough memory is not trivial as
1732 memory usage doesn't indicate whether the workload can benefit from
1733 more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also operate
just as well with a small amount of memory. A measure of memory
1736 pressure - how much the workload is being impacted due to lack of
1737 memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.

Memory Ownership
~~~~~~~~~~~~~~~~
1745 A memory area is charged to the cgroup which instantiated it and stays
1746 charged to the cgroup until the area is released. Migrating a process
1747 to a different cgroup doesn't move the memory usages that it
1748 instantiated while in the previous cgroup to the new cgroup.
1750 A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is non-deterministic; however,
1752 over time, the memory area is likely to end up in a cgroup which has
1753 enough memory allowance to avoid high reclaim pressure.
1755 If a cgroup sweeps a considerable amount of memory which is expected
1756 to be accessed repeatedly by other cgroups, it may make sense to use
1757 POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1758 belonging to the affected files to ensure correct memory ownership.
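As a sketch, a process that has finished streaming through a shared
file could drop its claim on the page cache like this (the helper
name is hypothetical; only posix_fadvise() is real)::

  #include <fcntl.h>

  /* Give up ownership of fd's page cache so the next toucher,
   * possibly in another cgroup, gets charged for it instead. */
  static int relinquish_cache(int fd)
  {
          /* len == 0 applies the advice to the whole file */
          return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
  }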
1764 The "io" controller regulates the distribution of IO resources. This
1765 controller implements both weight based and absolute bandwidth or IOPS
1766 limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.

IO Interface Files
~~~~~~~~~~~~~~~~~~

io.stat
1775 A read-only nested-keyed file.
1777 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1778 The following nested keys are defined.
======	=====================
rbytes	Bytes read
wbytes	Bytes written
1783 rios Number of read IOs
1784 wios Number of write IOs
1785 dbytes Bytes discarded
1786 dios Number of discard IOs
1787 ====== =====================
1789 An example read output follows::
1791 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1792 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
io.cost.qos
A read-write nested-keyed file which exists only on the root
cgroup.
1798 This file configures the Quality of Service of the IO cost
1799 model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1800 currently implements "io.weight" proportional control. Lines
1801 are keyed by $MAJ:$MIN device numbers and not ordered. The
1802 line for a given device is populated on the first write for
1803 the device on "io.cost.qos" or "io.cost.model". The following
1804 nested keys are defined.
1806 ====== =====================================
1807 enable Weight-based control enable
1808 ctrl "auto" or "user"
1809 rpct Read latency percentile [0, 100]
1810 rlat Read latency threshold
1811 wpct Write latency percentile [0, 100]
1812 wlat Write latency threshold
1813 min Minimum scaling percentage [1, 10000]
1814 max Maximum scaling percentage [1, 10000]
1815 ====== =====================================
1817 The controller is disabled by default and can be enabled by
1818 setting "enable" to 1. "rpct" and "wpct" parameters default
1819 to zero and the controller uses internal device saturation
1820 state to adjust the overall IO rate between "min" and "max".
1822 When a better control quality is needed, latency QoS
1823 parameters can be configured. For example::
8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1827 shows that on sdb, the controller is enabled, will consider
1828 the device saturated if the 95th percentile of read completion
latencies is above 75ms or the 95th percentile of write completion
latencies is above 150ms, and adjust the overall
1830 IO issue rate between 50% and 150% accordingly.
1832 The lower the saturation point, the better the latency QoS at
1833 the cost of aggregate bandwidth. The narrower the allowed
adjustment range between "min" and "max", the more closely the IO
behavior conforms to the cost model. Note that the IO issue
1836 base rate may be far off from 100% and setting "min" and "max"
1837 blindly can lead to a significant loss of device capacity or
1838 control quality. "min" and "max" are useful for regulating
devices which show wide temporary behavior changes - e.g. an
SSD which accepts writes at line speed for a while and
1841 then completely stalls for multiple seconds.
1843 When "ctrl" is "auto", the parameters are controlled by the
1844 kernel and may change automatically. Setting "ctrl" to "user"
1845 or setting any of the percentile and latency parameters puts
1846 it into "user" mode and disables the automatic changes. The
1847 automatic mode can be restored by setting "ctrl" to "auto".
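For example, the following (illustrative) command enables the
controller on sdb and leaves the QoS parameters under automatic
kernel control::

  # echo "8:16 enable=1 ctrl=auto" > io.cost.qos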
io.cost.model
A read-write nested-keyed file which exists only on the root
cgroup.
1853 This file configures the cost model of the IO cost model based
1854 controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1855 implements "io.weight" proportional control. Lines are keyed
1856 by $MAJ:$MIN device numbers and not ordered. The line for a
1857 given device is populated on the first write for the device on
1858 "io.cost.qos" or "io.cost.model". The following nested keys
1861 ===== ================================
1862 ctrl "auto" or "user"
1863 model The cost model in use - "linear"
1864 ===== ================================
1866 When "ctrl" is "auto", the kernel may change all parameters
1867 dynamically. When "ctrl" is set to "user" or any other
parameters are written to, "ctrl" becomes "user" and the
1869 automatic changes are disabled.
1871 When "model" is "linear", the following model parameters are
1874 ============= ========================================
1875 [r|w]bps The maximum sequential IO throughput
1876 [r|w]seqiops The maximum 4k sequential IOs per second
1877 [r|w]randiops The maximum 4k random IOs per second
1878 ============= ========================================
1880 From the above, the builtin linear model determines the base
1881 costs of a sequential and random IO and the cost coefficient
1882 for the IO size. While simple, this model can cover most
1883 common device classes acceptably.
The IO cost model isn't expected to be accurate in the absolute
1886 sense and is scaled to the device behavior dynamically.
1888 If needed, tools/cgroup/iocost_coef_gen.py can be used to
1889 generate device-specific coefficients.
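Generated coefficients could then be applied with a write like the
following (the numbers are illustrative)::

  # echo "8:0 ctrl=user model=linear rbps=500000000 rseqiops=100000 rrandiops=80000 wbps=400000000 wseqiops=90000 wrandiops=70000" > io.cost.model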
io.weight
A read-write flat-keyed file which exists on non-root cgroups.
1893 The default is "default 100".
1895 The first line is the default weight applied to devices
1896 without specific override. The rest are overrides keyed by
$MAJ:$MIN device numbers and not ordered. The weights are in
the range [1, 10000] and specify the relative amount of IO time
the cgroup can use in relation to its siblings.
1901 The default weight can be updated by writing either "default
1902 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
1903 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
An example read output follows::

  default 100
  8:16 200
  8:0 50

io.max
A read-write nested-keyed file which exists on non-root
cgroups.
1915 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
device numbers and not ordered. The following nested keys are
defined.
1919 ===== ==================================
1920 rbps Max read bytes per second
1921 wbps Max write bytes per second
1922 riops Max read IO operations per second
1923 wiops Max write IO operations per second
1924 ===== ==================================
1926 When writing, any number of nested key-value pairs can be
1927 specified in any order. "max" can be specified as the value
1928 to remove a specific limit. If the same key is specified
1929 multiple times, the outcome is undefined.
1931 BPS and IOPS are measured in each IO direction and IOs are
1932 delayed if limit is reached. Temporary bursts are allowed.
1934 Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1936 echo "8:16 rbps=2097152 wiops=120" > io.max
1938 Reading returns the following::
1940 8:16 rbps=2097152 wbps=max riops=max wiops=120
1942 Write IOPS limit can be removed by writing the following::
1944 echo "8:16 wiops=max" > io.max
1946 Reading now returns the following::
1948 8:16 rbps=2097152 wbps=max riops=max wiops=max
io.pressure
A read-only nested-keyed file.
1953 Shows pressure stall information for IO. See
1954 :ref:`Documentation/accounting/psi.rst <psi>` for details.
Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
1961 written asynchronously to the backing filesystem by the writeback
1962 mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.
1966 The io controller, in conjunction with the memory controller,
1967 implements control of page cache writeback IOs. The memory controller
defines the memory domain for which the dirty memory ratio is
calculated and maintained, and the io controller defines the io domain which
1970 writes out dirty pages for the memory domain. Both system-wide and
1971 per-cgroup dirty memory states are examined and the more restrictive
1972 of the two is enforced.
1974 cgroup writeback requires explicit support from the underlying
1975 filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
1976 btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
1977 attributed to the root cgroup.
1979 There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of writeback, an
1982 inode is assigned to a cgroup and all IO requests to write dirty pages
1983 from the inode are attributed to that cgroup.
1985 As cgroup ownership for memory is tracked per page, there can be pages
1986 which are associated with different cgroups than the one the inode is
1987 associated with. These are called foreign pages. The writeback
1988 constantly keeps track of foreign pages and, if a particular foreign
1989 cgroup becomes the majority over a certain period of time, switches
1990 the ownership of the inode to that cgroup.
1992 While this model is enough for most use cases where a given inode is
1993 mostly dirtied by a single cgroup even when the main writing cgroup
1994 changes over time, use cases where multiple cgroups write to a single
1995 inode simultaneously are not supported well. In such circumstances, a
1996 significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
1998 doesn't update it until the page is released, even if writeback
1999 strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.
2003 The sysctl knobs which affect writeback behavior are applied to cgroup
2004 writeback as follows.
2006 vm.dirty_background_ratio, vm.dirty_ratio
2007 These ratios apply the same to cgroup writeback with the
2008 amount of available memory capped by limits imposed by the
2009 memory controller and system-wide clean memory.
2011 vm.dirty_background_bytes, vm.dirty_bytes
For cgroup writeback, this is calculated into a ratio against
2013 total available memory and applied the same way as
2014 vm.dirty[_background]_ratio.
IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide a group
2021 with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a lower latency target than the
protected workload.
2025 The limits are only applied at the peer level in the hierarchy. This means that
2026 in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other. Group G will influence nobody::

                        [root]
                /          |            \
                A          B            C
               /  \        |
              D    F       G
2036 So the ideal way to configure this is to set io.latency in groups A, B, and C.
2037 Generally you do not want to set a value lower than the latency your device
2038 supports. Experiment to find the value that works best for your workload.
2039 Start at higher than the expected latency for your device and watch the
2040 avg_lat value in io.stat for your workload group to get an idea of the
2041 latency you see during normal operation. Use the avg_lat value as a basis for
2042 your real setting, setting at 10-15% higher than the value in io.stat.
2044 How IO Latency Throttling Works
2045 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2047 io.latency is work conserving; so as long as everybody is meeting their latency
2048 target the controller doesn't do anything. Once a group starts missing its
2049 target it begins throttling any peer group that has a higher target than itself.
This throttling takes two forms:
- Queue depth throttling. This is the number of outstanding IOs a group is
2053 allowed to have. We will clamp down relatively quickly, starting at no limit
2054 and going all the way down to 1 IO at a time.
2056 - Artificial delay induction. There are certain types of IO that cannot be
2057 throttled without possibly adversely affecting higher priority groups. This
2058 includes swapping and metadata IO. These types of IO are allowed to occur
2059 normally, however they are "charged" to the originating group. If the
2060 originating group is being throttled you will see the use_delay and delay
fields in io.stat increase. The delay value is the number of microseconds
2062 being added to any process that runs in this group. Because this number can
2063 grow quite large if there is a lot of swapping or metadata IO occurring we
2064 limit the individual delay events to 1 second at a time.
2066 Once the victimized group starts meeting its latency target again it will start
2067 unthrottling any peer groups that were throttled previously. If the victimized
2068 group simply stops doing IO the global counter will unthrottle appropriately.
2070 IO Latency Interface Files
2071 ~~~~~~~~~~~~~~~~~~~~~~~~~~
io.latency
This takes a similar format as the other controllers.
2076 "MAJOR:MINOR target=<target time in microseconds>"
io.stat
If the controller is enabled you will see extra stats in io.stat in
2080 addition to the normal ones.
depth
This is the current queue depth for the group.
avg_lat
This is an exponential moving average with a decay rate of 1/exp
2087 bound by the sampling interval. The decay rate interval can be
2088 calculated by multiplying the win value in io.stat by the
2089 corresponding number of samples based on the win value.
win
The sampling window size in milliseconds. This is the minimum
2093 duration of time between evaluation events. Windows only elapse
2094 with IO activity. Idle periods extend the most recent window.
IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup policy,
namely the io.prio.class attribute. The following values are accepted for
that attribute:

no-change
Do not modify the I/O priority class.
promote-to-rt
For requests that have a non-RT I/O priority class, change it into RT.
2108 Also change the priority level of these requests to 4. Do not modify
2109 the I/O priority of requests that have priority class RT.
restrict-to-be
For requests that do not have an I/O priority class or that have I/O
2113 priority class RT, change it into BE. Also change the priority level
2114 of these requests to 0. Do not modify the I/O priority class of
2115 requests that have priority class IDLE.
idle
Change the I/O priority class of all requests into IDLE, the lowest
I/O priority class.
none-to-rt
Deprecated. Just an alias for promote-to-rt.
2124 The following numerical values are associated with the I/O priority policies:
+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+
2136 The numerical value that corresponds to each I/O priority class is as follows:
2138 +-------------------------------+---+
2139 | IOPRIO_CLASS_NONE | 0 |
2140 +-------------------------------+---+
2141 | IOPRIO_CLASS_RT (real-time) | 1 |
2142 +-------------------------------+---+
2143 | IOPRIO_CLASS_BE (best effort) | 2 |
2144 +-------------------------------+---+
2145 | IOPRIO_CLASS_IDLE | 3 |
2146 +-------------------------------+---+
2148 The algorithm to set the I/O priority class for a request is as follows:
2150 - If I/O priority class policy is promote-to-rt, change the request I/O
priority class to IOPRIO_CLASS_RT and change the request I/O priority
level to 4.
2153 - If I/O priority class policy is not promote-to-rt, translate the I/O priority
2154 class policy into a number, then change the request I/O priority class
into the maximum of the I/O priority class policy number and the numerical
I/O priority class.
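For example, restricting all requests from a cgroup to the
best-effort class would be done with::

  # echo restrict-to-be > io.prio.class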
PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.
2165 The number of tasks in a cgroup can be exhausted in ways which other
2166 controllers cannot prevent, thus warranting its own controller. For
2167 example, a fork bomb is likely to exhaust the number of tasks before
2168 hitting memory restrictions.
Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.

PID Interface Files
~~~~~~~~~~~~~~~~~~~

pids.max
2178 A read-write single value file which exists on non-root
2179 cgroups. The default is "max".
2181 Hard limit of number of processes.
pids.current
A read-only single value file which exists on all cgroups.

The number of processes currently in the cgroup and its
descendants.
2189 Organisational operations are not blocked by cgroup policies, so it is
2190 possible to have pids.current > pids.max. This can be done by either
2191 setting the limit to be smaller than pids.current, or attaching enough
2192 processes to the cgroup such that pids.current is larger than
2193 pids.max. However, it is not possible to violate a cgroup PID policy
2194 through fork() or clone(). These will return -EAGAIN if the creation
2195 of a new process would cause a cgroup policy to be violated.
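For example, a cgroup can be capped at 100 tasks with (value
illustrative)::

  # echo 100 > pids.max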
2201 The "cpuset" controller provides a mechanism for constraining
2202 the CPU and memory node placement of tasks to only the resources
2203 specified in the cpuset interface files in a task's current cgroup.
2204 This is especially valuable on large NUMA systems where placing jobs
2205 on properly sized subsets of the systems with careful processor and
2206 memory placement to reduce cross-node memory access and contention
2207 can improve overall system performance.
2209 The "cpuset" controller is hierarchical. That means the controller
2210 cannot use CPUs or memory nodes not allowed in its parent.
2213 Cpuset Interface Files
2214 ~~~~~~~~~~~~~~~~~~~~~~
cpuset.cpus
A read-write multiple values file which exists on non-root
2218 cpuset-enabled cgroups.
2220 It lists the requested CPUs to be used by tasks within this
2221 cgroup. The actual list of CPUs to be granted, however, is
2222 subjected to constraints imposed by its parent and can differ
2223 from the requested CPUs.
The CPU numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.cpus
  0-4,6,8-10
2231 An empty value indicates that the cgroup is using the same
2232 setting as the nearest cgroup ancestor with a non-empty
2233 "cpuset.cpus" or all the available CPUs if none is found.
2235 The value of "cpuset.cpus" stays constant until the next update
2236 and won't be affected by any CPU hotplug events.
2238 cpuset.cpus.effective
2239 A read-only multiple values file which exists on all
2240 cpuset-enabled cgroups.
2242 It lists the onlined CPUs that are actually granted to this
2243 cgroup by its parent. These CPUs are allowed to be used by
2244 tasks within the current cgroup.
2246 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2247 all the CPUs from the parent cgroup that can be available to
2248 be used by this cgroup. Otherwise, it should be a subset of
2249 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2250 can be granted. In this case, it will be treated just like an
2251 empty "cpuset.cpus".
2253 Its value will be affected by CPU hotplug events.
cpuset.mems
A read-write multiple values file which exists on non-root
2257 cpuset-enabled cgroups.
2259 It lists the requested memory nodes to be used by tasks within
2260 this cgroup. The actual list of memory nodes granted, however,
2261 is subjected to constraints imposed by its parent and can differ
2262 from the requested memory nodes.
The memory node numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.mems
  0-1,3
2270 An empty value indicates that the cgroup is using the same
2271 setting as the nearest cgroup ancestor with a non-empty
2272 "cpuset.mems" or all the available memory nodes if none
2275 The value of "cpuset.mems" stays constant until the next update
2276 and won't be affected by any memory nodes hotplug events.
2278 Setting a non-empty value to "cpuset.mems" causes memory of
2279 tasks within the cgroup to be migrated to the designated nodes if
2280 they are currently using memory outside of the designated nodes.
2282 There is a cost for this memory migration. The migration
2283 may not be complete and some memory pages may be left behind.
2284 So it is recommended that "cpuset.mems" should be set properly
2285 before spawning new tasks into the cpuset. Even if there is
a need to change "cpuset.mems" with active tasks, it shouldn't
be done frequently.
2289 cpuset.mems.effective
2290 A read-only multiple values file which exists on all
2291 cpuset-enabled cgroups.
2293 It lists the onlined memory nodes that are actually granted to
2294 this cgroup by its parent. These memory nodes are allowed to
2295 be used by tasks within the current cgroup.
2297 If "cpuset.mems" is empty, it shows all the memory nodes from the
2298 parent cgroup that will be available to be used by this cgroup.
2299 Otherwise, it should be a subset of "cpuset.mems" unless none of
2300 the memory nodes listed in "cpuset.mems" can be granted. In this
2301 case, it will be treated just like an empty "cpuset.mems".
2303 Its value will be affected by memory nodes hotplug events.
2305 cpuset.cpus.exclusive
2306 A read-write multiple values file which exists on non-root
2307 cpuset-enabled cgroups.
2309 It lists all the exclusive CPUs that are allowed to be used
2310 to create a new cpuset partition. Its value is not used
2311 unless the cgroup becomes a valid partition root. See the
2312 "cpuset.cpus.partition" section below for a description of what
2313 a cpuset partition is.
2315 When the cgroup becomes a partition root, the actual exclusive
2316 CPUs that are allocated to that partition are listed in
2317 "cpuset.cpus.exclusive.effective" which may be different
2318 from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
2319 has previously been set, "cpuset.cpus.exclusive.effective"
2320 is always a subset of it.
2322 Users can manually set it to a value that is different from
2323 "cpuset.cpus". The only constraint in setting it is that the
list of CPUs must be exclusive with respect to its siblings.
2326 For a parent cgroup, any one of its exclusive CPUs can only
2327 be distributed to at most one of its child cgroups. Having an
2328 exclusive CPU appearing in two or more of its child cgroups is
2329 not allowed (the exclusivity rule). A value that violates the
2330 exclusivity rule will be rejected with a write error.
2332 The root cgroup is a partition root and all its available CPUs
2333 are in its exclusive CPU set.
2335 cpuset.cpus.exclusive.effective
2336 A read-only multiple values file which exists on all non-root
2337 cpuset-enabled cgroups.
2339 This file shows the effective set of exclusive CPUs that
2340 can be used to create a partition root. The content of this
2341 file will always be a subset of "cpuset.cpus" and its parent's
2342 "cpuset.cpus.exclusive.effective" if its parent is not the root
2343 cgroup. It will also be a subset of "cpuset.cpus.exclusive"
if it is set. If "cpuset.cpus.exclusive" is not set, it is
treated as if it had an implicit value of "cpuset.cpus" in the
formation of a local partition.
2348 cpuset.cpus.isolated
2349 A read-only and root cgroup only multiple values file.
2351 This file shows the set of all isolated CPUs used in existing
isolated partitions. It will be empty if no isolated partition
is created.
2355 cpuset.cpus.partition
2356 A read-write single value file which exists on non-root
2357 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2358 and is not delegatable.
2360 It accepts only the following input values when written to.
2362 ========== =====================================
2363 "member" Non-root member of a partition
2364 "root" Partition root
2365 "isolated" Partition root without load balancing
2366 ========== =====================================
2368 A cpuset partition is a collection of cpuset-enabled cgroups with
2369 a partition root at the top of the hierarchy and its descendants
2370 except those that are separate partition roots themselves and
2371 their descendants. A partition has exclusive access to the
2372 set of exclusive CPUs allocated to it. Other cgroups outside
2373 of that partition cannot use any CPUs in that set.
2375 There are two types of partitions - local and remote. A local
2376 partition is one whose parent cgroup is also a valid partition
2377 root. A remote partition is one whose parent cgroup is not a
2378 valid partition root itself. Writing to "cpuset.cpus.exclusive"
2379 is optional for the creation of a local partition as its
2380 "cpuset.cpus.exclusive" file will assume an implicit value that
2381 is the same as "cpuset.cpus" if it is not set. Writing the
2382 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2383 before the target partition root is mandatory for the creation
2384 of a remote partition.
2386 Currently, a remote partition cannot be created under a local
2387 partition. All the ancestors of a remote partition root except
2388 the root cgroup cannot be a partition root.
2390 The root cgroup is always a partition root and its state cannot
2391 be changed. All other non-root cgroups start out as "member".
2393 When set to "root", the current cgroup is the root of a new
2394 partition or scheduling domain. The set of exclusive CPUs is
2395 determined by the value of its "cpuset.cpus.exclusive.effective".
2397 When set to "isolated", the CPUs in that partition will be in
2398 an isolated state without any load balancing from the scheduler
2399 and excluded from the unbound workqueues. Tasks placed in such
2400 a partition with multiple CPUs should be carefully distributed
2401 and bound to each of the individual CPUs for optimal performance.
2403 A partition root ("root" or "isolated") can be in one of the
2404 two possible states - valid or invalid. An invalid partition
2405 root is in a degraded state where some state information may
2406 be retained, but behaves more like a "member".
2408 All possible state transitions among "member", "root" and
2409 "isolated" are allowed.
On read, the "cpuset.cpus.partition" file can show the following
values.
2414 ============================= =====================================
2415 "member" Non-root member of a partition
2416 "root" Partition root
2417 "isolated" Partition root without load balancing
2418 "root invalid (<reason>)" Invalid partition root
2419 "isolated invalid (<reason>)" Invalid isolated partition root
2420 ============================= =====================================
2422 In the case of an invalid partition root, a descriptive string on
2423 why the partition is invalid is included within parentheses.
For a local partition root to be valid, the following conditions
must be met.
2428 1) The parent cgroup is a valid partition root.
2429 2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2430 though it may contain offline CPUs.
2431 3) The "cpuset.cpus.effective" cannot be empty unless there is
2432 no task associated with this partition.
2434 For a remote partition root to be valid, all the above conditions
2435 except the first one must be met.
2437 External events like hotplug or changes to "cpuset.cpus" or
2438 "cpuset.cpus.exclusive" can cause a valid partition root to
2439 become invalid and vice versa. Note that a task cannot be
2440 moved to a cgroup with empty "cpuset.cpus.effective".
2442 A valid non-root parent partition may distribute out all its CPUs
to its child local partitions when there is no task associated
with it.
Care must be taken when changing a valid partition root to "member"
2447 as all its child local partitions, if present, will become
2448 invalid causing disruption to tasks running in those child
2449 partitions. These inactivated partitions could be recovered if
2450 their parent is switched back to a partition root with a proper
2451 value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2453 Poll and inotify events are triggered whenever the state of
2454 "cpuset.cpus.partition" changes. That includes changes caused
by a write to "cpuset.cpus.partition", CPU hotplug or other
2456 changes that modify the validity status of the partition.
2457 This will allow user space agents to monitor unexpected changes
2458 to "cpuset.cpus.partition" without the need to do continuous
2461 A user can pre-configure certain CPUs to an isolated state
2462 with load balancing disabled at boot time with the "isolcpus"
2463 kernel boot command line option. If those CPUs are to be put
2464 into a partition, they have to be used in an isolated partition.
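As an illustrative example, a child cgroup could be turned into an
isolated partition claiming CPUs 2-3 with::

  # echo 2-3 > cpuset.cpus
  # echo 2-3 > cpuset.cpus.exclusive
  # echo isolated > cpuset.cpus.partition
  # cat cpuset.cpus.partition
  isolated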
Device controller
-----------------

Device controller manages access to device files. It includes both
2471 creation of new device files (using mknod), and access to the
2472 existing device files.
2474 Cgroup v2 device controller has no interface files and is implemented
2475 on top of cgroup BPF. To control access to device files, a user may
2476 create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2477 them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2478 device file, corresponding BPF programs will be executed, and depending
2479 on the return value the attempt will succeed or fail with -EPERM.
2481 A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2482 bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2483 access type (mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise it
succeeds.
2487 An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2488 tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
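A minimal sketch of such a program, assuming the libbpf build
environment used by the selftests (the policy shown, permitting only
/dev/null, is made up for illustration)::

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("cgroup/dev")
  int allow_null_only(struct bpf_cgroup_dev_ctx *ctx)
  {
          /* the low 16 bits of access_type carry the device type */
          short type = ctx->access_type & 0xFFFF;

          /* permit any access to the char device 1:3 (/dev/null) */
          if (type == BPF_DEVCG_DEV_CHAR && ctx->major == 1 && ctx->minor == 3)
                  return 1;

          /* everything else fails with -EPERM */
          return 0;
  }

  char _license[] SEC("license") = "GPL";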
2494 The "rdma" controller regulates the distribution and accounting of
2497 RDMA Interface Files
2498 ~~~~~~~~~~~~~~~~~~~~
rdma.max
A read-write nested-keyed file that exists for all the cgroups
except root that describes the currently configured resource limits
for a RDMA/IB device.
2505 Lines are keyed by device name and are not ordered.
2506 Each line contains space separated resource name and its configured
2507 limit that can be distributed.
2509 The following nested keys are defined.
2511 ========== =============================
2512 hca_handle Maximum number of HCA Handles
2513 hca_object Maximum number of HCA Objects
2514 ========== =============================
2516 An example for mlx4 and ocrdma device follows::
2518 mlx4_0 hca_handle=2 hca_object=2000
2519 ocrdma1 hca_handle=3 hca_object=max
rdma.current
A read-only file that describes current resource usage.
It exists for all cgroups except the root.
2525 An example for mlx4 and ocrdma device follows::
2527 mlx4_0 hca_handle=1 hca_object=20
2528 ocrdma1 hca_handle=1 hca_object=23
HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control group and
2534 enforces the controller limit during page fault.
2536 HugeTLB Interface Files
2537 ~~~~~~~~~~~~~~~~~~~~~~~
2539 hugetlb.<hugepagesize>.current
2540 Show current usage for "hugepagesize" hugetlb. It exists for all
cgroups except the root.
2543 hugetlb.<hugepagesize>.max
2544 Set/show the hard limit of "hugepagesize" hugetlb usage.
The default value is "max". It exists for all cgroups except the root.
2547 hugetlb.<hugepagesize>.events
2548 A read-only flat-keyed file which exists on non-root cgroups.
max
The number of allocation failures due to the HugeTLB limit
2553 hugetlb.<hugepagesize>.events.local
2554 Similar to hugetlb.<hugepagesize>.events but the fields in the file
2555 are local to the cgroup i.e. not hierarchical. The file modified event
2556 generated on this file reflects only the local events.
2558 hugetlb.<hugepagesize>.numa_stat
2559 Similar to memory.numa_stat, it shows the numa information of the
2560 hugetlb pages of <hugepagesize> in this cgroup. Only active in
2561 use hugetlb pages are included. The per-node values are in bytes.
Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
2567 mechanism for the scalar resources which cannot be abstracted like the other
cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC config
option.
2571 A resource can be added to the controller via enum misc_res_type{} in the
2572 include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
2573 in the kernel/cgroup/misc.c file. Provider of the resource must set its
2574 capacity prior to using the resource by calling misc_cg_set_capacity().
2576 Once a capacity is set then the resource usage can be updated using charge and
2577 uncharge APIs. All of the APIs to interact with misc controller are in
2578 include/linux/misc_cgroup.h.
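A sketch of a provider, assuming a hypothetical MISC_CG_RES_FOO entry
has been added to enum misc_res_type (the struct and function names
below other than the misc_cg_* and *_misc_cg helpers are made up)::

  #include <linux/misc_cgroup.h>

  /* hypothetical per-object state remembering which cgroup was charged */
  struct foo_obj {
          struct misc_cg *cg;
  };

  static int foo_provider_init(void)
  {
          /* advertise 50 units before handing any out */
          return misc_cg_set_capacity(MISC_CG_RES_FOO, 50);
  }

  static int foo_obj_charge(struct foo_obj *obj)
  {
          obj->cg = get_current_misc_cg();
          /* charge 1 unit to the current cgroup; fails when over misc.max */
          return misc_cg_try_charge(MISC_CG_RES_FOO, obj->cg, 1);
  }

  static void foo_obj_uncharge(struct foo_obj *obj)
  {
          misc_cg_uncharge(MISC_CG_RES_FOO, obj->cg, 1);
          put_misc_cg(obj->cg);
  }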
2580 Misc Interface Files
2581 ~~~~~~~~~~~~~~~~~~~~
The Miscellaneous controller provides 3 interface files. If two misc resources (res_a and res_b) are registered then:
misc.capacity
A read-only flat-keyed file shown only in the root cgroup. It shows
miscellaneous scalar resources available on the platform along with
their quantities::

  $ cat misc.capacity
  res_a 50
  res_b 10

misc.current
A read-only flat-keyed file shown in all cgroups. It shows
the current usage of the resources in the cgroup and its children::

  $ cat misc.current
  res_a 3
  res_b 0

misc.max
A read-write flat-keyed file shown in the non-root cgroups. Allowed
maximum usage of the resources in the cgroup and its children::

  $ cat misc.max
  res_a max
  res_b 4
2610 Limit can be set by::
2612 # echo res_a 1 > misc.max
2614 Limit can be set to max by::
2616 # echo res_a max > misc.max
Limits can be set higher than the capacity value in the misc.capacity
file.

misc.events
2622 A read-only flat-keyed file which exists on non-root cgroups. The
2623 following entries are defined. Unless specified otherwise, a value
2624 change in this file generates a file modified event. All fields in
2625 this file are hierarchical.
max
The number of times the cgroup's resource usage was
2629 about to go over the max boundary.
2631 Migration and Ownership
2632 ~~~~~~~~~~~~~~~~~~~~~~~
2634 A miscellaneous scalar resource is charged to the cgroup in which it is used
2635 first, and stays charged to that cgroup until that resource is freed. Migrating
2636 a process to a different cgroup does not move the charge to the destination
2637 cgroup where the process has moved.
Others
------

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
2646 automatically enabled on the v2 hierarchy so that perf events can
2647 always be filtered by cgroup v2 path. The controller can still be
2648 moved to a legacy hierarchy after v2 hierarchy is populated.
2651 Non-normative information
2652 -------------------------
2654 This section contains information that isn't considered to be a part of
2655 the stable kernel API and so is subject to change.
2658 CPU controller root cgroup process behaviour
2659 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2661 When distributing CPU cycles in the root cgroup each thread in this
2662 cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup. This child cgroup weight is dependent on its thread nice
level.
2666 For details of this mapping see sched_prio_to_weight array in
2667 kernel/sched/core.c file (values from this array should be scaled
2668 appropriately so the neutral - nice 0 - value is 100 instead of 1024).
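For example, with the long-standing sched_prio_to_weight values
(1024 for nice 0, 110 for nice 10), the implied per-thread cgroup
weights work out as::

  nice  0: 1024 * 100 / 1024  = 100
  nice 10:  110 * 100 / 1024 ~=  11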
2671 IO controller root cgroup process behaviour
2672 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2674 Root cgroup processes are hosted in an implicit leaf child node.
2675 When distributing IO resources this implicit child node is taken into
2676 account as if it was a normal child cgroup of the root cgroup with a
2677 weight value of 200.
Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
2687 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
2688 flag can be used with clone(2) and unshare(2) to create a new cgroup
2689 namespace. The process running inside the cgroup namespace will have
2690 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
2691 cgroupns root is the cgroup of the process at the time of creation of
2692 the cgroup namespace.
2694 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2695 complete path of the cgroup of a process. In a container setup where
2696 a set of cgroups and namespaces are intended to isolate processes the
2697 "/proc/$PID/cgroup" file may leak potential system level information
2698 to the isolated processes. For example::
2700 # cat /proc/self/cgroup
2701 0::/batchjobs/container_id1
2703 The path '/batchjobs/container_id1' can be considered as system-data
2704 and undesirable to expose to the isolated processes. cgroup namespace
2705 can be used to restrict visibility of this path. For example, before
2706 creating a cgroup namespace, one would see::
2708 # ls -l /proc/self/ns/cgroup
2709 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2710 # cat /proc/self/cgroup
2711 0::/batchjobs/container_id1
2713 After unsharing a new namespace, the view changes::
2715 # ls -l /proc/self/ns/cgroup
2716 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
# cat /proc/self/cgroup
0::/
2720 When some thread from a multi-threaded process unshares its cgroup
2721 namespace, the new cgroupns gets applied to the entire process (all
2722 the threads). This is natural for the v2 hierarchy; however, for the
2723 legacy hierarchies, this may be unexpected.
2725 A cgroup namespace is alive as long as there are processes inside or
2726 mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain, though.

The Root and Views
------------------
2734 The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2735 process calling unshare(2) is running. For example, if a process in
2736 /batchjobs/container_id1 cgroup calls unshare, cgroup
2737 /batchjobs/container_id1 becomes the cgroupns root. For the
2738 init_cgroup_ns, this is the real root ('/') cgroup.
2740 The cgroupns root cgroup does not change even if the namespace creator
2741 process later moves to a different cgroup::
2743 # ~/unshare -c # unshare cgroupns in some cgroup
# cat /proc/self/cgroup
0::/
# mkdir sub_cgrp_1
2747 # echo 0 > sub_cgrp_1/cgroup.procs
# cat /proc/self/cgroup
0::/
2751 Each process gets its namespace-specific view of "/proc/$PID/cgroup"
2753 Processes running inside the cgroup namespace will be able to see
2754 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
2759 # echo 7353 > sub_cgrp_1/cgroup.procs
# cat /proc/7353/cgroup
0::/sub_cgrp_1
From the initial cgroup namespace, the real cgroup path will be
visible::
2766 $ cat /proc/7353/cgroup
2767 0::/batchjobs/container_id1/sub_cgrp_1
2769 From a sibling cgroup namespace (that is, a namespace rooted at a
2770 different cgroup), the cgroup path relative to its own cgroup
2771 namespace root will be shown. For instance, if PID 7353's cgroup
2772 namespace root is at '/batchjobs/container_id2', then it will see::
2774 # cat /proc/7353/cgroup
2775 0::/../container_id2/sub_cgrp_1
2777 Note that the relative path always starts with '/' to indicate that
2778 its relative to the cgroup namespace root of the caller.
2781 Migration and setns(2)
2782 ----------------------
2784 Processes inside a cgroup namespace can move into and out of the
2785 namespace root if they have proper access to external cgroups. For
2786 example, from inside a namespace with cgroupns root at
2787 /batchjobs/container_id1, and assuming that the global hierarchy is
2788 still accessible inside cgroupns::
# cat /proc/7353/cgroup
0::/sub_cgrp_1
2792 # echo 7353 > batchjobs/container_id2/cgroup.procs
2793 # cat /proc/7353/cgroup
2794 0::/../container_id2
2796 Note that this kind of setup is not encouraged. A task inside cgroup
2797 namespace should only be exposed to its own cgroupns hierarchy.
2799 setns(2) to another cgroup namespace is allowed when:
2801 (a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns
2805 No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching
2807 process under the target cgroup namespace root.
2810 Interaction with Other Namespaces
2811 ---------------------------------
2813 Namespace specific cgroup hierarchy can be mounted by a process
2814 running inside a non-init cgroup namespace::
2816 # mount -t cgroup2 none $MOUNT_POINT
2818 This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root. The process needs CAP_SYS_ADMIN against its user and
mount namespaces.
2822 The virtualization of /proc/self/cgroup file combined with restricting
2823 the view of cgroup hierarchy by namespace-private cgroupfs mount
2824 provides a properly isolated cgroup view inside the container.
2827 Information on Kernel Programming
2828 =================================
2830 This section contains kernel programming information in the areas
2831 where interacting with cgroup is necessary. cgroup core and
2832 controllers are not covered.
2835 Filesystem Support for Writeback
2836 --------------------------------
2838 A filesystem can support cgroup writeback by updating
2839 address_space_operations->writepage[s]() to annotate bio's using the
2840 following two functions.
2842 wbc_init_bio(@wbc, @bio)
2843 Should be called for each bio carrying writeback data and
2844 associates the bio with the inode's owner cgroup and the
2845 corresponding request queue. This must be called after
a queue (device) has been associated with the bio and
before submission.
2849 wbc_account_cgroup_owner(@wbc, @page, @bytes)
2850 Should be called for each data segment being written out.
2851 While this function doesn't care exactly when it's called
2852 during the writeback session, it's the easiest and most
2853 natural to call it as data segments are added to a bio.
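A sketch of where the two calls sit in a filesystem's writeback path
(the function below is hypothetical; only the wbc_* helpers and the
bio API are real)::

  static void myfs_submit_writeback_bio(struct writeback_control *wbc,
                                        struct bio *bio, struct page *page)
  {
          /* the bio already has its block device set; associate it
           * with the inode owner's cgroup before submission */
          wbc_init_bio(wbc, bio);

          bio_add_page(bio, page, PAGE_SIZE, 0);

          /* attribute this data segment to the owning cgroup */
          wbc_account_cgroup_owner(wbc, page, PAGE_SIZE);

          submit_bio(bio);
  }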
With writeback bios annotated, cgroup support can be enabled per
2856 super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
2857 selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.
2861 wbc_init_bio() binds the specified bio to its cgroup. Depending on
2862 the configuration, the bio may be executed at a lower priority and if
2863 the writeback session is holding shared resources, e.g. a journal
2864 entry, may lead to priority inversion. There is no one easy solution
2865 for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
2870 Deprecated v1 Core Features
2871 ===========================
2873 - Multiple hierarchies including named ones are not supported.
- None of the v1 mount options are supported.
2877 - The "tasks" file is removed and "cgroup.procs" is not sorted.
2879 - "cgroup.clone_children" is removed.
2881 - /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file
2882 at the root instead.
2885 Issues with v1 and Rationales for v2
2886 ====================================
2888 Multiple Hierarchies
2889 --------------------
2891 cgroup v1 allowed an arbitrary number of hierarchies and each
2892 hierarchy could host any number of controllers. While this seemed to
2893 provide a high level of flexibility, it wasn't useful in practice.
2895 For example, as there is only one instance of each controller, utility
2896 type controllers such as freezer which can be useful in all
2897 hierarchies could only be used in one. The issue is exacerbated by
2898 the fact that controllers couldn't be moved to another hierarchy once
2899 hierarchies were populated. Another issue was that all controllers
2900 bound to a hierarchy were forced to have exactly the same view of the
2901 hierarchy. It wasn't possible to vary the granularity depending on
2902 the specific controller.
2904 In practice, these issues heavily limited which controllers could be
2905 put on the same hierarchy and most configurations resorted to putting
2906 each controller on its own hierarchy. Only closely related ones, such
2907 as the cpu and cpuacct controllers, made sense to be put on the same
2908 hierarchy. This often meant that userland ended up managing multiple
2909 similar hierarchies repeating the same steps on each hierarchy
2910 whenever a hierarchy management operation was necessary.
2912 Furthermore, support for multiple hierarchies came at a steep cost.
2913 It greatly complicated cgroup core implementation but more importantly
2914 the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
2917 There was no limit on how many hierarchies there might be, which meant
2918 that a thread's cgroup membership couldn't be described in finite
2919 length. The key might contain any number of entries and was unlimited
2920 in length, which made it highly awkward to manipulate and led to
2921 addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of proliferating number
of hierarchies.
2925 Also, as a controller couldn't have any expectation regarding the
2926 topologies of hierarchies other controllers might be on, each
2927 controller had to assume that all other controllers were attached to
2928 completely orthogonal hierarchies. This made it impossible, or at
2929 least very cumbersome, for controllers to cooperate with each other.
2931 In most use cases, putting controllers on hierarchies which are
2932 completely orthogonal to each other isn't necessary. What usually is
2933 called for is the ability to have differing levels of granularity
2934 depending on the specific controller. In other words, hierarchy may
2935 be collapsed from leaf towards root when viewed from specific
2936 controllers. For example, a given configuration might not care about
2937 how memory is distributed beyond a certain level while still wanting
2938 to control how CPU cycles are distributed.
Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
2945 This didn't make sense for some controllers and those controllers
2946 ended up implementing different ways to ignore such situations but
2947 much more importantly it blurred the line between API exposed to
2948 individual applications and system management interface.
2950 Generally, in-process knowledge is available only to the process
2951 itself; thus, unlike service-level organization of processes,
2952 categorizing threads of a process requires active participation from
2953 the application which owns the target process.
2955 cgroup v1 had an ambiguously defined delegation model which got abused
2956 in combination with thread granularity. cgroups were delegated to
2957 individual applications so that they can create and manage their own
2958 sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.
2962 First of all, cgroup has a fundamentally inadequate interface to be
2963 exposed this way. For a process to access its own knobs, it has to
2964 extract the path on the target hierarchy from /proc/self/cgroup,
2965 construct the path by appending the name of the knob to the path, open
2966 and then read and/or write to it. This is not only extremely clunky
2967 and unusual but also inherently racy. There is no conventional way to
2968 define transaction across the required steps and nothing can guarantee
2969 that the process would actually be operating on its own sub-hierarchy.
2971 cgroup controllers implemented a number of knobs which would never be
2972 accepted as public APIs because they were just adding control knobs to
2973 system-management pseudo filesystem. cgroup ended up with interface
2974 knobs which were not properly abstracted or refined and directly
2975 revealed kernel internal details. These knobs got exposed to
2976 individual applications through the ill-defined delegation mechanism
2977 effectively abusing cgroup as a shortcut to implementing public APIs
2978 without going through the required scrutiny.
2980 This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposing and becoming locked into constructs.
2985 Competition Between Inner Nodes and Threads
2986 -------------------------------------------
2988 cgroup v1 allowed threads to be in any cgroups which created an
2989 interesting problem where threads belonging to a parent cgroup and its
2990 children cgroups competed for resources. This was nasty as two
2991 different types of entities competed and there was no obvious way to
2992 settle it. Different controllers did different things.
2994 The cpu controller considered threads and cgroups as equivalents and
2995 mapped nice levels to cgroup weights. This worked for some cases but
2996 fell flat when children wanted to be allocated specific ratios of CPU
2997 cycles and the number of internal threads fluctuated - the ratios
2998 constantly changed as the number of competing entities fluctuated.
2999 There also were other issues. The mapping from nice level to weight
3000 wasn't obvious or universal, and there were various other knobs which
3001 simply weren't available for threads.
3003 The io controller implicitly created a hidden leaf node for each
3004 cgroup to host the threads. The hidden leaf had its own copies of all
3005 the knobs with ``leaf_`` prefixed. While this allowed equivalent
3006 control over internal threads, it was with serious drawbacks. It
3007 always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.
3011 The memory controller didn't have a way to control what happened
3012 between internal tasks and child cgroups and the behavior was not
3013 clearly defined. There were attempts to add ad-hoc behaviors and
3014 knobs to tailor the behavior to specific workloads which would have
3015 led to problems extremely difficult to resolve in the long term.
3017 Multiple controllers struggled with internal tasks and came up with
3018 different ways to deal with it; unfortunately, all the approaches were
3019 severely flawed and, furthermore, the widely different behaviors
3020 made cgroup as a whole highly inconsistent.
This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.
3026 Other Interface Issues
3027 ----------------------
3029 cgroup v1 grew without oversight and developed a large number of
3030 idiosyncrasies and inconsistencies. One issue on the cgroup core side
3031 was how an empty cgroup was notified - a userland helper binary was
3032 forked and executed for each event. The event delivery wasn't
3033 recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further complicating
the interface.
3037 Controller interfaces were problematic too. An extreme example is
3038 controllers completely ignoring hierarchical organization and treating
3039 all cgroups as if they were all located directly under the root
3040 cgroup. Some controllers exposed a large amount of inconsistent
3041 implementation details to userland.
3043 There also was no consistency across controllers. When a new cgroup
3044 was created, some controllers defaulted to not imposing extra
3045 restrictions while others disallowed any resource usage until
3046 explicitly configured. Configuration knobs for the same type of
3047 control used widely differing naming schemes and formats. Statistics
3048 and information knobs were named arbitrarily and used different
3049 formats and units even in the same controller.
3051 cgroup v2 establishes common conventions where appropriate and updates
3052 controllers so that they expose minimal and consistent interfaces.
3055 Controller Issues and Remedies
3056 ------------------------------
Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
3062 that is per default unset. As a result, the set of cgroups that
3063 global reclaim prefers is opt-in, rather than opt-out. The costs for
3064 optimizing these mostly negative lookups are so high that the
3065 implementation, despite its enormous size, does not even provide the
3066 basic desirable behavior. First off, the soft limit has no
3067 hierarchical meaning. All configured groups are organized in a global
3068 rbtree and treated like equal peers, regardless where they are located
3069 in the hierarchy. This makes subtree delegation impossible. Second,
3070 the soft limit reclaim pass is so aggressive that it not just
3071 introduces high allocation latencies into the system, but also impacts
3072 system performance due to overreclaim, to the point where the feature
3073 becomes self-defeating.
The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

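For illustration, consider a delegated subtree (names and sizes are
hypothetical); a child's protection can never exceed what its
parent's own reserve passes down::

    parent      memory.low = 10G
        job-a   memory.low = 6G    # protected up to 6G while the
        job-b   memory.low = 4G    # subtree stays within parent's 10G

Reclaim spares each child while it is within its effective low and
applies pressure proportional to its overage once it exceeds it.
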
The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

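A minimal tuning loop might look like the following sketch (the path
and size are examples; ``memory.events`` and ``memory.pressure`` are
the usual files for observing the effects)::

    # echo 4G > /sys/fs/cgroup/workload/memory.high
    # cat /sys/fs/cgroup/workload/memory.events    # "high" counts boundary hits
    # cat /sys/fs/cgroup/workload/memory.pressure  # PSI stall time from reclaim

If the pressure numbers stay acceptable, lower memory.high further;
if the workload starts degrading, back off.
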
In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.

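For example (values are hypothetical), lowering the limit below the
current usage succeeds immediately and the kernel then works the
usage down, instead of the write itself failing as it could with the
v1 knob::

    # cat memory.current
    734003200
    # echo 500M > memory.max    # takes effect at once; reclaim/OOM follows
    # cat memory.max
    524288000
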
The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin can not assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.

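Concretely, v2 exposes dedicated swap files alongside the memory
ones (the size below is an example)::

    # echo 2G > memory.swap.max    # cap swap consumption on its own
    # cat memory.swap.current      # swap currently used by this cgroup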