# SPDX-License-Identifier: GPL-2.0-only
menu "Xen driver support"

config XEN_BALLOON
	bool "Xen memory balloon driver"
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.

config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup. It is very useful on critical systems which require
	  long uptimes without rebooting.

	  It is also very useful for non-PV domains to obtain unpopulated
	  physical memory ranges to use in order to map foreign memory or
	  grants.

	  Memory can be hotplugged in the following steps:

	    1) target domain: ensure that the memory auto-online policy is in
	       effect by checking the
	       /sys/devices/system/memory/auto_online_blocks file (it should
	       read 'online').

	    2) control domain: xl mem-max <target-domain> <maxmem>
	       where <maxmem> is >= the requested memory size.

	    3) control domain: xl mem-set <target-domain> <memory>
	       where <memory> is the requested memory size; alternatively,
	       memory can be added by writing the proper value to
	       /sys/devices/system/xen_memory/xen_memory0/target or
	       /sys/devices/system/xen_memory/xen_memory0/target_kb on the
	       target domain (an example is given at the end of this help
	       text).

	  Alternatively, if memory auto-onlining was not requested at step 1,
	  the newly added memory can be manually onlined in the target domain
	  by doing the following:

	    for i in /sys/devices/system/memory/memory*/state; do \
	      [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  or by adding the following line to the udev rules:

	    SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"
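
	  As an illustration of the sysfs alternative in step 3 (assuming a
	  requested allocation of 16 GiB, i.e. 16777216 kB), the target domain
	  could set its own balloon target with:

	    echo 16777216 > /sys/devices/system/xen_memory/xen_memory0/target_kb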

config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
	int "Hotplugged memory limit (in GiB) for a PV guest"
	depends on XEN_HAVE_PVMMU
	depends on XEN_BALLOON_MEMORY_HOTPLUG
	help
	  Maximum amount of memory (in GiB) that a PV guest can be
	  expanded to when using memory hotplug.

	  A PV guest can have more memory than this limit if it is
	  started with a larger maximum.

	  This value is used to allocate enough space in internal
	  tables needed for physical memory administration.

config XEN_SCRUB_PAGES_DEFAULT
	bool "Scrub pages before returning them to system by default"
	depends on XEN_BALLOON
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains. This makes sure that any confidential data
	  is not accidentally visible to other domains. It is more
	  secure, but slightly less efficient. This can be controlled with
	  the xen_scrub_pages=0 parameter and
	  /sys/devices/system/xen_memory/xen_memory0/scrub_pages.
	  This option only sets the default value.
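
	  For example, scrubbing enabled by this option could later be turned
	  off on a running system with:

	    echo 0 > /sys/devices/system/xen_memory/xen_memory0/scrub_pages

	  or disabled at boot with the xen_scrub_pages=0 kernel parameter.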

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  update.

config XEN_BACKEND
	bool "Backend driver support"
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.
	  If in doubt, say yes.
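
	  For example, the filesystem is conventionally mounted at /proc/xen
	  (the mount point is a convention, not a requirement):

	    mount -t xenfs xenfs /proc/xen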

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem. Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a xen platform.
	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	select SYS_HYPERVISOR
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment. When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.
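
	  For example, on a Xen guest the hypervisor type and version can
	  typically be read with:

	    cat /sys/hypervisor/type
	    cat /sys/hypervisor/version/major /sys/hypervisor/version/minor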

config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	help
	  Allows userspace processes to use grants.

config XEN_GNTDEV_DMABUF
	bool "Add support for dma-buf grant access device driver extension"
	depends on XEN_GNTDEV && XEN_GRANT_DMA_ALLOC
	select DMA_SHARED_BUFFER
	help
	  Allows userspace processes and kernel modules to use a Xen-backed
	  dma-buf implementation. With this extension, grant references to
	  the pages of an imported dma-buf can be exported for use by another
	  domain, and grant references coming from a foreign domain can be
	  converted into a local dma-buf for local export.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config XEN_GRANT_DMA_ALLOC
	bool "Allow allocating DMA capable buffers with grant reference module"
	depends on XEN && HAS_DMA
	help
	  Extends the grant table module API to allow allocating DMA-capable
	  buffers and mapping foreign grant references on top of them.
	  The resulting buffer is similar to one allocated by the balloon
	  driver in that proper memory reservation is made by
	  ({increase|decrease}_reservation and VA mappings are updated if
	  needed).

	  This is useful for sharing foreign buffers with HW drivers which
	  cannot work with scattered buffers provided by the balloon driver,
	  but require DMA-able memory instead.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.
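
	  For example, loading the backend as a module with host-like PCI
	  topology for the guest might look like:

	    modprobe xen-pciback passthrough=1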

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind PCI devices to this
	  module instead of their default device drivers. The argument is a
	  list of PCI BDFs: xen-pciback.hide=(03:00.0)(04:00.0)

config XEN_PVCALLS_FRONTEND
	tristate "XEN PV Calls frontend driver"
	depends on INET && XEN
	select XEN_XENBUS_FRONTEND
	help
	  Experimental frontend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  sends a small set of POSIX calls to the backend, which
	  implements them.

config XEN_PVCALLS_BACKEND
	bool "XEN PV Calls backend driver"
	depends on INET && XEN && XEN_BACKEND
	help
	  Experimental backend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  allows PV Calls frontends to send POSIX calls to the backend,
	  which implements them.

config XEN_SCSI_BACKEND
	tristate "XEN SCSI backend driver"
	depends on XEN && XEN_BACKEND && TARGET_CORE
	help
	  The SCSI backend driver allows the kernel to export its SCSI devices
	  to other guests via a high-performance shared-memory interface.
	  Only needed for systems running as XEN driver domains (e.g. Dom0) and
	  if guests need generic access to SCSI devices.

config XEN_STUB
	bool "Xen stub drivers"
	depends on XEN && X86_64 && BROKEN
	help
	  Allow the kernel to install stub drivers, to reserve space for Xen
	  drivers, i.e. memory hotplug and cpu hotplug, and to block native
	  drivers from being loaded, so that the real Xen drivers can be
	  modular.

	  To enable Xen features like cpu and memory hotplug, select Y here.

config XEN_ACPI_HOTPLUG_MEMORY
	tristate "Xen ACPI memory hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	help
	  This is Xen ACPI memory hotplug.

	  Currently Xen only supports ACPI memory hot-add. If you want
	  to hot-add memory at runtime (the hot-added memory cannot be
	  removed until the machine is stopped), select Y/M here; otherwise
	  select N.

config XEN_ACPI_HOTPLUG_CPU
	tristate "Xen ACPI cpu hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	select ACPI_CONTAINER
	help
	  Xen ACPI cpu enumeration and hotplugging.

	  For hotplugging, currently Xen only supports ACPI cpu hot-add.
	  If you want to hot-add cpus at runtime (the hot-added cpus cannot
	  be removed until the machine is stopped), select Y/M here.

config XEN_ACPI_PROCESSOR
	tristate "Xen ACPI processor"
	depends on XEN && XEN_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
	help
	  This ACPI processor uploads Power Management information to the Xen
	  hypervisor.

	  To do that the driver parses the Power Management data and uploads
	  said information to the Xen hypervisor. Then the Xen hypervisor can
	  select the proper Cx and Px states. It also registers itself as the
	  SMM so that other drivers (such as the ACPI cpufreq scaling driver)
	  will not load.

	  To compile this driver as a module, choose M here: the module will be
	  called xen_acpi_processor. If you do not know what to choose, select
	  M here. If the CPUFREQ drivers are built in, select Y here.

config XEN_MCE_LOG
	bool "Xen platform mcelog"
	depends on XEN_DOM0 && X86_MCE
	help
	  Allow the kernel to fetch MCE errors from the Xen platform and
	  convert them into the Linux mcelog format for mcelog tools.

config XEN_HAVE_PVMMU
	bool

config XEN_EFI
	def_bool y
	depends on (ARM || ARM64 || X86_64) && EFI

config XEN_AUTO_XLATE
	def_bool y
	depends on ARM || ARM64 || XEN_PVHVM
	help
	  Support for auto-translated physmap guests.

config XEN_ACPI
	def_bool y
	depends on X86 && ACPI

config XEN_SYMS
	bool "Xen symbols"
	depends on X86 && XEN_DOM0 && XENFS
	default y if KALLSYMS
	help
	  Exports hypervisor symbols (along with their types and addresses) via
	  the /proc/xen/xensyms file, similar to /proc/kallsyms.
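
	  For example, the exported symbols could be inspected with something
	  like:

	    grep hypercall /proc/xen/xensyms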

config XEN_FRONT_PGDIR_SHBUF