unaware of the difference. We only need to make sure that no one ever tries to
map those pages from the CPU side.
- HMM provides a set of helpers to register and hotplug device memory as a new
- region needing a struct page. This is offered through a very simple API::
-
- struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
- struct device *device,
- unsigned long size);
- void hmm_devmem_remove(struct hmm_devmem *devmem);
-
- The hmm_devmem_ops is where most of the important things are::
-
- struct hmm_devmem_ops {
- void (*free)(struct hmm_devmem *devmem, struct page *page);
- vm_fault_t (*fault)(struct hmm_devmem *devmem,
- struct vm_area_struct *vma,
- unsigned long addr,
- struct page *page,
- unsigned flags,
- pmd_t *pmdp);
- };
-
- The first callback (free()) happens when the last reference on a device page is
- dropped. This means the device page is now free and no longer used by anyone.
- The second callback happens whenever the CPU tries to access a device page
- which it cannot do. This second callback must trigger a migration back to
- system memory.
-
--
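For context, a driver using the interface being removed above would have supplied both callbacks. The fragment below is a hypothetical sketch against the removed API: every `my_*` name is invented for illustration, and this is a kernel-side fragment, not standalone-compilable code.

```c
/* Hypothetical driver-side sketch of the removed hmm_devmem_ops.
 * All my_* identifiers are invented for illustration only. */
static void my_devmem_free(struct hmm_devmem *devmem, struct page *page)
{
	/* Last reference on the device page dropped: hand it back to the
	 * driver's own device-memory allocator (hypothetical helper). */
	my_device_page_free(devmem, page);
}

static vm_fault_t my_devmem_fault(struct hmm_devmem *devmem,
				  struct vm_area_struct *vma,
				  unsigned long addr,
				  struct page *page,
				  unsigned flags,
				  pmd_t *pmdp)
{
	/* CPU touched a device page it cannot access: migrate the page
	 * back to system memory before the fault can complete
	 * (hypothetical helper). */
	return my_migrate_to_ram(devmem, vma, addr, page);
}

static const struct hmm_devmem_ops my_devmem_ops = {
	.free  = my_devmem_free,
	.fault = my_devmem_fault,
};

/* Registration, e.g. from the driver's probe routine: */
devmem = hmm_devmem_add(&my_devmem_ops, &pdev->dev, resource_size(res));
```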
Migration to and from device memory
===================================
#include <linux/mmu_notifier.h>
#include <linux/memory_hotplug.h>
- #define PA_SECTION_SIZE (1UL << PA_SECTION_SHIFT)
-
- #if IS_ENABLED(CONFIG_HMM_MIRROR)
static const struct mmu_notifier_ops hmm_mmu_notifier_ops;
-static inline struct hmm *mm_get_hmm(struct mm_struct *mm)
-{
- struct hmm *hmm = READ_ONCE(mm->hmm);
-
- if (hmm && kref_get_unless_zero(&hmm->kref))
- return hmm;
-
- return NULL;
-}
-
/**
* hmm_get_or_create - register HMM against an mm (HMM internal)
*