From: Jason Gunthorpe
Date: Tue, 2 Jul 2019 18:07:52 +0000 (-0300)
Subject: Merge branch 'hmm-devmem-cleanup.4' into rdma.git hmm
X-Git-Tag: microblaze-v5.4-rc1~289^2

Merge branch 'hmm-devmem-cleanup.4' into rdma.git hmm

Christoph Hellwig says:

====================
Below is a series that cleans up the dev_pagemap interface so that it is
more easily usable. This removes the need to wrap it in hmm and thus
allows a lot of code to be removed.

Changes since v3:
 - pull in "mm/swap: Fix release_pages() when releasing devmap pages" and
   rebase the other patches on top of that
 - fold the hmm_devmem_add_resource into the DEVICE_PUBLIC memory removal
   patch
 - remove _vm_normal_page as it isn't needed without DEVICE_PUBLIC memory
 - pick up various ACKs

Changes since v2:
 - fix nvdimm kunit build
 - add a new memory type for device dax
 - fix a few issues in intermediate patches that didn't show up in the
   end result
 - incorporate feedback from Michal Hocko, including killing of the
   DEVICE_PUBLIC memory type entirely

Changes since v1:
 - rebase
 - also switch p2pdma to the internal refcount
 - add type checking for pgmap->type
 - rename the migrate method to migrate_to_ram
 - clean up the altmap_valid flag
 - various tidbits from the reviews
====================

Conflicts resolved by:
 - Keeping Ira's version of the code in swap.c
 - Using the delete for the section in hmm.rst
 - Using the delete for the devmap code in hmm.c and .h

* branch 'hmm-devmem-cleanup.4': (24 commits)
  mm: don't select MIGRATE_VMA_HELPER from HMM_MIRROR
  mm: remove the HMM config option
  mm: sort out the DEVICE_PRIVATE Kconfig mess
  mm: simplify ZONE_DEVICE page private data
  mm: remove hmm_devmem_add
  mm: remove hmm_vma_alloc_locked_page
  nouveau: use devm_memremap_pages directly
  nouveau: use alloc_page_vma directly
  PCI/P2PDMA: use the dev_pagemap internal refcount
  device-dax: use the dev_pagemap internal refcount
  memremap: provide an optional internal refcount in struct dev_pagemap
  memremap: replace the altmap_valid field with a PGMAP_ALTMAP_VALID flag
  memremap: remove the data field in struct dev_pagemap
  memremap: add a migrate_to_ram method to struct dev_pagemap_ops
  memremap: lift the devmap_enable manipulation into devm_memremap_pages
  memremap: pass a struct dev_pagemap to ->kill and ->cleanup
  memremap: move dev_pagemap callbacks into a separate structure
  memremap: validate the pagemap type passed to devm_memremap_pages
  mm: factor out a devm_request_free_mem_region helper
  mm: export alloc_pages_vma
  ...

Signed-off-by: Jason Gunthorpe
---
cc5dfd59e375f4d0f2b64643723d16b38b2f2d78

diff --cc Documentation/vm/hmm.rst
index 7b6eeda5a7c0,50e1380950a9..7d90964abbb0
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@@ -336,33 -329,7 +336,6 @@@ directly using struct page for device m
  unaware of the difference. We only need to make sure that no one ever tries to
  map those pages from the CPU side.
  
- HMM provides a set of helpers to register and hotplug device memory as a new
- region needing a struct page. This is offered through a very simple API::
- 
-  struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
-                                    struct device *device,
-                                    unsigned long size);
-  void hmm_devmem_remove(struct hmm_devmem *devmem);
- 
- The hmm_devmem_ops is where most of the important things are::
- 
-  struct hmm_devmem_ops {
-      void (*free)(struct hmm_devmem *devmem, struct page *page);
-      vm_fault_t (*fault)(struct hmm_devmem *devmem,
-                          struct vm_area_struct *vma,
-                          unsigned long addr,
-                          struct page *page,
-                          unsigned flags,
-                          pmd_t *pmdp);
-  };
- 
- The first callback (free()) happens when the last reference on a device page is
- dropped. This means the device page is now free and no longer used by anyone.
- The second callback happens whenever the CPU tries to access a device page
- which it cannot do. This second callback must trigger a migration back to
- system memory.
- 
- 
  Migration to and from device memory
  ===================================
diff --cc mm/hmm.c
index c1bdcef403ee,d62ce64d6bca..d48b9283725a
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@@ -26,11 -25,18 +26,8 @@@
  #include 
  #include 
  
- #define PA_SECTION_SIZE		(1UL << PA_SECTION_SHIFT)
- 
- #if IS_ENABLED(CONFIG_HMM_MIRROR)
  static const struct mmu_notifier_ops hmm_mmu_notifier_ops;
  
 -static inline struct hmm *mm_get_hmm(struct mm_struct *mm)
 -{
 -	struct hmm *hmm = READ_ONCE(mm->hmm);
 -
 -	if (hmm && kref_get_unless_zero(&hmm->kref))
 -		return hmm;
 -
 -	return NULL;
 -}
 -
  /**
   * hmm_get_or_create - register HMM against an mm (HMM internal)
   *