powerpc/fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
author Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Fri, 18 Oct 2024 16:17:56 +0000 (21:47 +0530)
committer Michael Ellerman <mpe@ellerman.id.au>
Mon, 21 Oct 2024 04:26:50 +0000 (15:26 +1100)
commit 6faeac507beb2935d9171a01c3877b0505689c58
tree 1b873a3f968676b785deb091241ddbb96de6d25a
parent adfaec30ffaceecd565e06adae367aa944acc3c9
powerpc/fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem

This patch refactors all CMA-related initialization and alignment code
into fadump_cma_init(), which is called at the end. This also means that
[reserve_dump_area_start, boot_memory_size] is kept page aligned during
fadump_reserve_mem(). Later, fadump_cma_init() extracts the suitably
aligned chunk from that range and provides it to CMA. This inherently also
fixes an issue in the current code where reserve_dump_area_start is not
aligned when the physical memory has holes and the suitable chunk starts
at an unaligned boundary.

After this, we should be able to call fadump_cma_init() independently
later in setup_arch(), by which point pageblock_order is non-zero.
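
Why pageblock_order matters here: CMA's minimum alignment is derived from
pageblock_order (CMA_MIN_ALIGNMENT_BYTES is PAGE_SIZE * pageblock_nr_pages
in the kernel), and pageblock_order is still 0 very early in boot. The
sketch below only illustrates that relationship; the 64K page size and the
pageblock_order value of 8 are assumptions for the example:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SZ (64ULL * 1024)   /* assumed 64K pages */

/* Mirrors CMA_MIN_ALIGNMENT_BYTES = PAGE_SIZE * (1 << pageblock_order). */
static uint64_t cma_min_alignment(unsigned int pageblock_order)
{
        return PAGE_SZ * (1ULL << pageblock_order);
}

int main(void)
{
        /* Too early in boot: pageblock_order is not set up yet. */
        printf("pageblock_order=0 -> alignment %#llx (only PAGE_SIZE)\n",
               (unsigned long long)cma_min_alignment(0));

        /* Later, in setup_arch(): an illustrative non-zero value. */
        printf("pageblock_order=8 -> alignment %#llx\n",
               (unsigned long long)cma_min_alignment(8));
        return 0;
}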

Suggested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Acked-by: Hari Bathini <hbathini@linux.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://patch.msgid.link/805d6b900968fb9402ad8f4e4775597db42085c4.1729146153.git.ritesh.list@gmail.com
arch/powerpc/kernel/fadump.c