arm64: mm: reserve CMA and crashkernel in ZONE_DMA32
author Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Thu, 7 Nov 2019 09:56:11 +0000 (10:56 +0100)
committer Catalin Marinas <catalin.marinas@arm.com>
Thu, 7 Nov 2019 11:22:20 +0000 (11:22 +0000)
With the introduction of ZONE_DMA in arm64 we moved the default CMA and
crashkernel reservations into that area. This caused a regression on
machines that need large CMA and crashkernel reservations, since
ZONE_DMA is only 1 GB in size.
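
For reference, the two limits involved are plain physical-address
ceilings. A rough sketch of the arithmetic, assuming DRAM starts at
physical address 0 (the helper name mirrors max_zone_phys() in
arch/arm64/mm/init.c, but this is a simplified illustration rather
than the literal kernel code):

	#include <linux/kernel.h>	/* min_t() */
	#include <linux/memblock.h>	/* memblock_end_of_DRAM() */

	/* Ceiling of an N-bit DMA zone, clamped to the end of DRAM. */
	static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
	{
		return min_t(phys_addr_t, 1ULL << zone_bits,
			     memblock_end_of_DRAM());
	}

	/* arm64_dma_phys_limit   = max_zone_phys(30);  ZONE_DMA:   first 1 GB */
	/* arm64_dma32_phys_limit = max_zone_phys(32);  ZONE_DMA32: first 4 GB */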

Restore the previous behavior, as the vast majority of devices are fine
with reserving these in ZONE_DMA32. The ones that need them placed in
ZONE_DMA can configure that explicitly (see the example below).
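
For instance, such a system can pin both reservations below 1 GB from
the kernel command line. Illustrative values only, assuming RAM starts
at physical address 0; the syntax is the generic crashkernel=size@offset
and cma=size@start forms documented in kernel-parameters.txt:

	crashkernel=128M@0x20000000 cma=64M@0x30000000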

Fixes: 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32")
Reported-by: Qian Cai <cai@lca.pw>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arch/arm64/mm/init.c

index 35f27b8..d933589 100644
@@ -91,7 +91,7 @@ static void __init reserve_crashkernel(void)
 
        if (crash_base == 0) {
                /* Current arm64 boot protocol requires 2MB alignment */
-               crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
+               crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
                                crash_size, SZ_2M);
                if (crash_base == 0) {
                        pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
@@ -459,7 +459,7 @@ void __init arm64_memblock_init(void)
 
        high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 
-       dma_contiguous_reserve(arm64_dma_phys_limit ? : arm64_dma32_phys_limit);
+       dma_contiguous_reserve(arm64_dma32_phys_limit);
 }
 
 void __init bootmem_init(void)
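
After boot, the resulting placement can be sanity-checked from the
kernel log, e.g. (illustrative output; the exact addresses, sizes and
wording depend on the configuration and kernel version):

	$ dmesg | grep -iE 'crashkernel|cma:'
	cma: Reserved 64 MiB at 0x00000000f8000000
	crashkernel reserved: 0x00000000e0000000 - 0x00000000f0000000 (256 MB)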