path: root/arch/arm64/mm
Age  Commit message  (Author, files changed, lines -/+)
2014-04-08  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds, 2 files, -15/+34)
Pull second set of arm64 updates from Catalin Marinas: "A second pull request for this merging window, mainly with fixes and docs clarification:
 - Documentation clarification on CPU topology and booting requirements
 - Additional cache flushing during boot (needed in the presence of external caches or under virtualisation)
 - DMA range invalidation fix for non cache line aligned buffers
 - Build failure fix with !COMPAT
 - Kconfig update for STRICT_DEVMEM"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: Fix DMA range invalidation for cache line unaligned buffers
  arm64: Add missing Kconfig for CONFIG_STRICT_DEVMEM
  arm64: fix !CONFIG_COMPAT build failures
  Revert "arm64: virt: ensure visibility of __boot_cpu_mode"
  arm64: Relax the kernel cache requirements for boot
  arm64: Update the TCR_EL1 translation granule definitions for 16K pages
  ARM: topology: Make it clear that all CPUs need to be described
2014-04-08  arm64: Fix DMA range invalidation for cache line unaligned buffers  (Catalin Marinas, 1 file, -4/+11)
If the buffer needing cache invalidation for inbound DMA does not start or end on a cache line aligned address, we need to use the non-destructive clean&invalidate operation. This issue was introduced by commit 7363590d2c46 (arm64: Implement coherent DMA API based on swiotlb). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
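The fix itself is assembly (the __dma_inv_range routine in arch/arm64/mm/cache.S); the range-splitting logic it implements can be sketched in C as below. This is a sketch of the idea, not the actual patch; inval_line() and clean_and_inval_line() are hypothetical stand-ins for the DC IVAC and DC CIVAC instructions.

    /* hypothetical helpers standing in for "dc ivac" / "dc civac" */
    extern void inval_line(unsigned long line);
    extern void clean_and_inval_line(unsigned long line);

    /*
     * Invalidate [start, end) for inbound DMA without destroying data that
     * shares a cache line with the buffer's unaligned edges.
     */
    static void dma_inv_range(unsigned long start, unsigned long end,
                              unsigned long line_size)
    {
        unsigned long mask = line_size - 1;

        if (end & mask) {
            /* partial line at the end: clean+invalidate, non-destructive */
            clean_and_inval_line(end & ~mask);
            end &= ~mask;
        }
        if (start & mask) {
            /* partial line at the start: clean+invalidate as well */
            clean_and_inval_line(start & ~mask);
            start = (start & ~mask) + line_size;
        }
        /* lines fully covered by the buffer can simply be invalidated */
        for (; start < end; start += line_size)
            inval_line(start);
    }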
2014-04-07  arm64: add early_ioremap support  (Mark Salter, 2 files, -41/+85)
Add support for early IO or memory mappings which are needed before the normal ioremap() is usable. This also adds fixmap support for permanent fixed mappings such as that used by the earlyprintk device register region. Signed-off-by: Mark Salter <msalter@redhat.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Borislav Petkov <borislav.petkov@amd.com> Cc: Dave Young <dyoung@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
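A hedged usage sketch of the interface this adds; the physical address, size and register offset below are made up for illustration, and early_ioremap()/early_iounmap() are only valid in early boot, before the normal ioremap() machinery is available.

    #include <linux/io.h>   /* early_ioremap(), early_iounmap(), readl() */

    static void __init early_peek_at_device(void)
    {
        /* 0x09000000 and 0x1000 are illustrative values, not a real platform map */
        void __iomem *base = early_ioremap(0x09000000, 0x1000);
        u32 val;

        if (!base)
            return;
        val = readl(base + 0x18);   /* example register offset */
        early_iounmap(base, 0x1000);
        (void)val;
    }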
2014-04-07  arm64: initialize pgprot info earlier in boot  (Mark Salter, 1 file, -2/+1)
Presently, paging_init() calls init_mem_pgprot() to initialize pgprot values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc. The new fixmap and early_ioremap support also needs to use these macros before paging_init() is called. This patch moves the init_mem_pgprot() call out of paging_init() and into setup_arch() so that pgprot_default gets initialized in time for fixmap and early_ioremap. Signed-off-by: Mark Salter <msalter@redhat.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Borislav Petkov <borislav.petkov@amd.com> Cc: Dave Young <dyoung@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-04-05  arm64: Relax the kernel cache requirements for boot  (Catalin Marinas, 1 file, -0/+9)
With system caches for the host OS or architected caches for guest OS we cannot easily guarantee that there are no dirty or stale cache lines for the areas of memory written by the kernel during boot with the MMU off (therefore non-cacheable accesses). This patch adds the necessary cache maintenance during boot and relaxes the booting requirements. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-03  arm64: Update the TCR_EL1 translation granule definitions for 16K pages  (Catalin Marinas, 1 file, -11/+14)
The current TCR register setting in arch/arm64/mm/proc.S assumes that TCR_EL1.TG* fields are one bit wide and bit 31 is RES1 (reserved, set to 1). With the addition of 16K pages (currently unsupported in the kernel), the TCR_EL1.TG* fields have been extended to two bits. This patch updates the corresponding Linux definitions and drops the bit 31 setting in proc.S in favour of the new macros. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Joe Sylve <joe.sylve@gmail.com>
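For reference, the two-bit granule encodings behind the new macros look like this (the values follow the ARMv8 TCR_EL1.TG0/TG1 field definitions; the exact macro names in pgtable-hwdef.h may differ from this sketch):

    /* TCR_EL1.TG0, bits [15:14]: granule for TTBR0_EL1 */
    #define TCR_TG0_4K      (0UL << 14)
    #define TCR_TG0_64K     (1UL << 14)
    #define TCR_TG0_16K     (2UL << 14)

    /* TCR_EL1.TG1, bits [31:30]: granule for TTBR1_EL1 (note the different encoding) */
    #define TCR_TG1_16K     (1UL << 30)
    #define TCR_TG1_4K      (2UL << 30)
    #define TCR_TG1_64K     (3UL << 30)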
2014-04-02  Merge tag 'dt-for-linus' of git://git.secretlab.ca/git/linux  (Linus Torvalds, 1 file, -0/+1)
Pull devicetree changes from Grant Likely: "Updates to devicetree core code. This branch contains the following notable changes:
 - add reserved memory binding
 - make struct device_node a kobject and remove legacy /proc/device-tree
 - ePAPR conformance fixes
 - update in-kernel DTC copy to version v1.4.0
 - preparatory changes for dynamic device tree overlays
 - minor bug fixes and documentation changes
The most significant change in this branch is the conversion of struct device_node to be a kobject that is exposed via sysfs and removal of the old /proc/device-tree code. This simplifies the device tree handling code and tightens up the lifecycle on device tree nodes. [updated: added fix for dangling select PROC_DEVICETREE]"
* tag 'dt-for-linus' of git://git.secretlab.ca/git/linux: (29 commits)
  dt: Remove dangling "select PROC_DEVICETREE"
  of: Add support for ePAPR "stdout-path" property
  of: device_node kobject lifecycle fixes
  of: only scan for reserved mem when fdt present
  powerpc: add support for reserved memory defined by device tree
  arm64: add support for reserved memory defined by device tree
  of: add missing major vendors
  of: add vendor prefix for SMSC
  of: remove /proc/device-tree
  of/selftest: Add self tests for manipulation of properties
  of: Make device nodes kobjects so they show up in sysfs
  arm: add support for reserved memory defined by device tree
  drivers: of: add support for custom reserved memory drivers
  drivers: of: add initialization code for dynamic reserved memory
  drivers: of: add initialization code for static reserved memory
  of: document bindings for reserved-memory nodes
  Revert "of: fix of_update_property()"
  kbuild: dtbs_install: new make target
  ARM: mvebu: Allows to get the SoC ID even without PCI enabled
  of: Allows to use the PCI translator without the PCI core
  ...
2014-03-24  arm64: Remove pgprot_dmacoherent()  (Catalin Marinas, 1 file, -3/+1)
Since this macro is identical to pgprot_writecombine() and is only used in a single place, remove it completely to avoid confusion. On ARMv7+ processors, the coherent DMA mapping must be Normal NonCacheable (a.k.a. writecombine) to avoid mismatched hardware attribute aliases (with the kernel linear mapping as Normal Cacheable). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-03-24  arm64: Support DMA_ATTR_WRITE_COMBINE  (Laura Abbott, 1 file, -2/+12)
DMA_ATTR_WRITE_COMBINE is currently ignored. Set the pgprot appropriately for non-coherent operations. Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-03-24  arm64: Implement custom mmap functions for dma mapping  (Laura Abbott, 1 file, -0/+44)
The current dma_ops do not specify an mmap function so mapping falls back to the default implementation. There are at least two issues with using the default implementation: 1) The pgprot is always pgprot_noncached (strongly ordered) memory even with coherent operations 2) dma_common_mmap calls virt_to_page on the remapped non-coherent address which leads to invalid memory being mapped. Fix both of these issues by implementing a custom mmap function which correctly accounts for remapped addresses and sets vm_page_prot appropriately. Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [catalin.marinas@arm.com: replaced "arm64_" with "__" prefix for consistency] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
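The shape of such a callback can be sketched as below. This is not the exact patch (the real dma_map_ops mmap hook takes additional cpu_addr/attrs parameters and the function name here is hypothetical), but it shows the two fixes: a pgprot matching the non-coherent mapping, and a pfn derived from the DMA handle rather than virt_to_page() on the remapped address.

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>

    static int __dma_mmap_sketch(struct device *dev, struct vm_area_struct *vma,
                                 dma_addr_t dma_addr, size_t size)
    {
        unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
        unsigned long pfn = __phys_to_pfn(dma_to_phys(dev, dma_addr));
        unsigned long off = vma->vm_pgoff;

        /* 1) writecombine, not the default strongly-ordered pgprot_noncached() */
        vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);

        if (off >= nr_pages || vma_pages(vma) > nr_pages - off)
            return -ENXIO;

        /* 2) map the pfn behind the dma handle, not virt_to_page(cpu_addr) */
        return remap_pfn_range(vma, vma->vm_start, pfn + off,
                               vma->vm_end - vma->vm_start, vma->vm_page_prot);
    }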
2014-03-13  arm64: Add boot time configuration of Intermediate Physical Address size  (Radha Mohan Chintakuntla, 1 file, -1/+7)
ARMv8 supports a range of physical address bit sizes. The PARange bits from the ID_AA64MMFR0_EL1 register are read during boot time and the intermediate physical address size bits are written into the translation control registers (TCR_EL1 and VTCR_EL2). There is no change in the VA bits and levels of translation. Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com> Reviewed-by: Will Deacon <Will.deacon@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
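In C terms the boot-time setup amounts to the following sketch (the actual patch does this in assembly with mrs/bfi; the field positions are ID_AA64MMFR0_EL1.PARange bits [3:0] and TCR_EL1.IPS bits [34:32]):

    #include <linux/types.h>

    static inline u64 tcr_set_ips(u64 tcr)
    {
        u64 mmfr0;

        /* read the supported physical address range */
        asm volatile("mrs %0, id_aa64mmfr0_el1" : "=r" (mmfr0));

        tcr &= ~(0x7UL << 32);          /* clear TCR_EL1.IPS                 */
        tcr |= (mmfr0 & 0x7) << 32;     /* copy PARange bits into IPS        */
        return tcr;
    }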
2014-03-13  arm64: add support for reserved memory defined by device tree  (Marek Szyprowski, 1 file, -0/+1)
Enable reserved memory initialization from device tree. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Grant Likely <grant.likely@linaro.org>
2014-03-04  arm64: remove unnecessary cache flush at boot  (Mark Rutland, 2 files, -7/+1)
Currently we flush the entire dcache at boot within __cpu_setup, but this is unnecessary as the booting protocol demands that the dcache is invalid and off upon entering the kernel. The presence of the cache flush only serves to hide bugs in bootloaders, and is not safe in the presence of SMP. In an SMP boot scenario the CPUs enter coherency outside of the kernel, and the primary CPU enables its caches before bringing up secondary CPUs. Therefore if any secondary CPU has an entry in its cache (in violation of the boot protocol), the primary CPU might snoop it even if the secondary CPU's cache is disabled. The boot-time cache flush only serves to hide a firmware bug, and slows down a cpu boot unnecessarily. This patch removes the unnecessary boot-time cache flush. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> [catalin.marinas@arm.com: make __flush_dcache_all local only] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-27  arm64: Implement coherent DMA API based on swiotlb  (Catalin Marinas, 2 files, -1/+239)
This patch adds support for DMA API cache maintenance on SoCs without hardware device cache coherency. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-27  arm64: Use swiotlb late initialisation  (Catalin Marinas, 2 files, -4/+8)
Since arm64 does not support ISA, there is no need for early swiotlb initialisation. This patch switches the DMA mapping code to swiotlb_late_init_with_default_size(). A side effect of this is that GFP_DMA is used for the swiotlb buffer and devices with a 32-bit coherent mask are correctly supported. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-27  arm64: Replace ZONE_DMA32 with ZONE_DMA  (Catalin Marinas, 2 files, -17/+18)
On arm64 we do not have two DMA zones, so it does not make sense to implement ZONE_DMA32. This patch replaces ZONE_DMA32 with ZONE_DMA, the latter covering the 32-bit DMA address space to honour GFP_DMA allocations. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-26  arm64: Change misleading function names in dma-mapping  (Ritesh Harjani, 1 file, -10/+10)
The arm64_swiotlb_alloc/free_coherent name can be misleading sometimes with CMA support being enabled after this patch (c2104debc235b745265b64d610237a6833fd53). Change this name to be more generic: __dma_alloc/free_coherent Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com> [catalin.marinas@arm.com: renamed arm64_swiotlb_dma_ops to coherent_swiotlb_dma_ops] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-05  arm64: simplify pgd_alloc  (Mark Rutland, 1 file, -9/+2)
Currently pgd_alloc has a redundant NULL check in its return path that can be removed with no ill effects. With that removed it's also possible to return early and eliminate the new_pgd temporary variable. This patch applies said modifications, making the logic of pgd_alloc correspond 1-1 with that of pgd_free. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
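After the clean-up the allocator can read roughly as follows; a minimal sketch assuming the existing PGD_SIZE == PAGE_SIZE split in arch/arm64/mm/pgd.c, not necessarily the exact resulting code.

    #include <linux/gfp.h>
    #include <linux/slab.h>

    pgd_t *pgd_alloc(struct mm_struct *mm)
    {
        /* return early: no temporary variable, no redundant NULL check */
        if (PGD_SIZE == PAGE_SIZE)
            return (pgd_t *)get_zeroed_page(GFP_KERNEL);
        else
            return kzalloc(PGD_SIZE, GFP_KERNEL);
    }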
2014-02-05  arm64: Invalidate the TLB when replacing pmd entries during boot  (Catalin Marinas, 1 file, -2/+10)
With the 64K page size configuration, __create_page_tables in head.S maps enough memory to get started but using 64K pages rather than 512M sections with a single pgd/pud/pmd entry pointing to a pte table. create_mapping() may override the pgd/pud/pmd table entry with a block (section) one if the RAM size is more than 512MB and aligned correctly. For the end of this block to be accessible, the old TLB entry must be invalidated. Cc: <stable@vger.kernel.org> Reported-by: Mark Salter <msalter@redhat.com> Tested-by: Mark Salter <msalter@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-05  arm64: Align CMA sizes to PAGE_SIZE  (Laura Abbott, 1 file, -0/+1)
dma_alloc_from_contiguous takes the number of pages as its size argument. Align the dma size passed in up to page size to avoid truncation and allocation failures for sizes less than PAGE_SIZE. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
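The shape of the fix, as a sketch (dma_alloc_from_contiguous() counts in whole pages, so the byte size must be rounded up first; the wrapper name here is hypothetical):

    #include <linux/dma-contiguous.h>
    #include <linux/gfp.h>

    static struct page *cma_alloc_pages(struct device *dev, size_t size)
    {
        /* round the byte count up to whole pages before converting to a page count */
        size = PAGE_ALIGN(size);
        return dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
                                         get_order(size));
    }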
2014-01-27  arm64: mm: fix the function name in comment of cpu_do_switch_mm  (Jingoo Han, 1 file, -1/+1)
Fix the function name in the comment of cpu_do_switch_mm; cpu_do_switch_mm is the correct name. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-22  arm64: mm: fix the function name in comment of __flush_dcache_area  (Jingoo Han, 1 file, -1/+1)
Fix the function name in the comment of __flush_dcache_area; __flush_dcache_area is the correct name. Also, add the missing variable 'size' to the comment. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-22  arm64: mm: use ubfm for dcache_line_size  (Jingoo Han, 1 file, -2/+1)
Use 'ubfm', the unsigned bitfield move instruction, so that a single instruction can be used instead of two when extracting the minimum D-cache line size from the CTR_EL0 register. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19  Merge tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp into upstream  (Catalin Marinas, 1 file, -0/+69)
* tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp:
  arm64: add CPU power management menu/entries
  arm64: kernel: add PM build infrastructure
  arm64: kernel: add CPU idle call
  arm64: enable generic clockevent broadcast
  arm64: kernel: implement HW breakpoints CPU PM notifier
  arm64: kernel: refactor code to install/uninstall breakpoints
  arm: kvm: implement CPU PM notifier
  arm64: kernel: implement fpsimd CPU PM notifier
  arm64: kernel: cpu_{suspend/resume} implementation
  arm64: kernel: suspend/resume registers save/restore
  arm64: kernel: build MPIDR_EL1 hash function data structure
  arm64: kernel: add MPIDR_EL1 accessors macros
Conflicts:
  arch/arm64/Kconfig
2013-12-19  arm64: Enable CMA  (Laura Abbott, 2 files, -2/+26)
arm64 targets need the features CMA provides. Add the appropriate hooks, header files, and Kconfig to allow this to happen. Cc: Will Deacon <will.deacon@arm.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19  arm64: Warn on NULL device structure for dma APIs  (Laura Abbott, 1 file, -0/+10)
Although parts of the DMA APIs may properly check for NULL devices, there may be some places that don't. Rather than fix up all the possible locations, just require a non-NULL device structure to be used for allocating/freeing. Cc: Will Deacon <will.deacon@arm.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [catalin.marinas@arm.com: s/WARN/WARN_ONCE/] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
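The guard added at the top of the alloc/free paths looks roughly like this; the function name and message wording are illustrative, not the exact patch.

    #include <linux/dma-mapping.h>
    #include <linux/kernel.h>

    static void *__dma_alloc_sketch(struct device *dev, size_t size,
                                    dma_addr_t *dma_handle, gfp_t flags)
    {
        /* refuse (once, with a backtrace) rather than crash later */
        if (WARN_ONCE(!dev, "Use an actual device structure for DMA allocation\n"))
            return NULL;

        /* ...normal allocation path, omitted in this sketch... */
        return NULL;
    }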
2013-12-16  arm64: kernel: suspend/resume registers save/restore  (Lorenzo Pieralisi, 1 file, -0/+69)
Power management software requires the kernel to save and restore CPU registers while going through suspend and resume operations triggered by kernel subsystems like CPU idle and suspend to RAM. This patch implements code that provides a save and restore mechanism for the ARMv8 implementation. Memory for the context is passed as a parameter to both the cpu_do_suspend and cpu_do_resume functions, and allows the callers to implement context allocation as they deem fit. The registers that are saved and restored correspond to the register set actually required by the kernel to be up and running, which represents a subset of the v8 ISA. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-06  arm64: ensure completion of TLB invalidatation  (Mark Rutland, 1 file, -1/+1)
Currently there is no dsb between the tlbi in __cpu_setup and the write to SCTLR_EL1 which enables the MMU in __turn_mmu_on. This means that the TLB invalidation is not guaranteed to have completed at the point address translation is enabled, leading to a number of possible issues including incorrect translations and TLB conflict faults. This patch moves the tlbi in __cpu_setup above an existing dsb used to synchronise I-cache invalidation, ensuring that the TLBs have been invalidated at the point the MMU is enabled. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-12  Merge tag 'devicetree-for-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux  (Linus Torvalds, 1 file, -19/+6)
Pull devicetree updates from Rob Herring: "DeviceTree updates for 3.13. This is a bit larger pull request than usual for this cycle with lots of clean-up.
 - Cross arch clean-up and consolidation of early DT scanning code.
 - Clean-up and removal of arch prom.h headers. Makes arch specific prom.h optional on all but Sparc.
 - Addition of interrupts-extended property for devices connected to multiple interrupt controllers.
 - Refactoring of DT interrupt parsing code in preparation for deferred probe of interrupts.
 - ARM cpu and cpu topology bindings documentation.
 - Various DT vendor binding documentation updates"
* tag 'devicetree-for-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (82 commits)
  powerpc: add missing explicit OF includes for ppc
  dt/irq: add empty of_irq_count for !OF_IRQ
  dt: disable self-tests for !OF_IRQ
  of: irq: Fix interrupt-map entry matching
  MIPS: Netlogic: replace early_init_devtree() call
  of: Add Panasonic Corporation vendor prefix
  of: Add Chunghwa Picture Tubes Ltd. vendor prefix
  of: Add AU Optronics Corporation vendor prefix
  of/irq: Fix potential buffer overflow
  of/irq: Fix bug in interrupt parsing refactor.
  of: set dma_mask to point to coherent_dma_mask
  of: add vendor prefix for PHYTEC Messtechnik GmbH
  DT: sort vendor-prefixes.txt
  of: Add vendor prefix for Cadence
  of: Add empty for_each_available_child_of_node() macro definition
  arm/versatile: Fix versatile irq specifications.
  of/irq: create interrupts-extended property
  microblaze/pci: Drop PowerPC-ism from irq parsing
  of/irq: Create of_irq_parse_and_map_pci() to consolidate arch code.
  of/irq: Use irq_of_parse_and_map()
  ...
2013-11-07  Merge remote-tracking branch 'grant/devicetree/next' into for-next  (Rob Herring, 1 file, -1/+1)
2013-10-30  arm64: allow ioremap_cache() to use existing RAM mappings  (Mark Salter, 1 file, -2/+18)
Some drivers (ACPI notably) use ioremap_cache() to map an area which could either be outside of kernel RAM or in an already mapped reserved area of RAM. To avoid aliases with different caching attributes, ioremap() does not allow RAM to be remapped. But for ioremap_cache(), the existing kernel mapping may be used. Signed-off-by: Mark Salter <msalter@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
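In outline, the change makes ioremap_cache() reuse the linear mapping for RAM; a sketch close to, but not necessarily identical to, the code in arch/arm64/mm/ioremap.c:

    void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
    {
        /* normal RAM already has a cacheable linear mapping: reuse it rather
         * than creating a second mapping with different attributes */
        if (pfn_valid(__phys_to_pfn(phys_addr)))
            return (void __iomem *)__phys_to_virt(phys_addr);

        return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
                                __builtin_return_address(0));
    }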
2013-10-25  arm64: big-endian: set correct endianess on kernel entry  (Matthew Leach, 1 file, -2/+2)
The endianness of memory accesses at EL2 and EL1 is configured by SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted, the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must ensure that they are set before performing any memory accesses. This patch ensures that SCTLR_EL{2,1} are configured appropriately at boot for kernels of either endianness. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> [catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-09  arm64: remove unnecessary prom.h include  (Rob Herring, 1 file, -1/+0)
Remove unnecessary prom.h include in preparation to make prom.h optional. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Grant Likely <grant.likely@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
2013-10-09  arm64: set initrd_start/initrd_end for fdt scan  (Rob Herring, 1 file, -18/+6)
In order to unify the initrd scanning for DT across architectures, make arm64 use initrd_start and initrd_end instead of the physical addresses. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org
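The resulting arch hook is tiny; a sketch of what the arm64 side can look like after this change, using the pre-3.14 early_init_dt_setup_initrd_arch() interface (which later kernels replaced):

    #include <linux/initrd.h>
    #include <linux/of_fdt.h>

    void __init early_init_dt_setup_initrd_arch(u64 start, u64 end)
    {
        /* record virtual addresses; generic code now uses initrd_start/end */
        initrd_start = (unsigned long)__va(start);
        initrd_end   = (unsigned long)__va(end);
    }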
2013-09-25  arm64: use correct register width when retrieving ASID  (Matthew Leach, 1 file, -1/+1)
The ASID is represented as an unsigned int in mm_context_t and we currently use the mmid assembler macro to access this element of the struct. This should be accessed with a register of 32-bit width. If the incorrect register width is used the ASID will be returned in bits[32:63] of the register when running under big-endian. Fix a use of the mmid macro in tlb.S to use a 32-bit access. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-09-20  arm64: Make do_bad_area() function static  (Catalin Marinas, 1 file, -1/+1)
This function is only called from arch/arm64/mm/fault.c. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-09-12  arch: mm: pass userspace fault flag to generic fault handler  (Johannes Weiner, 1 file, -7/+10)
Unlike global OOM handling, memory cgroup code will invoke the OOM killer in any OOM situation because it has no way of telling faults occurring in kernel context - which could be handled more gracefully - from user-triggered faults. Pass a flag that identifies faults originating in user space from the architecture-specific fault handlers to generic code so that memcg OOM handling can be improved. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Michal Hocko <mhocko@suse.cz> Cc: David Rientjes <rientjes@google.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: azurIt <azurit@pobox.sk> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-12  arch: mm: do not invoke OOM killer on kernel fault OOM  (Johannes Weiner, 1 file, -7/+7)
Kernel faults are expected to handle OOM conditions gracefully (gup, uaccess etc.), so they should never invoke the OOM killer. Reserve this for faults triggered in user context when it is the only option. Most architectures already do this, fix up the remaining few. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Michal Hocko <mhocko@suse.cz> Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: David Rientjes <rientjes@google.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: azurIt <azurit@pobox.sk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-11  mm: migrate: check movability of hugepage in unmap_and_move_huge_page()  (Naoya Horiguchi, 1 file, -0/+5)
Currently hugepage migration works well only for pmd-based hugepages (mainly due to lack of testing), so we had better not enable migration of other levels of hugepages until we are ready for it. Some users of hugepage migration (mbind, move_pages, and migrate_pages) do a page table walk and check pud/pmd_huge() there, so they are safe. But the other users (softoffline and memory hotremove) don't do this, so without this patch they can try to migrate unexpected types of hugepages. To prevent this, we introduce hugepage_migration_support() as an architecture-dependent check of whether hugepages are implemented on a pmd basis or not. And on some architectures multiple sizes of hugepages are available, so hugepage_migration_support() also checks hugepage size. Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Hillf Danton <dhillf@gmail.com> Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Hugh Dickins <hughd@google.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Rik van Riel <riel@redhat.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-10  Merge tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux  (Linus Torvalds, 1 file, -2/+1)
Pull device tree core updates from Grant Likely: "Generally minor changes. A bunch of bug fixes, particularly for initialization and some refactoring. Most notable change is feeding the entire flattened tree into the random pool at boot. May not be significant, but shouldn't hurt either"
Tim Bird questions whether the boot time cost of the random feeding may be noticeable. And "add_device_randomness()" is definitely not some speed demon of a function.
* tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux:
  of/platform: add error reporting to of_amba_device_create()
  irq/of: Fix comment typo for irq_of_parse_and_map
  of: Feed entire flattened device tree into the random pool
  of/fdt: Clean up casting in unflattening path
  of/fdt: Remove duplicate memory clearing on FDT unflattening
  gpio: implement gpio-ranges binding document fix
  of: call __of_parse_phandle_with_args from of_parse_phandle
  of: introduce of_parse_phandle_with_fixed_args
  of: move of_parse_phandle()
  of: move documentation of of_parse_phandle_with_args
  of: Fix missing memory initialization on FDT unflattening
  of: consolidate definition of early_init_dt_alloc_memory_arch()
  of: Make of_get_phy_mode() return int i.s.o. const int
  include: dt-binding: input: create a DT header defining key codes.
  of/platform: Staticize of_platform_device_create_pdata()
  of: Specify initrd location using 64-bit
  dt: Typo fix
  OF: make of_property_for_each_{u32|string}() use parameters if OF is not enabled
2013-09-03  arm64: mm: permit use of tagged pointers at EL0  (Will Deacon, 1 file, -1/+1)
TCR.TBI0 can be used to cause hardware address translation to ignore the top byte of userspace virtual addresses. Whilst not especially useful in standard C programs, this can be used by JITs to `tag' pointers with various pieces of metadata. This patch enables this bit for AArch64 Linux, and adds a new file to Documentation/arm64/ which describes some potential caveats when using tagged virtual addresses. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
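From userspace the feature can be exercised like this; an illustrative example (not taken from the patch or its documentation), valid only on AArch64 Linux, where TCR.TBI0 makes hardware address translation ignore bits [63:56] of user pointers:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* pack a small metadata tag into the ignored top byte of a pointer */
    static void *tag_ptr(void *p, uint8_t tag)
    {
        return (void *)(((uintptr_t)p & ~(0xffULL << 56)) |
                        ((uintptr_t)tag << 56));
    }

    int main(void)
    {
        int *p = malloc(sizeof(*p));
        int *tp;

        if (!p)
            return 1;
        tp = tag_ptr(p, 0x2a);
        *tp = 42;               /* the tagged alias dereferences normally */
        printf("%d\n", *p);
        free(p);                /* free the original, untagged pointer */
        return 0;
    }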
2013-09-02  arm64: Remove unused cpu_name ascii in arch/arm64/mm/proc.S  (Catalin Marinas, 1 file, -4/+0)
This string has been moved to arch/arm64/kernel/cputable.c. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-08-28  arm64: Fix mapping of memory banks not ending on a PMD_SIZE boundary  (Catalin Marinas, 1 file, -2/+21)
The map_mem() function limits the current memblock limit to PGDIR_SIZE (the initial swapper_pg_dir mapping) to avoid create_mapping() allocating memory from unmapped areas. However, if the first block is within PGDIR_SIZE and not ending on a PMD_SIZE boundary, when the 4K page configuration is enabled, create_mapping() will try to allocate a pte page. Such a page may be returned by memblock_alloc() from the end of such a bank (or any subsequent bank within PGDIR_SIZE) which is not mapped yet. The patch limits the current memblock limit to the aligned end of the first bank and gradually increases it as more memory is mapped. It also ensures that the start of the first bank is aligned to PMD_SIZE to avoid pte page allocation for this mapping. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com> Tested-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
2013-07-24  of: Specify initrd location using 64-bit  (Santosh Shilimkar, 1 file, -2/+1)
On some PAE architectures, the entire range of physical memory could reside outside the 32-bit limit. These systems need the ability to specify the initrd location using 64-bit numbers. This patch globally modifies the early_init_dt_setup_initrd_arch() function to use 64-bit numbers instead of the current unsigned long. There has been quite a bit of debate about whether to use u64 or phys_addr_t. It was concluded to stick to u64 to be consistent with rest of the device tree code. As summarized by Geert, "The address to load the initrd is decided by the bootloader/user and set at that point later in time. The dtb should not be tied to the kernel you are booting" More details on the discussion can be found here: https://lkml.org/lkml/2013/6/20/690 https://lkml.org/lkml/2012/9/13/544 Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Vineet Gupta <vgupta@synopsys.com> Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Signed-off-by: Grant Likely <grant.likely@linaro.org>
2013-07-19  arm64: mm: don't treat user cache maintenance faults as writes  (Will Deacon, 1 file, -26/+20)
On arm64, cache maintenance faults appear as data aborts with the CM bit set in the ESR. The WnR bit, usually used to distinguish between faulting loads and stores, always reads as 1 and (slightly confusingly) the instructions are treated as reads by the architecture. This patch fixes our fault handling code to treat cache maintenance faults in the same way as loads. Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
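In C terms the decision reduces to a small predicate; the bit positions below are the ESR_ELx ISS fields for data aborts (WnR is bit 6, CM is bit 8), while the macro and function names are illustrative rather than the kernel's own:

    #define ESR_WNR_BIT   (1u << 6)   /* Write-not-Read               */
    #define ESR_CM_BIT    (1u << 8)   /* Cache Maintenance operation  */

    /* Cache maintenance faults report WnR == 1 but must be treated as reads. */
    static inline int fault_is_write(unsigned int esr)
    {
        return (esr & ESR_WNR_BIT) && !(esr & ESR_CM_BIT);
    }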
2013-07-10  mm: remove free_area_cache  (Michel Lespinasse, 1 file, -2/+0)
Since all architectures have been converted to use vm_unmapped_area(), there is no remaining use for the free_area_cache. Signed-off-by: Michel Lespinasse <walken@google.com> Acked-by: Rik van Riel <riel@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: David Howells <dhowells@redhat.com> Cc: Helge Deller <deller@gmx.de> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm/microblaze: prepare for removing num_physpages and simplify mem_init()  (Jiang Liu, 1 file, -1/+1)
Prepare for removing num_physpages and simplify mem_init(). Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Michal Simek <monstr@monstr.eu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm/ARM64: prepare for removing num_physpages and simplify mem_init()  (Jiang Liu, 1 file, -45/+3)
Prepare for removing num_physpages and simplify mem_init(). Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm: concentrate modification of totalram_pages into the mm core  (Jiang Liu, 1 file, -1/+1)
Concentrate code to modify totalram_pages into the mm core, so the arch memory initialization code doesn't need to take care of it. With these changes applied, only the following functions from the mm core modify the global variable totalram_pages: free_bootmem_late(), free_all_bootmem(), free_all_bootmem_node(), adjust_managed_page_count(). With this patch applied, it will be much easier for us to keep totalram_pages and zone->managed_pages consistent. Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Acked-by: David Howells <dhowells@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: <sworddragon2@aol.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jianguo Wu <wujianguo@huawei.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Michel Lespinasse <walken@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tang Chen <tangchen@cn.fujitsu.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wen Congyang <wency@cn.fujitsu.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm/ARM64: kill poison_init_mem()  (Jiang Liu, 1 file, -14/+3)
Use free_reserved_area() to poison initmem memory pages and kill poison_init_mem() on ARM64. Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: <sworddragon2@aol.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: David Howells <dhowells@redhat.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jianguo Wu <wujianguo@huawei.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Michel Lespinasse <walken@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tang Chen <tangchen@cn.fujitsu.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wen Congyang <wency@cn.fujitsu.com> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>