path: root/arch/sh/mm
Age  Commit message  Author  Files  Lines
2010-04-26  Merge branch 'sh/stable-updates'  Paul Mundt  7  -3/+4
Conflicts:
    arch/sh/kernel/dwarf.c
    drivers/dma/shdma.c

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-26  Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/mfleming/sh-2.6  Paul Mundt  1  -1/+1
* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/mfleming/sh-2.6:
  sh: Use correct mask when comparing PMB DATA array values
  sh: Do not try merging two 128MB PMB mappings
  sh: Fix zImage load address when CONFIG_32BIT=y
  sh: Fix address to decompress at when CONFIG_32BIT=y
  sh: Assembly friendly __pa and __va definitions
2010-04-26  sh: invoke oom-killer from page fault  Nick Piggin  2  -24/+8
As explained in commit 1c0fe6e3bd, we want to call the architecture independent oom killer when getting an unexplained OOM from handle_mm_fault, rather than simply killing current. Cc: linux-sh@vger.kernel.org Cc: linux-arch@vger.kernel.org Signed-off-by: Nick Piggin <npiggin@suse.de> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
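For reference, a minimal sketch of the resulting fault-path shape; the label and lock names follow the generic pattern of this era rather than the exact arch/sh code:

    /*
     * do_page_fault() tail, sketched: instead of killing current directly,
     * defer to the architecture-independent OOM handling.
     */
    out_of_memory:
            up_read(&mm->mmap_sem);
            if (!user_mode(regs))
                    goto no_context;
            pagefault_out_of_memory();
            return;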
2010-04-25  sh: Do not try merging two 128MB PMB mappings  Matt Fleming  1  -1/+1
There is a logic error in pmb_merge() that means we will incorrectly try to merge two 128MB PMB mappings into one mapping. However, 256MB isn't a valid PMB map size and pmb_merge() will actually drop the second 128MB mapping. This patch allows my SDK7786 board to boot when configured with CONFIG_MEMORY_SIZE=0x10000000. Signed-off-by: Matt Fleming <matt@console-pimps.org>
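A sketch of the size check implied here; the helper name is illustrative. PMB entries can only map 16MB, 64MB, 128MB or 512MB, so a 256MB merge result has to be rejected:

    static int pmb_size_valid(unsigned long size)
    {
            switch (size) {
            case 16 << 20:          /*  16MB */
            case 64 << 20:          /*  64MB */
            case 128 << 20:         /* 128MB */
            case 512 << 20:         /* 512MB */
                    return 1;
            }
            return 0;               /* e.g. two merged 128MB entries -> 256MB */
    }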
2010-04-20  sh: Zero out aliases counter when using SH-X3 hardware assistance.  Paul Mundt  1  -1/+10
This zeroes out the number of cache aliases in the cache info descriptors when hardware alias avoidance is enabled. This cuts down on the amount of flushing taken care of by common code, and also permits coherency control to be disabled for the single CPU and 4k page size case. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
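Schematically, the change amounts to something like the following; the predicate is hypothetical and the field names merely follow the arch/sh cache_info layout:

    /* With hardware synonym avoidance active, common code no longer
     * needs to treat the caches as aliasing.
     */
    if (hw_alias_avoidance_enabled) {       /* hypothetical predicate */
            boot_cpu_data.dcache.n_aliases = 0;
            boot_cpu_data.icache.n_aliases = 0;
    }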
2010-04-19  sh: Enable SH-X3 hardware synonym avoidance handling.  Paul Mundt  3  -0/+43
This enables support for the hardware synonym avoidance handling on SH-X3 CPUs for the case where dcache aliases are possible. icache handling is retained, but we flip on broadcasting of the block invalidations due to the lack of coherency otherwise on SMP. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-05  Merge branch 'sh/stable-updates'  Paul Mundt  4  -28/+75
2010-04-05  Merge branch 'master' into export-slabh  Tejun Heo  5  -33/+92
2010-04-02  sh: Fix up the SH-3 build for recent TLB changes.  Paul Mundt  4  -28/+75
While the MMUCR.URB and ITLB/UTLB differentiation works fine for all SH-4 and later TLBs, these features are absent on SH-3. This splits out local_flush_tlb_all() in to SH-4 and PTEAEX copies while restoring the old SH-3 one, subsequently fixing up the build. This will probably want some further reordering and tidying in the future, but that's out of scope at present. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-30  include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h  Tejun Heo  7  -3/+4
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It is put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to an implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from the tests in step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
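A minimal before/after illustration of the kind of change the script makes; the comments are illustrative rather than lifted from any specific sh file:

    /* Before: relying on the implicit percpu.h -> slab.h -> gfp.h chain */
    #include <linux/percpu.h>

    /* After: include what is actually used, directly */
    #include <linux/gfp.h>      /* alloc_pages(), GFP_KERNEL, ... */
    #include <linux/slab.h>     /* kmalloc(), kfree(), ... */
    #include <linux/percpu.h>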
2010-03-29  sh: tlb debugfs support.  Matt Fleming  2  -3/+184
Export the status of the utlb and itlb entries through debugfs. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
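A hedged sketch of the debugfs plumbing this kind of export typically uses; the file name, parent dentry and the show routine's body are assumptions here, not taken from the actual patch:

    static int tlb_seq_show(struct seq_file *file, void *unused)
    {
            /* walk the memory-mapped UTLB/ITLB arrays and seq_printf() each entry */
            return 0;
    }

    static int tlb_debugfs_open(struct inode *inode, struct file *file)
    {
            return single_open(file, tlb_seq_show, inode->i_private);
    }

    static const struct file_operations tlb_debugfs_fops = {
            .owner   = THIS_MODULE,
            .open    = tlb_debugfs_open,
            .read    = seq_read,
            .llseek  = seq_lseek,
            .release = single_release,
    };

    static int __init tlb_debugfs_init(void)
    {
            debugfs_create_file("tlb", S_IRUSR, sh_debugfs_root, NULL,
                                &tlb_debugfs_fops);
            return 0;
    }
    late_initcall(tlb_debugfs_init);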
2010-03-26  sh: update the TLB replacement counter for entry wiring.  Matt Fleming  1  -5/+17
Presently the TLB wiring code depends on MMUCR.URB for working out where to place the wired entry, but fails to take the replacement counter into consideration. This fixes up the wiring logic and ensures that wired entries remain so. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-23  sh: Fix build after dynamic PMB rework  Matt Fleming  1  -0/+2
set_pmb_entry() is now only used by a function that is wrapped in #ifdef CONFIG_PM, so wrap set_pmb_entry() in CONFIG_PM too. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
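The shape of the fix, sketched; the signature is shown schematically and the body is elided:

    #ifdef CONFIG_PM
    static void set_pmb_entry(struct pmb_entry *pmbe)
    {
            /* program the PMB address/data array slots for this entry */
    }
    #endif /* CONFIG_PM */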
2010-03-23  sh: Replace unsafe manipulation of MMUCR  Matt Fleming  2  -7/+16
Setting the TI in MMUCR causes all the TLB bits in MMUCR to be cleared. Unfortunately, the TLB wired bits are also cleared when setting the TI bit, causing any wired TLB entries to become unwired. Use local_flush_tlb_all() which implements TLB flushing in a safer manner by using the memory-mapped TLB registers. As each CPU has its own PMB the modifications in pmb_init() only affect the local CPU, so only flush the local CPU's TLB. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
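In outline, with register and bit names as used in the sh headers and the "before" line shown schematically:

    /* Before: flushing via MMUCR.TI clobbers the URB wired-entry field,
     * silently unwiring any wired TLB entries:
     *
     *      __raw_writel(__raw_readl(MMUCR) | MMUCR_TI, MMUCR);
     *
     * After: flush through the memory-mapped TLB arrays instead, which
     * leaves the wired entries alone. Each CPU owns its PMB, so only the
     * local TLB needs flushing from pmb_init().
     */
    local_flush_tlb_all();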
2010-03-23  sh: Flush ITLB too in PTEAEX's flush_tlb_page()  Matt Fleming  1  -0/+2
flush_tlb_page() can be used to flush TLB entries that map executable pages. Therefore, we need to ensure that the ITLB is also flushed in local_flush_tlb_page(). Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-10  sh: Export uncached helper symbols.  Paul Mundt  1  -0/+4
oprofile and others need to get at these, so provide symbol exports. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-08  sh: Fix up uncached offset for legacy 29-bit mode.  Paul Mundt  1  -0/+5
The uncached_start was being set up properly for 32-bit but managed to break 29-bit in the process, fix it up. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-05  sh: Move PMB debugfs entry initialization to later stage  Pawel Moll  1  -1/+1
... so the "sh_debugfs_root" is already available. Previously it wasn't, and as a result its path was "/sys/kernel/debug/pmb" instead of "/sys/kernel/debug/sh/pmb". Signed-off-by: Pawel Moll <pawel.moll@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-04  sh: fix up MMU reset with variable PMB mapping sizes.  Paul Mundt  1  -6/+31
Presently we run in to issues with the MMU resetting the CPU when variable sized mappings are employed. This takes a slightly more aggressive approach to keeping the TLB and cache state sane before establishing the mappings in order to cut down on races observed on SMP configurations. At the same time, we bump the VMA range up to the 0xb000...0xc000 range, as there still seems to be some undocumented behaviour in setting up variable mappings in the 0xa000...0xb000 range, resulting in reset by the TLB. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-03  sh: establish PMB mappings for NUMA nodes.  Paul Mundt  1  -0/+3
In the case of NUMA emulation, when in-range PPNs are being used for secondary nodes, we need to make sure that the PMB has a mapping for them before setting up the pgdat. This prevents the MMU from resetting. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-03  sh: check for existing mappings for bolted PMB entries.  Paul Mundt  1  -44/+96
When entries are being bolted unconditionally, it's possible that the boot loader has established mappings within the target range that we don't want to clobber. Perform some basic validation to ensure that the new mapping is out of range before allowing the entry setup to take place. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02  sh: fixed virt/phys mapping helpers for PMB.  Paul Mundt  1  -46/+51
This moves the pmb_remap_caller() mapping logic out in to pmb_bolt_mapping(), which enables us to establish fixed mappings in places such as the NUMA code. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02  sh: make pmb iomapping configurable.  Paul Mundt  1  -0/+17
This plugs in an early_param for permitting transparent PMB-backed ioremapping to be enabled/disabled. For the time being, we use a default-disabled policy. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02  sh: reworked dynamic PMB mapping.  Paul Mundt  3  -148/+189
This implements a fairly significant overhaul of the dynamic PMB mapping code. The primary change here is that the PMB gets its own VMA that follows the uncached mapping and we attempt to be a bit more intelligent with dynamic sizing, multi-entry mapping, and so forth. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02  Merge branches 'sh/dmaengine', 'sh/hw-breakpoints' and 'sh/trivial'  Paul Mundt  1  -1/+0
2010-03-01  Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm  Linus Torvalds  1  -1/+1
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (100 commits)
  ARM: Eliminate decompressor -Dstatic= PIC hack
  ARM: 5958/1: ARM: U300: fix inverted clk round rate
  ARM: 5956/1: misplaced parentheses
  ARM: 5955/1: ep93xx: move timer defines into core.c and document
  ARM: 5954/1: ep93xx: move gpio interrupt support to gpio.c
  ARM: 5953/1: ep93xx: fix broken build of clock.c
  ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig
  ARM: 5949/1: NUC900 add gpio virtual memory map
  ARM: 5948/1: Enable timer0 to time4 clock support for nuc910
  ARM: 5940/2: ARM: MMCI: remove custom DBG macro and printk
  ARM: make_coherent(): fix problems with highpte, part 2
  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
  ARM: 5945/1: ep93xx: include correct irq.h in core.c
  ARM: 5933/1: amba-pl011: support hardware flow control
  ARM: 5930/1: Add PKMAP area description to memory.txt.
  ARM: 5929/1: Add checks to detect overlap of memory regions.
  ARM: 5928/1: Change type of VMALLOC_END to unsigned long.
  ARM: 5927/1: Make delimiters of DMA area globally visibly.
  ARM: 5926/1: Add "Virtual kernel memory..." printout.
  ARM: 5920/1: OMAP4: Enable L2 Cache
  ...

Fix up trivial conflict in arch/arm/mach-mx25/clock.c
2010-03-01  sh: No need to explicitly include <linux/rwlock.h>.  Robert P. J. Day  1  -1/+0
Since <linux/spinlock.h> already includes <linux/rwlock.h>, and the latter file will warn about not having included the former file anyway, there is no value in including rwlock.h explicitly. Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-23  sh: wire up SET/GET_UNALIGN_CTL.  Paul Mundt  1  -1/+29
This hooks up the SET/GET_UNALIGN_CTL knobs cribbing the bulk of it from the PPC and ia64 implementations. The thread flags happen to be the logical inverse of what the global fault mode is set to, so this works out pretty cleanly. By default the global fault mode is used, with tasks now being able to override their own settings via prctl(). Signed-off-by: Paul Mundt <lethal@linux-sh.org>
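From userspace the per-task control then looks like the usual prctl() dance; a minimal example with error handling trimmed:

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
            unsigned int mode = 0;

            /* ask for SIGBUS on unaligned accesses instead of silent fixup */
            prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS, 0, 0, 0);

            prctl(PR_GET_UNALIGN, &mode, 0, 0, 0);
            printf("unaligned access mode: %#x\n", mode);
            return 0;
    }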
2010-02-23  sh: allow alignment fault mode to be configured at kernel boot.  Paul Mundt  1  -0/+2
Follow the ARM change, which is what our alignment helpers are based on in the first place. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-20  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself  Russell King  1  -1/+1
On VIVT ARM, when we have multiple shared mappings of the same file in the same MM, we need to ensure that we have coherency across all copies. We do this via make_coherent() by making the pages uncacheable.

This used to work fine, until we allowed highmem with highpte - we now have a page table which is mapped as required, and is not available for modification via update_mmu_cache().

Ralf Baechle suggested getting rid of the PTE value passed to update_mmu_cache():

    On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
    to construct a pointer to the pte again. Passing a pte_t * is much more
    elegant. Maybe we might even replace the pte argument with the pte_t?

Ben Herrenschmidt would also like the pte pointer for PowerPC:

    Passing the ptep in there is exactly what I want. I want that -instead-
    of the PTE value, because I have issue on some ppc cases, for I$/D$
    coherency, where set_pte_at() may decide to mask out the _PAGE_EXEC.

So, pass in the mapped page table pointer into update_mmu_cache(), and remove the PTE value, updating all implementations and call sites to suit.

Includes a fix from Stephen Rothwell:

    sparc: fix fallout from update_mmu_cache API change
    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
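The API change itself, in sketch form; the prototype is shown generically, with each architecture supplying its own implementation:

    /* before: the PTE is passed by value */
    void update_mmu_cache(struct vm_area_struct *vma,
                          unsigned long address, pte_t pte);

    /* after: a pointer to the mapped page table entry is passed instead */
    void update_mmu_cache(struct vm_area_struct *vma,
                          unsigned long address, pte_t *ptep);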
2010-02-18  sh: Merge legacy and dynamic PMB modes.  Paul Mundt  4  -47/+213
This implements a bit of rework for the PMB code, which permits us to kill off the legacy PMB mode completely. Rather than trusting the boot loader to do the right thing, we do a quick verification of the PMB contents to determine whether to have the kernel set up the initial mappings or whether it needs to mangle them later on instead.

If we're booting from legacy mappings, the kernel will now take control of them and make them match the kernel's initial mapping configuration. This is accomplished by breaking the initialization phase out into multiple steps: synchronization, merging, and resizing.

With the recent rework, the synchronization code establishes page links for compound mappings already, so we build on top of this for promoting mappings and reclaiming unused slots.

At the same time, the changes introduced for the uncached helpers also permit us to dynamically resize the uncached mapping without any particular headaches. The smallest page size is more than sufficient for mapping all of kernel text, and as we're careful not to jump to any far off locations in the setup code the mapping can safely be resized regardless of whether we are executing from it or not.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18  sh: Use uncached I/O helpers in PMB setup.  Paul Mundt  1  -27/+19
The PMB code is an example of something that spends an absurd amount of time running uncached when only a couple of operations really need to be. This switches over to the shiny new uncached helpers, permitting us to spend far more time running cached. Additionally, MMUCR twiddling is perfectly safe from cached space given that it's paired with a control register barrier, so fix that up, too. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
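The pattern being described, roughly; the write shown is a stand-in for the handful of PMB register accesses that genuinely must run uncached:

    jump_to_uncached();
    __raw_writel(data, addr);       /* e.g. a PMB address/data array slot */
    back_to_cached();

    /* MMUCR twiddling, by contrast, can stay in cached space as long as
     * it is paired with a control register barrier.
     */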
2010-02-17  sh: PMB locking overhaul.  Paul Mundt  1  -38/+114
This implements some locking for the PMB code. A high level rwlock is added for dealing with rw accesses on the entry map while a per-entry data structure spinlock is added to deal with the PMB entry changing out from underneath us. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
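In outline; the field layout is illustrative, not the exact struct from pmb.c:

    static DEFINE_RWLOCK(pmb_rwlock);       /* guards the global entry map */

    struct pmb_entry {
            unsigned long   vpn;
            unsigned long   ppn;
            unsigned long   flags;
            spinlock_t      lock;           /* guards this entry's own state */
    };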
2010-02-17  sh: Fix up dynamically created write-through PMB mappings.  Paul Mundt  1  -24/+32
Write-through PMB mappings still require the cache bit to be set, even if they're to be flagged with a different cache policy and bufferability bit. To reduce some of the confusion surrounding the flag encoding we centralize the cache mask based on the system cache policy while we're at it. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17  sh: Build PMB entry links for existing contiguous multi-page mappings.  Paul Mundt  1  -30/+29
This plugs in entry sizing support for existing mappings and then builds on top of that for linking together entries that are mapping contiguous areas. This will ultimately permit us to coalesce mappings and promote head pages while reclaiming PMB slots for dynamic remapping. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17  sh: uncached mapping helpers.  Paul Mundt  3  -19/+31
This adds some helper routines for uncached mapping support. This simplifies some of the cases where we need to check the uncached mapping boundaries in addition to giving us a centralized location for building more complex manipulation on top of. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17  sh: PMB tidying.  Paul Mundt  1  -45/+38
Some overdue cleanup of the PMB code, killing off unused functionality and duplication sprinkled about the tree. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17  sh: Fix up more 64-bit pgprot truncation on SH-X2 TLB.  Paul Mundt  2  -2/+6
Both the store queue API and the PMB remapping take unsigned long for their pgprot flags, which cuts off the extended protection bits. In the case of the PMB this isn't really a problem since the cache attribute bits that we care about are all in the lower 32-bits, but we do it just to be safe. The store queue remapping on the other hand depends on the extended prot bits for enabling userspace access to the mappings. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
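The gist of the interface change, sketched on a hypothetical remap prototype; the real function names in the store queue and PMB code may differ:

    /* before: a 64-bit pgprot is silently truncated to 32 bits */
    int remap_area(unsigned long virt, unsigned long phys,
                   unsigned long size, unsigned long flags);

    /* after: pgprot_t carries the extended protection bits intact;
     * extract them where needed with pgprot_val(prot).
     */
    int remap_area(unsigned long virt, unsigned long phys,
                   unsigned long size, pgprot_t prot);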
2010-02-16  sh: Merge the legacy PMB mapping and entry synchronization code.  Paul Mundt  1  -93/+69
This merges the code for iterating over the legacy PMB mappings and the code for synchronizing software state with the hardware mappings. There's really no reason to do the same iteration twice, and this also buys us the legacy entry logging facility for the dynamic PMB case. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16  sh: Prevent fixed slot PMB remapping from clobbering boot entries.  Paul Mundt  1  -1/+1
The PMB initialization code walks the entries and synchronizes the software PMB state with the hardware mappings, preserving the slot index. Unfortunately pmb_alloc() only tested the bit position in the entry map and failed to set it, resulting in subsequent remaps being able to be dynamically assigned a slot that trampled an existing boot mapping with general badness ensuing. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-12  sh: Isolate uncached mapping support.  Paul Mundt  2  -3/+13
This splits out the uncached mapping support under its own config option, presently only used by 29-bit mode and 32-bit + PMB. This will make it possible to optionally add an uncached mapping on sh64 as well as booting without an uncached mapping for 32-bit. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-29  sh: Kill off deprecated fixed PCI memory window accessors.  Paul Mundt  1  -15/+0
This kills off the deprecated fixed memory range accessors for the cases of non-translatable ioremapping. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-26  sh: Mass ctrl_in/outX to __raw_read/writeX conversion.  Paul Mundt  10  -52/+52
The old ctrl in/out routines are non-portable and unsuitable for cross-platform use. While drivers/sh has already been sanitized, there is still quite a lot of code that is not. This converts the arch/sh/ bits over, which permits us to flag the routines as deprecated whilst still building with -Werror for the architecture code, and to ensure that future users are not added. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
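The conversion itself is mechanical; for a 32-bit register access it looks like this:

    /* before: SuperH-only accessors, now deprecated */
    data = ctrl_inl(reg);
    ctrl_outl(data, reg);

    /* after: the generic raw MMIO accessors */
    data = __raw_readl(reg);
    __raw_writel(data, reg);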
2010-01-21  sh: Kill off the special uncached section and fixmap.  Paul Mundt  7  -16/+10
Now that cached_to_uncached works as advertised in 32-bit mode and we're never going to be able to map < 16MB anyway, there's no need for the special uncached section. Kill it off. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-21  sh: Track the uncached mapping size.  Paul Mundt  1  -7/+14
This provides a variable for tracking the uncached mapping size, and uses it for pretty printing the uncached lowmem range. Beyond this, we'll also be building on top of this for figuring out from where the remainder of P2 becomes usable when constructing unrelated mappings. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20  sh: pretty print virtual memory map on boot.  Paul Mundt  1  -2/+36
This cribs the pretty printing from arch/x86/mm/init_32.c to dump the virtual memory layout on boot. This is primarily intended as a debugging aid, given that the newer CPUs have full control over their address space and as such have little to nothing in common with the legacy layout. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20  sh: Correct iounmap fixmap teardown.  Paul Mundt  1  -7/+2
iounmap_fixed() had a couple of bugs in it that caused it to effectively fail at life. The total number of pages to unmap factored in the mapping offset and aligned up to the next page boundary, which doesn't match the ioremap_fixed() behaviour.

When ioremap_fixed() pegs a slot, the address in the mapping data already contains the offset displacement, and the size is recorded verbatim given that we're only interested in total number of pages required. As such, we need to calculate the total number from the original size in the unmap path as well.

At the same time, there was also an off-by-1 problem in the fixmap index calculation which has also been corrected. Previously subsequent remaps of an identical fixmap index would trigger the pte_ERROR() in set_pte_phys():

    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).

With this patch in place, the iounmap-driven fixmap teardown actually does what it's supposed to do.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20  sh: Make 29/32-bit mode check helper generally available.  Paul Mundt  2  -7/+5
Presently __in_29bit_mode() is only defined for the PMB case, but it's also easily derived from the CONFIG_29BIT and CONFIG_32BIT && CONFIG_PMB=n cases. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19  sh64: Fix up PC casting in unaligned fixup notifier with 32bit ABI.  Paul Mundt  1  -2/+2
Presently the build bails with the following:

    CC      arch/sh/mm/alignment.o
    cc1: warnings being treated as errors
    arch/sh/mm/alignment.c: In function 'unaligned_fixups_notify':
    arch/sh/mm/alignment.c:69: warning: cast to pointer from integer of different size
    arch/sh/mm/alignment.c:74: warning: cast to pointer from integer of different size
    make[2]: *** [arch/sh/mm/alignment.o] Error 1

This is due to the fact that regs->pc is always 64-bit, while the pointer size depends on the ABI. Wrapping through instruction_pointer() takes care of the appropriate casting for both configurations.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
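The fix boils down to the following kind of change; the printk is illustrative of the notifier output, not quoted from the file:

    /* before: regs->pc is always 64-bit on sh64, so this cast warns
     * when pointers are 32-bit
     */
    printk("... at %p\n", (void *)regs->pc);

    /* after: instruction_pointer() returns an unsigned long, matching
     * the pointer size for either ABI
     */
    printk("... at %p\n", (void *)instruction_pointer(regs));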
2010-01-19  sh: Kill off now bogus fixmap/page wiring documentation.  Paul Mundt  1  -15/+0
The plans for _PAGE_WIRED were detailed in a comment with the fixmap code, but as it's now all taken care of, we no longer have any reason for keeping it around, particularly since it's no longer accurate. Kill it off. Signed-off-by: Paul Mundt <lethal@linux-sh.org>