path: root/lib/swiotlb.c
2009-01-11  swiotlb: do not use sg_virt()  (Ian Campbell; 1 file, +7/-7)
Scatterlists containing HighMem pages do not have a useful virtual address. Use the physical address instead.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-11  swiotlb: range_needs_mapping should take a physical address  (Ian Campbell; 1 file, +5/-5)
The swiotlb_arch_range_needs_mapping() hook should take a physical address rather than a virtual address in order to support highmem pages.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-11  Merge branch 'linus' into core/iommu  (Ingo Molnar; 1 file, +1/-1)
2009-01-06  Merge branch 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds; 1 file, +100/-137)
* 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  rcu: fix rcutorture bug
  rcu: eliminate synchronize_rcu_xxx macro
  rcu: make treercu safe for suspend and resume
  rcu: fix rcutree grace-period-latency bug on small systems
  futex: catch certain assymetric (get|put)_futex_key calls
  futex: make futex_(get|put)_key() calls symmetric
  locking, percpu counters: introduce separate lock classes
  swiotlb: clean up EXPORT_SYMBOL usage
  swiotlb: remove unnecessary declaration
  swiotlb: replace architecture-specific swiotlb.h with linux/swiotlb.h
  swiotlb: add support for systems with highmem
  swiotlb: store phys address in io_tlb_orig_addr array
  swiotlb: add hwdev to swiotlb_phys_to_bus() / swiotlb_sg_to_bus()
2009-01-06  swiotlb: struct device - replace bus_id with dev_name(), dev_set_name()  (Kay Sievers; 1 file, +1/-1)
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-01-06  x86, ia64: remove duplicated swiotlb code  (FUJITA Tomonori; 1 file, +18/-30)
This adds swiotlb_map_page and swiotlb_unmap_page to lib/swiotlb.c and removes the IA64 and X86 copies of swiotlb_map_page and swiotlb_unmap_page. It also removes the now-unnecessary swiotlb_map_single, swiotlb_map_single_attrs, swiotlb_unmap_single and swiotlb_unmap_single_attrs.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-06  x86, ia64: convert to use generic dma_map_ops struct  (FUJITA Tomonori; 1 file, +10/-8)
This converts X86 and IA64 to use include/linux/dma-mapping.h. It's a bit large but pretty boring. The major change for X86 is converting 'int dir' to 'enum dma_data_direction dir' in DMA mapping operations. The major change for IA64 is using map_page and unmap_page instead of map_single and unmap_single.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
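
[Sketch] To illustrate the direction of this conversion, here is a minimal sketch of a per-architecture operations table built around page-based map/unmap callbacks. The struct and function names (example_*) are made up for illustration and the bodies are placeholders; only the enum dma_data_direction usage mirrors the change described above.

        #include <linux/device.h>
        #include <linux/dma-mapping.h>
        #include <linux/io.h>
        #include <linux/mm.h>

        /* Trimmed-down ops table: swiotlb provides page-based map/unmap
         * callbacks and an architecture plugs them into its table. */
        struct example_dma_ops {
                dma_addr_t (*map_page)(struct device *dev, struct page *page,
                                       unsigned long offset, size_t size,
                                       enum dma_data_direction dir);
                void (*unmap_page)(struct device *dev, dma_addr_t addr,
                                   size_t size, enum dma_data_direction dir);
                /* map_sg, unmap_sg, alloc_coherent, ... omitted */
        };

        static dma_addr_t example_map_page(struct device *dev, struct page *page,
                                           unsigned long offset, size_t size,
                                           enum dma_data_direction dir)
        {
                /* Placeholder: a real implementation would bounce through
                 * swiotlb when the page is not addressable by the device. */
                return page_to_phys(page) + offset;
        }

        static void example_unmap_page(struct device *dev, dma_addr_t addr,
                                       size_t size, enum dma_data_direction dir)
        {
                /* Placeholder: undo any bounce buffering done in map_page. */
        }

        static const struct example_dma_ops example_dma_ops = {
                .map_page   = example_map_page,
                .unmap_page = example_unmap_page,
        };
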
2009-01-05  Merge branch 'core/iommu' into core/urgent  (Ingo Molnar; 1 file, +100/-137)
Conflicts:
  lib/swiotlb.c
2009-01-04  swiotlb: Don't include linux/swiotlb.h twice in lib/swiotlb.c  (Jesper Juhl; 1 file, +0/-1)
There's no point in including the linux/swiotlb.h header twice in lib/swiotlb.c - this patch gets rid of the unneeded include.
Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-02  swiotlb: add missing __init annotations  (Roland Dreier; 1 file, +1/-1)
Impact: cleanup, reduce kernel size a bit
The current kernel build warns:
  WARNING: vmlinux.o(.text+0x11458): Section mismatch in reference from the function swiotlb_alloc_boot() to the function .init.text:__alloc_bootmem_low()
  The function swiotlb_alloc_boot() references the function __init __alloc_bootmem_low(). This is often because swiotlb_alloc_boot lacks a __init annotation or the annotation of __alloc_bootmem_low is wrong.
  WARNING: vmlinux.o(.text+0x1011f2): Section mismatch in reference from the function swiotlb_late_init_with_default_size() to the function .init.text:__alloc_bootmem_low()
  The function swiotlb_late_init_with_default_size() references the function __init __alloc_bootmem_low(). This is often because swiotlb_late_init_with_default_size lacks a __init annotation or the annotation of __alloc_bootmem_low is wrong.
and indeed the functions calling __alloc_bootmem_low() can be marked __init as well.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-28  swiotlb: clean up EXPORT_SYMBOL usage  (FUJITA Tomonori; 1 file, +14/-14)
Impact: cleanup
swiotlb uses EXPORT_SYMBOL in an inconsistent way: some functions place EXPORT_SYMBOL directly after the function, others collect the exports at the end of swiotlb.c. This cleans up swiotlb to use EXPORT_SYMBOL in a consistent way (at the end of each function).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
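
[Sketch] A minimal illustration of the convention adopted here; the function and export names are made up for the example, not actual swiotlb symbols.

        #include <linux/module.h>

        /* Consistent style: place the export directly after the function it
         * exports, rather than collecting EXPORT_SYMBOL lines at file end. */
        int example_map(void *vaddr, size_t size)
        {
                return 0;       /* placeholder body */
        }
        EXPORT_SYMBOL(example_map);
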
2008-12-28  swiotlb: remove unnecessary declaration  (FUJITA Tomonori; 1 file, +0/-3)
Impact: cleanup
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-28  swiotlb: add support for systems with highmem  (Becky Bruce; 1 file, +51/-17)
Impact: extend code for highmem - existing users unaffected
On highmem systems, the original dma buffer might not have a virtual mapping - we need to kmap it in to perform the bounce. Extract the code that does the actual copy into a function that does the kmap if highmem is enabled, and default to the normal swiotlb memcpy if not.
[ ported by Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> ]
Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
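
[Sketch] The shape of such a bounce helper is roughly the following. This is a simplified reconstruction, not the merged code: the function name is illustrative, the single-argument kmap_atomic() form is used for brevity, and the chunking is the obvious page-at-a-time walk.

        #include <linux/highmem.h>
        #include <linux/io.h>
        #include <linux/kernel.h>
        #include <linux/mm.h>
        #include <linux/pfn.h>

        /*
         * Copy 'size' bytes between a buffer identified only by its physical
         * address (it may live in HighMem and lack a permanent kernel mapping)
         * and a bounce buffer that does have a virtual address.
         */
        static void example_bounce(phys_addr_t phys, char *bounce, size_t size,
                                   bool to_bounce)
        {
                if (PageHighMem(pfn_to_page(PFN_DOWN(phys)))) {
                        /* HighMem: map each page temporarily before copying. */
                        while (size) {
                                unsigned int offset = phys & ~PAGE_MASK;
                                size_t sz = min_t(size_t, size, PAGE_SIZE - offset);
                                char *vaddr = kmap_atomic(pfn_to_page(PFN_DOWN(phys)));

                                if (to_bounce)
                                        memcpy(bounce, vaddr + offset, sz);
                                else
                                        memcpy(vaddr + offset, bounce, sz);
                                kunmap_atomic(vaddr);

                                phys += sz;
                                bounce += sz;
                                size -= sz;
                        }
                } else if (to_bounce) {
                        /* Lowmem: the linear mapping provides a virtual address. */
                        memcpy(bounce, phys_to_virt(phys), size);
                } else {
                        memcpy(phys_to_virt(phys), bounce, size);
                }
        }
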
2008-12-28  swiotlb: store phys address in io_tlb_orig_addr array  (Becky Bruce; 1 file, +30/-90)
Impact: refactor code, cleanup
When we enable swiotlb for platforms that support HIGHMEM, we can no longer store the virtual address of the original dma buffer, because that buffer might not have a permanent mapping. Change the swiotlb code to instead store the physical address of the original buffer.
Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-28  swiotlb: add hwdev to swiotlb_phys_to_bus() / swiotlb_sg_to_bus()  (Jeremy Fitzhardinge; 1 file, +22/-31)
Impact: extend functions with a (yet unused) parameter, update callsites
Some architectures need it - in preparation for highmem swiotlb.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-17  swiotlb: consolidate swiotlb info message printing  (Ian Campbell; 1 file, +28/-5)
Impact: clean up swiotlb printks
Remove duplicated swiotlb info printing, and make it more detailed.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-17  swiotlb: support bouncing of HighMem pages  (Jeremy Fitzhardinge; 1 file, +89/-33)
Impact: prepare the swiotlb code for HighMem struct pages
This requires us to treat DMA regions in terms of page+offset rather than virtual addressing since a HighMem page may not have a mapping.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-17  swiotlb: factor out copy to/from device  (Jeremy Fitzhardinge; 1 file, +13/-4)
Impact: generalize IO bounce memcpys
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-17  swiotlb: add arch hook to force mapping  (Ian Campbell; 1 file, +13/-2)
Impact: generalize the sw-IOTLB range checks
Some architectures require special rules to determine whether a range needs mapping or not. This adds a weak function for architectures to override.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
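
[Sketch] The general pattern of such a weak hook is shown below. Note that this hook originally took a virtual address and, per the 2009-01-11 entry near the top of this log, was later switched to a physical address; the sketch uses the physical-address form, and the commented override is a hypothetical architecture-side example, not real code.

        #include <linux/compiler.h>
        #include <linux/types.h>

        /*
         * Default in lib/swiotlb.c: most architectures never need to force a
         * mapping, so the generic answer is "no".
         */
        int __weak swiotlb_arch_range_needs_mapping(phys_addr_t paddr, size_t size)
        {
                return 0;
        }

        /*
         * An architecture can then provide a strong symbol with the same
         * signature and the linker will pick it over the __weak default:
         *
         *   int swiotlb_arch_range_needs_mapping(phys_addr_t paddr, size_t size)
         *   {
         *           return my_arch_range_check(paddr, size);   // hypothetical
         *   }
         */
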
2008-12-17  swiotlb: allow architectures to override phys<->bus<->phys conversions  (Ian Campbell; 1 file, +36/-16)
Impact: generalize phys<->bus<->phys conversions in the swiotlb code
Architectures may need to override these conversions. Implement a __weak hook point containing the default implementation.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
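
[Sketch] On most machines bus addresses equal physical addresses, so the __weak defaults are identity-style conversions. The function names follow the swiotlb_phys_to_bus()/swiotlb_bus_to_phys() naming seen elsewhere in this log, but the exact signatures are a best-effort reconstruction (the hwdev parameter from the entry above is accepted and unused).

        #include <linux/compiler.h>
        #include <linux/device.h>
        #include <linux/dma-mapping.h>
        #include <linux/types.h>

        /* Default phys -> bus conversion: 1:1 on machines without an offset. */
        dma_addr_t __weak swiotlb_phys_to_bus(struct device *hwdev, phys_addr_t paddr)
        {
                return paddr;
        }

        /* Default bus -> phys conversion, the inverse of the above. */
        phys_addr_t __weak swiotlb_bus_to_phys(dma_addr_t baddr)
        {
                return baddr;
        }
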
2008-12-17  swiotlb: add comment where we handle the overflow of a dma mask on 32 bit  (Ian Campbell; 1 file, +4/-0)
Impact: cleanup
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-16  swiotlb: move some definitions to header  (Ian Campbell; 1 file, +1/-13)
Impact: cleanup
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-16  swiotlb: allow architectures to override swiotlb pool allocation  (Jeremy Fitzhardinge; 1 file, +13/-3)
Impact: generalize swiotlb allocation code
Architectures may need to allocate memory specially for use with the swiotlb. Create the weak functions swiotlb_alloc_boot() and swiotlb_alloc(), defaulting to the current behaviour.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
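
[Sketch] A minimal sketch of the two default allocators, assuming the bootmem and __get_free_pages() interfaces of that kernel generation; the exact signatures and gfp flags are best-effort reconstructions, not verbatim code.

        #include <linux/bootmem.h>
        #include <linux/compiler.h>
        #include <linux/gfp.h>
        #include <linux/init.h>

        /* Boot-time pool allocation: low bootmem pages by default. */
        void * __weak __init swiotlb_alloc_boot(size_t size, unsigned long nslabs)
        {
                return alloc_bootmem_low_pages(size);
        }

        /* Late (post-boot) pool allocation: ordinary low pages by default. */
        void * __weak swiotlb_alloc(unsigned int order, unsigned long nslabs)
        {
                return (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN, order);
        }
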
2008-11-17  swiotlb: use coherent_dma_mask in alloc_coherent  (FUJITA Tomonori; 1 file, +7/-3)
Impact: fix DMA buffer allocation coherency bug in certain configs
This patch fixes swiotlb to use dev->coherent_dma_mask in swiotlb_alloc_coherent(). coherent_dma_mask is a subset of dma_mask (equal to it most of the time), enumerating the address range that a given device is able to DMA to/from in a cache-coherent way. Currently, however, swiotlb uses dev->dma_mask in alloc_coherent() implicitly via address_needs_mapping(), even though alloc_coherent is really supposed to use coherent_dma_mask. This bug could break drivers that use a smaller coherent_dma_mask than dma_mask (though the current code works for the majority, which use the same mask for coherent_dma_mask and dma_mask).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: tony.luck@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
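
[Sketch] The distinction in code terms; the helper name and surrounding logic are illustrative, only the two device fields and dma_get_mask() are real kernel interfaces.

        #include <linux/device.h>
        #include <linux/dma-mapping.h>
        #include <linux/types.h>

        /* Check a candidate buffer against the mask relevant to the operation:
         * streaming mappings are bounded by dma_mask, while coherent
         * allocations must respect the (possibly narrower) coherent_dma_mask. */
        static bool example_addr_ok(struct device *dev, dma_addr_t addr,
                                    size_t size, bool coherent)
        {
                u64 mask = coherent ? dev->coherent_dma_mask : dma_get_mask(dev);

                return addr + size - 1 <= mask;
        }
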
2008-10-23  swiotlb: remove panic for alloc_coherent failure  (FUJITA Tomonori; 1 file, +4/-2)
swiotlb_alloc_coherent calls panic() when the allocated swiotlb pages don't fit a device's dma mask. However, an alloc_coherent failure is not a disaster at all. AFAIK, none of the other x86 and IA64 IOMMU implementations crash in case of alloc_coherent failure. There are some drivers that don't check for alloc_coherent failure, but not many (about ten, and I've already started to fix some of them). alloc_coherent returns NULL in case of failure, so it's likely that these guilty drivers crash immediately anyway. So swiotlb doesn't need to call panic() just for them.
Reported-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
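
[Sketch] The change amounts to turning a fatal path into an ordinary failure return. Roughly, under the simplifying assumption of a 1:1 bus mapping and with an illustrative function name:

        #include <linux/device.h>
        #include <linux/dma-mapping.h>
        #include <linux/gfp.h>
        #include <linux/io.h>
        #include <linux/mm.h>

        /*
         * If the pages we got back are not reachable through the device's
         * coherent DMA mask, release them and return NULL so the driver can
         * react, instead of calling panic().
         */
        static void *example_alloc_coherent(struct device *hwdev, size_t size,
                                            dma_addr_t *dma_handle, gfp_t flags)
        {
                int order = get_order(size);
                void *ret = (void *)__get_free_pages(flags, order);

                if (!ret)
                        return NULL;

                *dma_handle = virt_to_phys(ret);        /* 1:1 bus mapping assumed */
                if (*dma_handle + size - 1 > hwdev->coherent_dma_mask) {
                        free_pages((unsigned long)ret, order);
                        return NULL;                    /* was: panic() */
                }
                return ret;
        }
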
2008-09-19  convert swiotlb to use dma_get_mask  (FUJITA Tomonori; 1 file, +1/-5)
swiotlb can use dma_get_mask() instead of the homegrown function.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: tony.luck@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10  swiotlb: convert swiotlb to use is_buffer_dma_capable helper function  (FUJITA Tomonori; 1 file, +8/-7)
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
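
[Sketch] Together with dma_get_mask() from the entry above, this helper replaces the open-coded mask arithmetic. A sketch of how an address check reads once converted; is_buffer_dma_capable() existed in dma-mapping.h in kernels of this vintage (it was later removed), so treat this as a period reconstruction with an illustrative wrapper name.

        #include <linux/device.h>
        #include <linux/dma-mapping.h>

        /* "Does this buffer need bouncing?" collapses to one readable check. */
        static int example_address_needs_mapping(struct device *hwdev,
                                                 dma_addr_t addr, size_t size)
        {
                return !is_buffer_dma_capable(dma_get_mask(hwdev), addr, size);
        }
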
2008-09-08  swiotlb: add is_swiotlb_buffer helper function  (FUJITA Tomonori; 1 file, +9/-5)
This adds an is_swiotlb_buffer() helper function to check whether a buffer belongs to the swiotlb bounce pool or not.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
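
[Sketch] Such a helper is essentially a range check against the bounce pool. The sketch below assumes the pool is described by two module-level pointers; the variable names mirror the swiotlb convention but the snippet is illustrative, not the merged code.

        /* Aperture bounds of the bounce pool (set up at init time). */
        static char *io_tlb_start;      /* first byte of the bounce pool */
        static char *io_tlb_end;        /* one past the last byte of the pool */

        /* Does this kernel virtual address fall inside the bounce pool? */
        static int is_swiotlb_buffer(const char *addr)
        {
                return addr >= io_tlb_start && addr < io_tlb_end;
        }
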
2008-09-08  swiotlb: use unmap_single instead of swiotlb_unmap_single in swiotlb_free_coherent  (FUJITA Tomonori; 1 file, +1/-1)
We don't need any check in swiotlb_unmap_single here. unmap_single is appropriate.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08  swiotlb: use map_single instead of swiotlb_map_single in swiotlb_alloc_coherent  (FUJITA Tomonori; 1 file, +2/-5)
We always need swiotlb memory here, so the address_needs_mapping and swiotlb_force checks are irrelevant. map_single should be used here instead of swiotlb_map_single.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08  swiotlb: remove GFP_DMA hack in swiotlb_alloc_coherent  (FUJITA Tomonori; 1 file, +0/-7)
The callers are supposed to set up the gfp flags appropriately.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-26  dma-mapping: add the device argument to dma_mapping_error()  (FUJITA Tomonori; 1 file, +2/-2)
Add per-device dma_mapping_ops support for CONFIG_X86_64 as the POWER architecture does. This enables us to cleanly fix the Calgary IOMMU issue that some devices are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).
I think that per-device dma_mapping_ops support would also be helpful for KVM people to support PCI passthrough, but Andi thinks that this makes it difficult to support PCI passthrough (see the above thread). So I CC'ed this to the KVM camp. Comments are appreciated.
A pointer to dma_mapping_ops is added to struct dev_archdata. If the pointer is non-NULL, DMA operations in asm/dma-mapping.h use it. If it's NULL, the system-wide dma_ops pointer is used as before.
If it's useful for KVM people, I plan to implement a mechanism to register a hook called when a new pci (or dma capable) device is created (it works with hot plugging). It enables IOMMUs to set up an appropriate dma_mapping_ops per device.
The major obstacle is that dma_mapping_error doesn't take a pointer to the device, unlike other DMA operations, so x86 can't have dma_mapping_ops per device. Note that all the POWER IOMMUs use the same dma_mapping_error function, so this is not a problem for POWER, but x86 IOMMUs use different dma_mapping_error functions.
The first patch adds the device argument to dma_mapping_error. The patch is trivial but large since it touches lots of drivers and dma-mapping.h in all the architectures.
This patch: dma_mapping_error() doesn't take a pointer to the device, unlike other DMA operations, so we can't have dma_mapping_ops per device. Note that POWER already has dma_mapping_ops per device, but all the POWER IOMMUs use the same dma_mapping_error function; x86 IOMMUs use the device argument.
[akpm@linux-foundation.org: fix sge]
[akpm@linux-foundation.org: fix svc_rdma]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix bnx2x]
[akpm@linux-foundation.org: fix s2io]
[akpm@linux-foundation.org: fix pasemi_mac]
[akpm@linux-foundation.org: fix sdhci]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix sparc]
[akpm@linux-foundation.org: fix ibmvscsi]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
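
[Sketch] The interface change itself is small: once dma_mapping_error() receives the device, the error check can be routed through a per-device ops table. Everything below (struct, helper, fallback convention) is an illustrative stand-in for the description above, not the actual x86 code.

        #include <linux/device.h>
        #include <linux/dma-mapping.h>
        #include <linux/types.h>

        /* Ops table whose error check can now be chosen per device. */
        struct example_dma_ops {
                int (*mapping_error)(struct device *dev, dma_addr_t dma_addr);
                /* map_page, unmap_page, map_sg, ... */
        };

        static const struct example_dma_ops example_global_ops = { };

        /* Stand-in for the archdata lookup: per-device ops if set, else global. */
        static const struct example_dma_ops *example_get_ops(struct device *dev)
        {
                return &example_global_ops;
        }

        static int example_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
        {
                const struct example_dma_ops *ops = example_get_ops(dev);

                if (ops->mapping_error)
                        return ops->mapping_error(dev, dma_addr);
                return dma_addr == 0;   /* fallback convention used for illustration */
        }
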
2008-04-29  dma/ia64: update ia64 machvecs, swiotlb.c  (Arthur Kepner; 1 file, +42/-8)
Change all ia64 machvecs to use the new dma_*map*_attrs() interfaces. Implement the old dma_*map_*() interfaces in terms of the corresponding new interfaces. For ia64/sn, make use of one dma attribute, DMA_ATTR_WRITE_BARRIER. Introduce swiotlb_*map*_attrs() functions.
Signed-off-by: Arthur Kepner <akepner@sgi.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Jes Sorensen <jes@sgi.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Roland Dreier <rdreier@cisco.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: David Miller <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-29  swiotlb: use iommu_is_span_boundary helper function  (FUJITA Tomonori; 1 file, +3/-11)
iommu_is_span_boundary in lib/iommu-helper.c was exported for PARISC IOMMUs (commit 3715863aa142c4f4c5208f5f3e5e9bac06006d2f). SWIOTLB can use it instead of the homegrown function.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
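
[Sketch] For reference, the shared helper has roughly the shape below (a reconstruction after lib/iommu-helper.c of that era, with an illustrative name): it reports whether an allocation of nr slots starting at index would cross a boundary_size-aligned boundary, where shift is the slot offset of the start of the aperture.

        /* boundary_size is assumed to be a power of two. */
        static int example_is_span_boundary(unsigned int index, unsigned int nr,
                                            unsigned long shift,
                                            unsigned long boundary_size)
        {
                /* Position within the current boundary window, in absolute slots. */
                shift = (shift + index) & (boundary_size - 1);
                return shift + nr > boundary_size;
        }
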
2008-04-29  lib/swiotlb.c: cleanups  (Andrew Morton; 1 file, +43/-46)
There's a pointlessly braced block of code in there. Remove the braces and save a tabstop.
Cc: Andi Kleen <ak@suse.de>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-03-13  avoid endless loops in lib/swiotlb.c  (Jan Beulich; 1 file, +16/-14)
Commit 681cc5cd3efbeafca6386114070e0bfb5012e249 ("iommu sg merging: swiotlb: respect the segment boundary limits") introduced two possibilities for entering an endless loop in lib/swiotlb.c:
- if max_slots is zero (possible if mask is ~0UL)
- if the number of slots requested fits into a swiotlb segment, but is too large for the part of a segment which remains after considering offset_slots
This fixes them.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-05  iommu sg merging: swiotlb: respect the segment boundary limits  (FUJITA Tomonori; 1 file, +35/-6)
This patch makes swiotlb not allocate a memory area spanning the LLD's segment boundary. is_span_boundary() judges whether a memory area spans the LLD's segment boundary. If map_single finds such an area, map_single tries to find the next available memory area.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Greg KH <greg@kroah.com>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22  Update swiotlb to use sg helpers  (Jens Axboe; 1 file, +1/-1)
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-10-17  swiotlb: fix map_sg failure handling  (FUJITA Tomonori; 1 file, +1/-1)
sg list elements might not be contiguous.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-10-16  swiotlb: sg chaining support  (Jens Axboe; 1 file, +12/-7)
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-10-12  dma_free_coherent() needs irqs enabled (sigh)  (David Brownell; 1 file, +1/-0)
On at least ARM (and I'm told MIPS too) dma_free_coherent() has a newish call context requirement: unlike its dma_alloc_coherent() sibling, it may not be called with IRQs disabled. (This was new behavior on ARM as of late 2005, caused by ARM SMP updates.) This little surprise can be annoyingly driver-visible.
Since it looks like that restriction won't be removed, this patch changes the definition of the API to include that requirement. Also, to help catch nonportable drivers, it updates the x86 and swiotlb versions to include the relevant warnings. (I already observed that it trips on the bus_reset_tasklet of the new firewire_ohci driver.)
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Cc: David Miller <davem@davemloft.net>
Acked-by: Russell King <rmk@arm.linux.org.uk>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
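
[Sketch] The "relevant warning" amounts to a one-line debug check at the top of the free path; sketched below with an illustrative function name and the freeing itself elided.

        #include <linux/bug.h>
        #include <linux/device.h>
        #include <linux/dma-mapping.h>
        #include <linux/irqflags.h>

        /* Catch drivers that free coherent memory with IRQs disabled, which is
         * not allowed on architectures such as ARM. */
        void example_free_coherent(struct device *hwdev, size_t size, void *vaddr,
                                   dma_addr_t dma_handle)
        {
                WARN_ON(irqs_disabled());
                /* ... actual freeing of the coherent buffer would follow ... */
        }
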
2007-07-21  Fix swiotlb_sync_single_range()  (Keir Fraser; 1 file, +4/-1)
If the swiotlb maps a multi-slab region, swiotlb_sync_single_range() can be invoked to sync a sub-region which does not include the first slab. Unfortunately io_tlb_orig_addr[] is only initialised for the first slab, and hence the call to sync_single() will read a garbage orig_addr in this case.
This patch fixes the issue by initialising all mapped slabs in io_tlb_orig_addr[]. It also correctly adjusts the buffer pointer in sync_single() to handle the case that the given dma_addr is not aligned on a slab boundary.
Signed-off-by: Keir Fraser <keir.fraser@cl.cam.ac.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Acked-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-08  fix section mismatch warning in lib/swiotlb.c  (Sam Ravnborg; 1 file, +0/-1)
kbuild spits out the following warning on a defconfig x86_64 build:
  WARNING: swiotlb.o - Section mismatch: reference to .init.text:swiotlb_init from __ksymtab between '__ksymtab_swiotlb_init' (at offset 0xa0) and '__ksymtab_swiotlb_free_coherent'
This warning happens because the function swiotlb_init is marked __init and EXPORT_SYMBOL(). A 'git grep swiotlb_init' showed no users in drivers/, so remove the EXPORT_SYMBOL.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Andi Kleen <ak@suse.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-03-06  Revert "[IA64] swiotlb abstraction (e.g. for Xen)"  (Tony Luck; 1 file, +35/-149)
This reverts commit 51099005ab8e09d68a13fea8d55bc739c1040ca6.
2007-02-12  [PATCH] swiotlb uninlinings  (Andrew Morton; 1 file, +4/-4)
Optimise swiotlb.c for size:

     text    data     bss     dec     hex  filename
     5009      89      64    5162    142a  lib/swiotlb.o-before
     4666      89      64    4819    12d3  lib/swiotlb.o-after

For some reason my gcc (4.0.2) doesn't want to tailcall these things:

  swiotlb_sync_sg_for_device:
          pushq   %rbp    #
          movl    $1, %r8d        #,
          movq    %rsp, %rbp      #,
          call    swiotlb_sync_sg #
          leave
          ret
          .size swiotlb_sync_sg_for_device, .-swiotlb_sync_sg_for_device
          .section .text.swiotlb_sync_sg_for_cpu,"ax",@progbits
  .globl swiotlb_sync_sg_for_cpu
          .type   swiotlb_sync_sg_for_cpu, @function
  swiotlb_sync_sg_for_cpu:
          pushq   %rbp    #
          xorl    %r8d, %r8d      #
          movq    %rsp, %rbp      #,
          call    swiotlb_sync_sg #
          leave
          ret

Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-05  [IA64] swiotlb abstraction (e.g. for Xen)  (Jan Beulich; 1 file, +149/-35)
Add abstraction so that the file can be used by environments other than IA64 and EM64T, namely for Xen.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-02-05  [IA64] swiotlb cleanup  (Jan Beulich; 1 file, +32/-24)
- add proper __init decoration to swiotlb's init code (and the code calling it, where not already the case)
- replace uses of 'unsigned long' with dma_addr_t where appropriate
- do miscellaneous simplification and cleanup
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-02-05  [IA64] make swiotlb use bus_to_virt/virt_to_bus  (Jan Beulich; 1 file, +17/-16)
Convert all phys_to_virt/virt_to_phys uses to bus_to_virt/virt_to_bus, as that is what is meant and what is needed in (at least) some virtualized environments like Xen.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-02-05  [IA64] swiotlb bug fixes  (Jan Beulich; 1 file, +8/-25)
This patch fixes:
- marking the I-cache clean for pages DMAed to is now only done for IA64
- broken multiple inclusion in include/asm-x86_64/swiotlb.h
- a missing call to mark_clean in swiotlb_sync_sg()
- a (perhaps only theoretical) issue in swiotlb_dma_supported() when io_tlb_end is exactly at the end of memory
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-03-24  BUG_ON() Conversion in lib/swiotlb.c  (Eric Sesterhenn; 1 file, +12/-20)
This changes if () BUG(); constructs to BUG_ON(), which is cleaner, contains unlikely() and can be better optimized away.
Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
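
[Sketch] The conversion pattern, shown with a made-up condition and illustrative function names:

        #include <linux/bug.h>

        /* Before: open-coded check followed by BUG(). */
        static void example_check_old(int nslots)
        {
                if (nslots < 1)
                        BUG();
        }

        /* After: BUG_ON() expresses the same assertion more compactly and wraps
         * the condition in unlikely(), helping the compiler lay out the fast path. */
        static void example_check_new(int nslots)
        {
                BUG_ON(nslots < 1);
        }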