author | Christoph Hellwig <hch@lst.de> | 2018-09-30 16:13:33 -0700
committer | Christoph Hellwig <hch@lst.de> | 2018-10-19 08:48:28 +0200
commit | fafadcd16595c1df82df399f62421718ec9bf70a (patch)
tree | c13da3c6e387183a5edd19349ed195c0699ac116 /include/linux/swiotlb.h
parent | c4dae366925f929749b2a26efa53b561904a9a4f (diff)
swiotlb: don't dip into swiotlb pool for coherent allocations
All architectures that support swiotlb also have a memory zone that backs
allocations with less-than-full addressing requirements (usually ZONE_DMA32).
Because of that, it is rather pointless to fall back to the global swiotlb
buffer when the normal dma direct allocation fails: the only thing this
does is eat up bounce buffers that would be more useful for serving
streaming mappings.
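
As a rough illustration of the policy described above, the sketch below models,
in plain user-space C (not kernel code), how a coherent allocation can be retried
in progressively more restrictive zones until the result fits the device's
addressing limit, instead of consuming swiotlb bounce-buffer slots. The zone
names, the try_alloc_in_zone() helper, and the addresses are hypothetical
stand-ins, not the actual dma-direct implementation.

	#include <stdio.h>
	#include <stdlib.h>

	/* Toy zones, loosely mirroring ZONE_NORMAL/ZONE_DMA32/ZONE_DMA. */
	enum zone { ZONE_NORMAL, ZONE_DMA32, ZONE_DMA, ZONE_MAX };

	/* Pretend allocator: returns a fake physical address for the given
	 * zone; lower zones hand out lower addresses. */
	static unsigned long long try_alloc_in_zone(enum zone z, size_t size)
	{
		static const unsigned long long base[ZONE_MAX] = {
			1ULL << 40,	/* ZONE_NORMAL: above 1 TiB  */
			1ULL << 30,	/* ZONE_DMA32:  below 4 GiB  */
			1ULL << 20,	/* ZONE_DMA:    below 16 MiB */
		};

		(void)size;	/* ignored in this toy model */
		return base[z];	/* always "succeeds" here */
	}

	/* Model of the coherent-allocation fallback: walk from the normal
	 * zone down to more restrictive ones until the result fits the
	 * device's addressing limit.  No bounce buffers are touched. */
	static unsigned long long alloc_coherent_model(unsigned long long dma_limit,
						       size_t size)
	{
		for (int z = ZONE_NORMAL; z < ZONE_MAX; z++) {
			unsigned long long addr = try_alloc_in_zone(z, size);

			if (addr && addr + size - 1 <= dma_limit)
				return addr;
		}
		return 0;	/* genuinely unsatisfiable: report failure */
	}

	int main(void)
	{
		/* Device that can only address 32 bits. */
		unsigned long long addr =
			alloc_coherent_model((1ULL << 32) - 1, 4096);

		printf("allocated at %#llx\n", addr);
		return addr ? EXIT_SUCCESS : EXIT_FAILURE;
	}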
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Diffstat (limited to 'include/linux/swiotlb.h')
-rw-r--r-- | include/linux/swiotlb.h | 5
1 file changed, 0 insertions(+), 5 deletions(-)
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index f847c1b265c4..a387b59640a4 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -67,11 +67,6 @@ extern void swiotlb_tbl_sync_single(struct device *hwdev,
 
 /* Accessory functions. */
 
-void *swiotlb_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle,
-		gfp_t flags, unsigned long attrs);
-void swiotlb_free(struct device *dev, size_t size, void *vaddr,
-		dma_addr_t dma_addr, unsigned long attrs);
-
 extern dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 				   unsigned long offset, size_t size,
 				   enum dma_data_direction dir,
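
For context, drivers are unaffected by removing these declarations: coherent
memory is still obtained through the generic DMA API, which after this series
is served by dma-direct and its zone fallbacks rather than the swiotlb pool.
A minimal, hypothetical driver-side sketch (the helper name and the 32-bit
mask choice are illustrative only):

	#include <linux/dma-mapping.h>

	/* Hypothetical helper: allocate a DMA-coherent buffer for a device
	 * limited to 32-bit addressing.  dma_alloc_coherent() is the generic
	 * entry point; allocation failures are now reported instead of being
	 * papered over by dipping into the swiotlb bounce-buffer pool. */
	static void *example_alloc_coherent(struct device *dev, size_t size,
					    dma_addr_t *dma_handle)
	{
		if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32)))
			return NULL;

		return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
	}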