Age  Commit message  (Author, files changed, lines -/+)
2013-10-31  ARM: 7805/1: mm: change max*pfn to include the physical offset of memory  (Santosh Shilimkar, 2 files, -6/+12)
Most of the kernel code assumes that max*pfn holds the maximum PFN, because the physical start of memory is expected to be PFN 0. Since this assumption is not true on ARM, max*pfn there means the number of memory pages instead. This was done to keep drivers happy which use these variables to calculate the DMA bounce limit from dma_mask. Now that there is an architecture override for the maximum DMAable PFN, let's make max*pfn mean the maximum PFN on ARM as well. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
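A minimal sketch of what this amounts to in ARM's bootmem_init(); the variable names and before/after values are assumptions based on the description above, not a verbatim quote of the patch:

    /* Before: max*pfn counted pages, i.e. PFNs relative to PHYS_PFN_OFFSET. */
    max_low_pfn = max_low - PHYS_PFN_OFFSET;
    max_pfn     = max_high - PHYS_PFN_OFFSET;

    /* After: max*pfn are absolute maximum PFNs, as the rest of the kernel expects. */
    max_low_pfn = max_low;
    max_pfn     = max_high;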
2013-10-31  ARM: 7797/1: mmc: Use dma_max_pfn(dev) helper for bounce_limit calculations  (Santosh Shilimkar, 1 file, -1/+2)
The DMA bounce limit is the maximum directly DMA'able memory, beyond which bounce buffers have to be used to perform DMA operations. The MMC queue layer relies on dma_mask, but its calculation is based on max_*pfn, which doesn't have a uniform meaning across architectures. So make use of dma_max_pfn(), which is expected to return the maximum DMAable PFN value across architectures. Cc: Chris Ball <cjb@laptop.org> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
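A sketch of the kind of change this implies in the MMC queue setup; the surrounding context and exact expression are illustrative assumptions, not the literal diff:

    u64 limit = BLK_BOUNCE_HIGH;

    /* Derive the bounce limit from the device's DMA mask via dma_max_pfn()
     * instead of from max_*pfn, which means different things per architecture. */
    if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
        limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;

    blk_queue_bounce_limit(mq->queue, limit);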
2013-10-31  ARM: 7796/1: scsi: Use dma_max_pfn(dev) helper for bounce_limit calculations  (Santosh Shilimkar, 1 file, -1/+1)
The DMA bounce limit is the maximum directly DMA'able memory, beyond which bounce buffers have to be used to perform DMA operations. The SCSI driver relies on dma_mask, but its calculation is based on max_*pfn, which doesn't have a uniform meaning across architectures. So make use of dma_max_pfn(), which is expected to return the maximum DMAable PFN value across architectures. Cc: linux-scsi@vger.kernel.org Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  ARM: 7795/1: mm: dma-mapping: Add dma_max_pfn(dev) helper function  (Santosh Shilimkar, 1 file, -0/+7)
Most of the kernel assumes that PFN 0 is the start of physical memory (RAM). This assumption is not true on most ARM SoCs, and if one tries to update the ARM port to follow that assumption, the DMA bounce limit breaks for a few block layer drivers. One such example: unifying the meaning of max*_pfn on ARM, as the bootmem layer expects, breaks the DMA bounce limit of a few block layer drivers. To fix this problem, we introduce a generic dma_max_pfn(dev) helper, with the possibility of an override from architecture code. The helper converts a device's DMA mask to a maximum PFN. In the generic case it is just "dev->dma_mask >> PAGE_SHIFT", so the default behaviour is maintained as is. Subsequent patches will make use of the helper. No functional change. Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
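A sketch of the generic helper as described above, with the architecture override expressed as a simple #ifndef guard (the guard mechanism is an assumption for illustration):

    #ifndef dma_max_pfn
    static inline unsigned long dma_max_pfn(struct device *dev)
    {
        /* Default: the highest PFN the device's streaming DMA mask can address. */
        return *dev->dma_mask >> PAGE_SHIFT;
    }
    #endif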
2013-10-31  ARM: 7794/1: block: Rename parameter dma_mask to max_addr for blk_queue_bounce_limit()  (Santosh Shilimkar, 1 file, -4/+4)
The blk_queue_bounce_limit() API parameter 'dma_mask' is actually the maximum address the device can handle rather than a dma_mask. Rename it accordingly to avoid it being interpreted as a dma_mask. No functional change. The idea is to fix the bad assumptions about dma_mask wherever it could be misinterpreted. Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  ARM: DMA-API: better handling of DMA masks for coherent allocations  (Russell King, 3 files, -6/+49)
We need to start treating DMA masks as something which is specific to the bus that the device resides on, otherwise we're going to hit all sorts of nasty issues with LPAE and 32-bit DMA controllers in >32-bit systems, where memory is offset from PFN 0. In order to start doing this, we convert the DMA mask to a PFN using the device specific dma_to_pfn() macro. This is the reverse of the pfn_to_dma() macro which is used to get the DMA address for the device. This gives us a PFN mask, which we can then check against the PFN limit of the DMA zone. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
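A rough sketch of the check this describes; arm_dma_pfn_limit and the surrounding allocator context are assumed names used for illustration only:

    /* Convert the device's coherent DMA mask into a PFN limit ... */
    unsigned long limit_pfn = dma_to_pfn(dev, dev->coherent_dma_mask);

    /* ... and compare it against the top of the DMA zone: if the mask
     * cannot reach all of lowmem, allocate from the DMA zone instead. */
    if (limit_pfn < arm_dma_pfn_limit)
        gfp |= GFP_DMA;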
2013-10-31  ARM: 7857/1: dma: imx-sdma: setup dma mask  (Philippe Retornaz, 1 file, -0/+4)
The DMA mask is not configured in the current code. This was triggered by soc-dmaengine-pcm, which allocates the DMA buffers with the imx-sdma device. This commit fixes audio on imx31. Signed-off-by: Philippe Rétornaz <philippe.retornaz@epfl.ch> Acked-by: Sascha Hauer <s.hauer@pengutronix.de> Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: firmware/google/gsmi.c: avoid direct access to DMA masks  (Russell King, 1 file, -5/+8)
This driver doesn't need to directly access DMA masks if it uses the platform_device_register_full() API rather than platform_device_register_simple() - the former function can initialize the DMA mask appropriately. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: dcdbas: update DMA mask handling  (Russell King, 1 file, -13/+19)
dcdbas was explicitly initializing DMA masks thusly:

    dcdbas_pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
    dcdbas_pdev->dev.dma_mask = &dcdbas_pdev->dev.coherent_dma_mask;

which bypasses the architecture check. Moreover, it is creating the dcdbas_pdev device itself, and using platform_device_register_full() avoids some of this explicit initialization. Convert the driver to use platform_device_register_full(), and, as it makes use of coherent DMA, also call dma_set_coherent_mask() to ensure that the architecture gets to check the mask. Tested-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
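A hedged sketch of the conversion pattern described here; the field values and error handling are illustrative, not the exact dcdbas diff:

    struct platform_device_info pdevinfo = {
        .name     = "dcdbas",
        .id       = -1,
        /* platform_device_register_full() sets up dev.dma_mask from this. */
        .dma_mask = DMA_BIT_MASK(32),
    };

    dcdbas_pdev = platform_device_register_full(&pdevinfo);
    if (IS_ERR(dcdbas_pdev))
        return PTR_ERR(dcdbas_pdev);

    /* Let the architecture validate the coherent mask as well. */
    error = dma_set_coherent_mask(&dcdbas_pdev->dev, DMA_BIT_MASK(32));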
2013-10-31  DMA-API: dma: edma.c: no need to explicitly initialize DMA masks  (Russell King, 1 file, -4/+6)
platform_device_register_full() can set up the DMA mask, provided the appropriate member is set in struct platform_device_info, so let's make that the case. This avoids a direct reference to the DMA masks by this driver. While here, add the dma_set_mask_and_coherent() call which the DMA API requires DMA-using drivers to call. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: usb: musb: use platform_device_register_full() to avoid directly messing with dma masks  (Russell King, 4 files, -128/+68)
Use platform_device_register_full() for those drivers which can, to avoid messing directly with DMA masks. This can only be done when the driver does not need to access the allocated musb platform device from within its callbacks, which may be called during the musb device probing. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: crypto: remove last references to 'static struct device *dev'  (Russell King, 1 file, -5/+8)
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: crypto: fix ixp4xx crypto platform device support  (Russell King, 1 file, -20/+17)
Don't statically allocate struct device instances in modules, and don't shut the warning up with an empty release() function. There's a reason that warning is there, and it's not for people to hide it in this way; it's there to persuade people to use the correct APIs to allocate platform devices. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: others: use dma_set_coherent_mask()  (Russell King, 3 files, -4/+12)
The correct way for a driver to specify the coherent DMA mask is not to directly access the field in the struct device, but to use dma_set_coherent_mask(). Only arch and bus code should access this member directly. Convert all direct write accesses to using the correct API. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
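A minimal sketch of the conversion pattern this and the following commits describe (the 32-bit mask value is illustrative):

    /* Before: writes the field directly, bypassing the architecture's checks. */
    dev->coherent_dma_mask = DMA_BIT_MASK(32);

    /* After: the DMA API validates the mask against what the platform supports. */
    ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
    if (ret)
        return ret;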
2013-10-31  DMA-API: staging: use dma_set_coherent_mask()  (Russell King, 3 files, -5/+12)
The correct way for a driver to specify the coherent DMA mask is not to directly access the field in the struct device, but to use dma_set_coherent_mask(). Only arch and bus code should access this member directly. Convert all direct write accesses to using the correct API. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: usb: use new dma_coerce_mask_and_coherent()  (Russell King, 17 files, -49/+18)
Acked-by: Felipe Balbi <balbi@ti.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: usb: use dma_set_coherent_mask()  (Russell King, 19 files, -38/+64)
The correct way for a driver to specify the coherent DMA mask is not to directly access the field in the struct device, but to use dma_set_coherent_mask(). Only arch and bus code should access this member directly. Convert all direct write accesses to using the correct API. Acked-by: Felipe Balbi <balbi@ti.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: parport: parport_pc.c: use dma_coerce_mask_and_coherent()  (Russell King, 1 file, -2/+6)
The code sequence:

    dev->coherent_dma_mask = DMA_BIT_MASK(24);
    dev->dma_mask = &dev->coherent_dma_mask;

bypasses the architecture's check on the DMA mask. It can be replaced with dma_coerce_mask_and_coherent(), avoiding the direct initialization of this mask. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
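A sketch of the converted form, assuming the same 24-bit mask as the quoted sequence:

    /* Points dma_mask at coherent_dma_mask, then runs the value through the
     * DMA API so the architecture's dma_set_mask() checks still apply. */
    err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(24));
    if (err)
        return err;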
2013-10-31  DMA-API: net: octeon: use dma_coerce_mask_and_coherent()  (Russell King, 1 file, -2/+3)
The code sequence:

    pdev->dev.coherent_dma_mask = DMA_BIT_MASK(64);
    pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;

bypasses the architecture's check on the DMA mask. It can be replaced with dma_coerce_mask_and_coherent(), avoiding the direct initialization of this mask. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: net: nxp/lpc_eth: use dma_coerce_mask_and_coherent()  (Russell King, 1 file, -2/+4)
The code sequence:

    pldat->pdev->dev.coherent_dma_mask = 0xFFFFFFFF;
    pldat->pdev->dev.dma_mask = &pldat->pdev->dev.coherent_dma_mask;

bypasses the architecture's check on the DMA mask. It can be replaced with dma_coerce_mask_and_coherent(), avoiding the direct initialization of this mask. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: mmc: sdhci-acpi: use dma_coerce_mask_and_coherent()  (Russell King, 1 file, -2/+3)
The code sequence:

    dev->dma_mask = &dev->coherent_dma_mask;
    dev->coherent_dma_mask = dma_mask;

bypasses the architecture's check on the DMA mask. It can be replaced with dma_coerce_mask_and_coherent(), avoiding the direct initialization of this mask. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: media: omap3isp: use dma_coerce_mask_and_coherent()  (Russell King, 2 files, -6/+3)
The code sequence:

    isp->raw_dmamask = DMA_BIT_MASK(32);
    isp->dev->dma_mask = &isp->raw_dmamask;
    isp->dev->coherent_dma_mask = DMA_BIT_MASK(32);

bypasses the architecture's check on the DMA mask. It can be replaced with dma_coerce_mask_and_coherent(), avoiding the direct initialization of this mask. Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: dma: dw_dmac.c: convert to use dma_coerce_mask_and_coherent()  (Russell King, 1 file, -5/+3)
This code sequence:

    if (!pdev->dev.dma_mask) {
        pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
        pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
    }

bypasses the architecture's check on the DMA mask. It can be replaced with dma_coerce_mask_and_coherent(), avoiding the direct initialization of this mask. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: ata: pata_octeon_cf: convert to use dma_coerce_mask_and_coherent()  (Russell King, 1 file, -2/+3)
Convert this code sequence:

    pdev->dev.coherent_dma_mask = DMA_BIT_MASK(64);
    pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;

to use dma_coerce_mask_and_coherent() to avoid bypassing the architecture check on the DMA mask. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: sound: fix dma mask handling in a lot of drivers  (Russell King, 16 files, -104/+61)
This code sequence is unsafe in modules:

    static u64 mask = DMA_BIT_MASK(something);
    ...
    if (!dev->dma_mask)
        dev->dma_mask = &mask;

as if a module is reloaded, the mask will be pointing at the original module's mask address, and this can lead to oopses. Moreover, they all follow this with:

    if (!dev->coherent_dma_mask)
        dev->coherent_dma_mask = mask;

where 'mask' is the same value as the statically defined mask, and this bypasses the architecture's check on whether the DMA mask is possible. Fix these issues by using the new dma_coerce_mask_and_coherent() function. Acked-by: Mark Brown <broonie@linaro.org> Acked-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: provide a helper to setup DMA masks  (Russell King, 1 file, -0/+10)
Many drivers contain code such as:

    dev->dma_mask = &dev->coherent_dma_mask;
    dev->coherent_dma_mask = MASK;

Let's move this pattern out of drivers and have the DMA API provide a helper for it. This helper uses dma_set_mask_and_coherent() to allow platform issues to be properly dealt with via dma_set_mask()/dma_supported(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
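A sketch of what such a helper could look like, based on the description above; the exact in-tree definition may differ:

    static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
    {
        /* Reproduce the old driver pattern of sharing one mask storage ... */
        dev->dma_mask = &dev->coherent_dma_mask;
        /* ... but let the DMA API and the architecture validate the value. */
        return dma_set_mask_and_coherent(dev, mask);
    }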
2013-10-31  DMA-API: usb: ohci-sa1111: add a note about DMA masks  (Russell King, 1 file, -0/+6)
Add a comment to explain why this driver doesn't call any of the DMA API dma_set_mask() functions. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: video: clcd: add dma_set_mask_and_coherent() call  (Russell King, 1 file, -0/+5)
The DMA API requires drivers to call the appropriate dma_set_mask() functions before doing any DMA mapping. Add this required call to the AMBA CLCD driver. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
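A minimal sketch of the kind of call being added in these AMBA driver patches; the probe context, variable names, and 32-bit mask are assumptions for illustration:

    /* In the AMBA probe path, before any dma_alloc_coherent()/dma_map_*() use: */
    ret = dma_set_mask_and_coherent(&adev->dev, DMA_BIT_MASK(32));
    if (ret)
        return ret;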
2013-10-31  DMA-API: dma: pl330: add dma_set_mask_and_coherent() call  (Russell King, 1 file, -0/+4)
The DMA API requires drivers to call the appropriate dma_set_mask() functions before doing any DMA mapping. Add this required call to the AMBA PL330 driver. Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: dma: pl08x: add dma_set_mask_and_coherent() call  (Russell King, 1 file, -0/+5)
The DMA API requires drivers to call the appropriate dma_set_mask() functions before doing any DMA mapping. Add this required call to the AMBA PL08x driver. Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: amba: get rid of separate dma_mask  (Russell King, 3 files, -10/+1)
AMBA Primecell devices always treat streaming and coherent DMA exactly the same, so there's no point in having the masks separated. Acked-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: usb: ssb-hcd: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -2/+1)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
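A sketch of the helper these "replace ... with new helper" commits rely on, as its behaviour is described in this series; the exact in-tree definition may differ:

    static inline int dma_set_mask_and_coherent(struct device *dev, u64 mask)
    {
        int rc = dma_set_mask(dev, mask);

        /* Only set the coherent mask if the streaming mask was accepted. */
        if (rc == 0)
            dma_set_coherent_mask(dev, mask);
        return rc;
    }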
2013-10-31  DMA-API: usb: bcma: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -2/+1)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-31  DMA-API: media: dt3155v4l: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -4/+1)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Acked-by: Hans Verkuil <hans.verkuil@cisco.com> Acked-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: staging: et131x: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -15/+2)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: block: nvme-core: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -6/+4)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: ppc: vio.c: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -2/+1)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: b43legacy: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -6/+3)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: b43: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -6/+3)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: sfc/efx.c: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -11/+1)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Acked-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: intel/e1000: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -7/+2)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: emulex/benet: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -10/+2)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: broadcom/bnx2x: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -6/+2)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: broadcom/b44: replace dma_set_mask()+dma_set_coherent_mask() with new helper  (Russell King, 1 file, -2/+1)
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: intel/ixgbevf: fix 32-bit DMA mask handling  (Russell King, 1 file, -10/+5)
The fallback to 32-bit DMA mask is rather odd:

    if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
        !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
        pci_using_dac = 1;
    } else {
        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
        if (err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                dev_err(&pdev->dev, "No usable DMA "
                        "configuration, aborting\n");
                goto err_dma;
            }
        }
        pci_using_dac = 0;
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
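A sketch of the corrected fallback flow described above (the coherent mask is only set once dma_set_mask() succeeds, and any failure is fatal); the actual patches may use the dma_set_mask_and_coherent() helper instead:

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err)
        err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
        pci_using_dac = 1;
    } else {
        /* 64-bit addressing is unavailable; fall back to 32-bit for both masks. */
        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
        if (!err)
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
        if (err) {
            dev_err(&pdev->dev, "No usable DMA configuration, aborting\n");
            goto err_dma;
        }
        pci_using_dac = 0;
    }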
2013-09-21  DMA-API: net: intel/ixgbe: fix 32-bit DMA mask handling  (Russell King, 1 file, -10/+5)
The fallback to 32-bit DMA mask is rather odd:

    if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
        !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
        pci_using_dac = 1;
    } else {
        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
        if (err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                dev_err(&pdev->dev, "No usable DMA configuration, aborting\n");
                goto err_dma;
            }
        }
        pci_using_dac = 0;
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: intel/ixgb: fix 32-bit DMA mask handling  (Russell King, 1 file, -11/+5)
The fallback to 32-bit DMA mask is rather odd:

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
        err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
        if (!err)
            pci_using_dac = 1;
    } else {
        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
        if (err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                pr_err("No usable DMA configuration, aborting\n");
                goto err_dma_mask;
            }
        }
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: intel/igbvf: fix 32-bit DMA mask handling  (Russell King, 1 file, -12/+6)
The fallback to 32-bit DMA mask is rather odd:

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
        err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
        if (!err)
            pci_using_dac = 1;
    } else {
        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
        if (err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                dev_err(&pdev->dev, "No usable DMA "
                        "configuration, aborting\n");
                goto err_dma;
            }
        }
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: intel/igb: fix 32-bit DMA mask handling  (Russell King, 1 file, -12/+6)
The fallback to 32-bit DMA mask is rather odd:

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
        err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
        if (!err)
            pci_using_dac = 1;
    } else {
        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
        if (err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                dev_err(&pdev->dev, "No usable DMA configuration, aborting\n");
                goto err_dma;
            }
        }
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-09-21  DMA-API: net: intel/e1000e: fix 32-bit DMA mask handling  (Russell King, 1 file, -12/+6)
The fallback to 32-bit DMA mask is rather odd:

    err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
    if (!err) {
        err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
        if (!err)
            pci_using_dac = 1;
    } else {
        err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
        if (err) {
            err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                dev_err(&pdev->dev, "No usable DMA configuration, aborting\n");
                goto err_dma;
            }
        }
    }

This means we only set the coherent DMA mask in the fallback path if the DMA mask set failed, which is silly. This fixes it to set the coherent DMA mask only if dma_set_mask() succeeded, and to error out if either fails. Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>