path: root/drivers/char/agp/via-agp.c
author    Eric Sandeen <sandeen@redhat.com>    2016-06-23 16:54:46 -0500
committer Dan Williams <dan.j.williams@intel.com>    2016-06-27 12:18:44 -0700
commit    023954351fae0e34ba247cff4d798c98290b20a4 (patch)
tree      f7e85f3898ea6673dcee769a323790a288f06e5c /drivers/char/agp/via-agp.c
parent    4995734e973a2c2e9c6f6413cbad9913fc4df0dc (diff)
dax: fix offset overflow in dax_io
This isn't functionally apparent for some reason, but when we test io at
extreme offsets at the end of the loff_t range, such as in fstests xfs/071,
the calculation of "max" in dax_io() can be wrong due to pos + size
overflowing.

For example,

 # xfs_io -c "pwrite 9223372036854771712 512" /mnt/test/file

enters dax_io with:

 start  0x7ffffffffffff000
 end    0x7ffffffffffff200

and the rounded up "size" variable is 0x1000.  This yields:

 pos + size  0x8000000000000000 (overflows loff_t)
 end         0x7ffffffffffff200

Due to the overflow, the min() function picks the wrong value for the "max"
variable, and when we send (max - pos) into i.e. copy_from_iter_pmem() it is
also the wrong value.  This somehow(tm) gets magically absorbed without
incident, probably because iter->count is correct.  But it seems best to fix
it up properly by comparing the two values as unsigned.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Diffstat (limited to 'drivers/char/agp/via-agp.c')
0 files changed, 0 insertions, 0 deletions
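
The arithmetic described in the commit message is easy to reproduce outside
the kernel.  Below is a minimal, hypothetical userspace sketch (not the patch
itself, and not the kernel's dax_io() code) showing why a signed min() of
pos + size and end picks the wrong value once the sum no longer fits in
loff_t, and why comparing the same two quantities as unsigned picks end as
intended.  The names pos, size and end follow the commit message; loff_t is
modelled as int64_t, and the demo does its addition in unsigned arithmetic so
the sketch itself avoids undefined signed overflow.

/*
 * Standalone sketch of the overflow described above, using the values
 * from the xfs/071 example.  Assumptions: loff_t modelled as int64_t,
 * a two's-complement target for the signed reinterpretation.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	int64_t pos = 0x7ffffffffffff000LL;	/* start of the write     */
	int64_t end = 0x7ffffffffffff200LL;	/* end of the requested io */
	uint64_t size = 0x1000;			/* rounded-up size        */

	/* 0x8000000000000000: does not fit in a signed 64-bit loff_t. */
	uint64_t sum = (uint64_t)pos + size;

	/*
	 * Reinterpreted as signed (implementation-defined, but a large
	 * negative value on the usual two's-complement targets), so a
	 * signed min() picks it instead of "end".
	 */
	int64_t signed_sum = (int64_t)sum;
	int64_t wrong_max = signed_sum < end ? signed_sum : end;

	/* Comparing the two values as unsigned picks "end". */
	uint64_t right_max = sum < (uint64_t)end ? sum : (uint64_t)end;

	printf("pos + size   = 0x%" PRIx64 "\n", sum);
	printf("signed min   = 0x%" PRIx64 " (wrong)\n", (uint64_t)wrong_max);
	printf("unsigned min = 0x%" PRIx64 " (end, as intended)\n", right_max);
	return 0;
}

Built with, e.g., cc overflow-demo.c on a common two's-complement target,
only the unsigned comparison yields end (0x7ffffffffffff200); the signed one
returns the wrapped negative value, which is what made "max" wrong in dax_io().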