| author | Jerome Glisse <jglisse@redhat.com> | 2010-01-07 12:39:21 +0100 |
|---|---|---|
| committer | Dave Airlie <airlied@redhat.com> | 2010-01-08 13:09:59 +1000 |
| commit | cafe6609d6dc0a6a278f9fdbb59ce4d761a35ddd | |
| tree | a3e15eabffd6e10bed1ef639fc2f2e087c67b047 /drivers/gpu/drm/radeon/r600_blit_kms.c | |
| parent | 62cdc0c20663ef840a94850892517b2b7f584904 | |
drm/radeon/kms: Schedule host path read cache flush through the ring V2
The R300 family will hard-lock if the host path read cache flush is
done through MMIO to HOST_PATH_CNTL, but scheduling the same flush
through the ring appears to be harmless. This patch removes the hdp_flush
callback and adds a flush after each fence emission, which means a flush
after each IB schedule. Thus we should get the same behavior without the
hard lockup.
Tested on the R100, R200, R300, R400, R500, R600 and R700 families.
V2: Adjust fence counts in r600_blit_prepare_copy()
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
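
To make the mechanism concrete, here is a minimal sketch of the two flush paths in radeon KMS style. It is illustrative only, not the literal patch: the helper name is hypothetical, and the exact register/flag choice (RADEON_HOST_PATH_CNTL with RADEON_HDP_READ_BUFFER_INVALIDATE) should be read as an assumption about the pre-R600 path.

```c
/*
 * Illustrative sketch only, not the literal patch.  The helper name is
 * hypothetical; RREG32()/WREG32(), radeon_ring_write(), PACKET0() and the
 * RADEON_HOST_PATH_CNTL / RADEON_HDP_READ_BUFFER_INVALIDATE defines are the
 * existing radeon KMS idioms this sketch assumes.
 */
static void rxxx_ring_hdp_flush(struct radeon_device *rdev)
{
	u32 hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL);

	/* Old path (hard-locks R300-class chips): flush via direct MMIO.
	 *   WREG32(RADEON_HOST_PATH_CNTL,
	 *          hdp_cntl | RADEON_HDP_READ_BUFFER_INVALIDATE);
	 */

	/* New path: emit the same register write as ring packets, so the CP
	 * performs the flush in order, right after the fence it accompanies. */
	radeon_ring_write(rdev, PACKET0(RADEON_HOST_PATH_CNTL, 0));
	radeon_ring_write(rdev, hdp_cntl | RADEON_HDP_READ_BUFFER_INVALIDATE);
	radeon_ring_write(rdev, PACKET0(RADEON_HOST_PATH_CNTL, 0));
	radeon_ring_write(rdev, hdp_cntl);
}
```

Because the flush is now part of fence emission, it inherits the ordering guarantees of the ring instead of racing with in-flight commands, which is why every IB schedule ends up flushed without touching the register over MMIO.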
Diffstat (limited to 'drivers/gpu/drm/radeon/r600_blit_kms.c')
-rw-r--r-- | drivers/gpu/drm/radeon/r600_blit_kms.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
```diff
diff --git a/drivers/gpu/drm/radeon/r600_blit_kms.c b/drivers/gpu/drm/radeon/r600_blit_kms.c
index 9aecafb51b66..8787ea89dc6e 100644
--- a/drivers/gpu/drm/radeon/r600_blit_kms.c
+++ b/drivers/gpu/drm/radeon/r600_blit_kms.c
@@ -577,9 +577,9 @@ int r600_blit_prepare_copy(struct radeon_device *rdev, int size_bytes)
 	ring_size = num_loops * dwords_per_loop;
 	/* set default + shaders */
 	ring_size += 40; /* shaders + def state */
-	ring_size += 5; /* fence emit for VB IB */
+	ring_size += 7; /* fence emit for VB IB */
 	ring_size += 5; /* done copy */
-	ring_size += 5; /* fence emit for done copy */
+	ring_size += 7; /* fence emit for done copy */
 	r = radeon_ring_lock(rdev, ring_size);
 	WARN_ON(r);
```
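
The blit-path hunk above is only bookkeeping: with the HDP flush folded into fence emission, each fence costs two extra dwords on the ring (a register-write packet header plus its data word), so the space reserved per fence grows from 5 to 7. A hedged sketch of that accounting follows, assuming the R600 flush is a single register write; the R_005480_HDP_MEM_COHERENCY_FLUSH_CNTL name and the value written are assumptions, not quoted from the patch.

```c
/*
 * Sketch of the dword accounting only -- not the literal fence-emit code.
 * One register write scheduled through the ring is two dwords (header +
 * value), which is why each fence reservation grows from 5 to 7 dwords.
 */
static void r600_fence_hdp_flush_sketch(struct radeon_device *rdev)
{
	/* +1 dword: PACKET0-style header for a one-register write (assumed define) */
	radeon_ring_write(rdev, PACKET0(R_005480_HDP_MEM_COHERENCY_FLUSH_CNTL, 0));
	/* +1 dword: the value that kicks the HDP read-cache flush */
	radeon_ring_write(rdev, 0x1);
}
```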