author     Daniel Vetter <daniel.vetter@ffwll.ch>  2012-06-26 11:02:24 +0200
committer  Daniel Vetter <daniel.vetter@ffwll.ch>  2012-06-26 19:38:12 +0200
commit     04aac1ecacf34284645734b184adb65592f9a87c (patch)
tree       aa9eb86bd99f911db39e7e62d2715e0fe3a20b45
parent     25c0af768c679e44587f4cf66934d00b84d70061 (diff)

drm/i915: "Flush Me Harder" required on gen6+ (for-QA)
The prep to remove the flushing list in

    commit cc889e0f6ce6a63c62db17d702ecfed86d58083f
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Wed Jun 13 20:45:19 2012 +0200

        drm/i915: disable flushing_list/gpu_write_list

causes quite some decent regressions. We can fix this by setting the
CS_STALL bit to ensure that the following seqno write happens only after
the cache flush has completed. But only do that when the caller actually
wants the flush (and not also when we invalidate caches before starting
the next batch).

I've looked through all our ancient scrolls about gen6+ pipe control
workarounds, and this seems to be indeed a legal combination: We're
allowed to set the CS_STALL bit when we flush the render cache (which
we do).

While yelling at this code, also pass back the return value from
intel_emit_post_sync_nonzero_flush properly.

v2: Instead of emitting more pipe controls, set the CS_STALL bit on the
write flush as suggested by Chris Wilson. It seems to work, too.

Cc: Eric Anholt <eric@anholt.net>
Cc: stable@vger.kernel.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=51429
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
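In essence, the change does two things: it gates PIPE_CONTROL_CS_STALL on
flush_domains, and it propagates the error code from the SNB workaround.
Below is a minimal standalone C sketch of that logic. The bit values are
assumed to mirror the i915 definitions, and emit_post_sync_nonzero_flush()
and emit_pipe_control() are hypothetical stand-ins, not the real kernel
helpers; the actual change is in the diff that follows.

#include <stdio.h>

/* Bit values assumed to mirror the i915 PIPE_CONTROL definitions. */
#define PIPE_CONTROL_CS_STALL                  (1 << 20)
#define PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH (1 << 12)

/* Hypothetical stand-ins for the real workaround and emission helpers. */
static int emit_post_sync_nonzero_flush(void) { return 0; }
static int emit_pipe_control(unsigned int flags)
{
	printf("PIPE_CONTROL flags = %#x\n", flags);
	return 0;
}

static int gen6_flush_sketch(unsigned int invalidate_domains,
			     unsigned int flush_domains)
{
	unsigned int flags = 0;
	int ret;

	/* SNB workaround; its error code is now passed back to the caller. */
	ret = emit_post_sync_nonzero_flush();
	if (ret)
		return ret;

	if (flush_domains)
		/*
		 * Stall the command streamer so the following seqno write
		 * cannot land before the render cache flush has completed.
		 */
		flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
			 PIPE_CONTROL_CS_STALL;

	/* invalidate_domains would add the cache-invalidate bits here. */
	(void)invalidate_domains;

	return emit_pipe_control(flags);
}

int main(void)
{
	return gen6_flush_sketch(0, 1); /* write flush: CS_STALL gets set */
}

Compiled as-is, gen6_flush_sketch(0, 1) takes the write-flush path and sets
CS_STALL, while gen6_flush_sketch(1, 0) leaves it clear, matching the
invalidate-only case the message describes.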
-rw-r--r--  drivers/gpu/drm/i915/intel_ringbuffer.c  10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index f30a53a8917..dce4d1a492a 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -219,7 +219,9 @@ gen6_render_ring_flush(struct intel_ring_buffer *ring,
 	int ret;
 
 	/* Force SNB workarounds for PIPE_CONTROL flushes */
-	intel_emit_post_sync_nonzero_flush(ring);
+	ret = intel_emit_post_sync_nonzero_flush(ring);
+	if (ret)
+		return ret;
 
 	/* Just flush everything. Experiments have shown that reducing the
 	 * number of bits based on the write domains has little performance
@@ -233,6 +235,12 @@ gen6_render_ring_flush(struct intel_ring_buffer *ring,
 	flags |= PIPE_CONTROL_VF_CACHE_INVALIDATE;
 	flags |= PIPE_CONTROL_CONST_CACHE_INVALIDATE;
 	flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
+	/*
+	 * Ensure that any following seqno writes only happen when the render
+	 * cache is indeed flushed (but only if the caller actually wants that).
+	 */
+	if (flush_domains)
+		flags |= PIPE_CONTROL_CS_STALL;
 
 	ret = intel_ring_begin(ring, 6);
 	if (ret)