author    Dave Gordon <david.s.gordon@intel.com>    2015-05-12 12:06:49 +0100
committer John Harrison <John.C.Harrison@Intel.com> 2016-06-28 17:19:17 +0100
commit    f41ecccac660b359474e896fd2d6bcfc0a93ecd9 (patch)
tree      9164aa41db2907778f85808367b5b71e93797338
parent    4f52a62a47369a88e6d68efa86384df61b032a2d (diff)
drm/i915: add i915_wait_request() call after i915_add_request_no_flush()
Per-context initialisation GPU instructions (which are injected directly into the ringbuffer rather than being submitted as a batch) should not be allowed to mix with user-generated batches in the same submission: doing so would confuse the GuC (which might merge a subsequent preemptive request with the non-preemptive initialisation code) and the scheduler, which would not know how to reinject a non-batch request if it were the victim of preemption. Therefore, we should wait for the initialisation request to complete before making the newly-initialised context available for user-mode submissions.

Here, we add a call to i915_wait_request() after each existing call to i915_add_request_no_flush(): in i915_gem_init_hw(), for the default per-engine contexts, and in intel_lr_context_deferred_create(), for all others.

Adapted from Alex's earlier patch, which added the wait only to intel_lr_context_render_state_init(), and which John Harrison was dubious about: "JH thinks this isn't a good idea. Why do we need to wait?". But we will need to after all, if only because of preemption.

For: VIZ-2021
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
 drivers/gpu/drm/i915/i915_gem.c  | 10 ++++++++++
 drivers/gpu/drm/i915/intel_lrc.c | 10 ++++++++++
 2 files changed, 20 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index c4c1555e163e..9ae0114cfe70 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5501,6 +5501,16 @@ i915_gem_init_hw(struct drm_device *dev)
}
i915_add_request_no_flush(req);
+
+	/*
+	 * GuC firmware will try to collapse its DPC work queue if the
+	 * new one is for the same context. So the following breadcrumb
+	 * could be appended to this batch and submitted as one batch.
+	 * Wait here to make sure the context state init is finished
+	 * before any other submission to the GuC.
+	 */
+ if (i915.enable_guc_submission)
+ ret = i915_wait_request(req);
}
out:
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 873a5507d70d..8c7bd66d6008 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2819,6 +2819,16 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
goto error_ringbuf;
}
i915_add_request_no_flush(req);
+
+	/*
+	 * GuC firmware will try to collapse its DPC work queue if the
+	 * new one is for the same context. So the following breadcrumb
+	 * could be appended to this batch and submitted as one batch.
+	 * Wait here to make sure the context state init is finished
+	 * before any other submission to the GuC.
+	 */
+ if (i915.enable_guc_submission)
+ ret = i915_wait_request(req);
}
return 0;