Age | Commit message | Author | Files | Lines |
|
This reverts commit b593737ed8349b280fa29242c35f565b59ab3025.
Apparently it causes GPU hangs on some image load store tests.
Let's turn it back off until we figure out why.
|
|
Stacking frames is for drivers that are capable of dual-instance
encoding. That feature is not currently enabled for B frames.
Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Cc: "11.1 11.2" <mesa-stable@lists.freedesktop.org>
|
|
This is where we handle texop_texture_samples so it makes things more
consistent.
|
|
There are a few different fixups that we have to do for texture
destinations that re-arrange channels, fix hardware vs. API mismatches, or
just shrink the result to fit in the NIR destination. These were all being
done in a somewhat haphazard manner. This commit replaces all of the
shuffling with a single LOAD_PAYLOAD operation at the end and makes it much
easier to insert fixups between the texture instruction itself and the
LOAD_PAYLOAD.
Shader-db results on Haswell:
total instructions in shared programs: 6227035 -> 6226669 (-0.01%)
instructions in affected programs: 19119 -> 18753 (-1.91%)
helped: 85
HURT: 0
total cycles in shared programs: 56491626 -> 56476126 (-0.03%)
cycles in affected programs: 672420 -> 656920 (-2.31%)
helped: 92
HURT: 42
|
|
We are no longer using anything from GLSL IR in the FS backend.
|
|
The fs_visitor::emit_texture helper originated when we still had both NIR
and IR visitors for the FS backend. Since the old visitor was removed,
emit_texture serves no real purpose beyond arbitrarily splitting
heavily-linked code across two functions.
|
|
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Reviewed-by: Eric Anholt <eric@anholt.net>
|
|
|
|
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Normally, we expect SIMD8 shaders to use more instructions than SIMD4x2
shaders, since it takes four instructions to operate on a vec4 rather
than one. The benefit, however, is that each shader thread processes 8
objects instead of 2.
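As a rough illustration of the tradeoff (plain C with made-up data
layouts, standing in for the real EU code):

    /* SIMD4x2-style: each "instruction" covers a whole vec4, two
     * objects at a time (array-of-structures layout). */
    void add_vec4_simd4x2(float dst[2][4], const float a[2][4],
                          const float b[2][4])
    {
       for (int obj = 0; obj < 2; obj++)
          for (int c = 0; c < 4; c++)
             dst[obj][c] = a[obj][c] + b[obj][c];
    }

    /* SIMD8-style: four per-component "instructions", but each one
     * spans eight objects (structure-of-arrays layout). */
    void add_vec4_simd8(float dst[4][8], const float a[4][8],
                        const float b[4][8])
    {
       for (int c = 0; c < 4; c++)
          for (int lane = 0; lane < 8; lane++)
             dst[c][lane] = a[c][lane] + b[c][lane];
    }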
Surprisingly, the shader-db statistics show an improvement in both
instruction and cycle counts:
Synmark: -31.25% instructions, -29.27% cycles, 0 hurt.
Tessmark: -36.92% instructions, -37.81% cycles, 0 hurt.
Unigine Heaven: -3.42% instructions, -17.95% cycles, 0 hurt.
Shadow of Mordor:
+13.24% instructions (26 with fewer instructions, 45 with more),
-5.23% cycles (44 with fewer cycles, 27 with more cycles).
Presumably, this is because the SIMD8 URB messages are a much more
natural fit than the SIMD4x2 URB messages - there's a ton less header
setup.
I benchmarked Shadow of Mordor and Unigine Heaven on my Skylake GT3e,
and the performance seems to be the same or increase ever so slightly
(< 1 FPS difference). So I believe it's strictly superior.
There's also a lot more optimization potential in scalar mode.
This will also help us finish fp64 support, as scalar support is going
to land much sooner than vec4-mode support.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
There are a couple of cycle count changes in shader-db, but it's
basically a wash.
However, with the Broadwell scalar TCS backend enabled, many
Shadow of Mordor shaders benefit from this patch. Because we don't
batch up output writes for TCS, vec4 outputs might not have all
components defined. Many output writes therefore store undef values,
which are useless and can simply be dropped.
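A conceptual sketch of that cleanup (hypothetical IR structures, not
the actual NIR pass):

    #include <stdbool.h>

    struct value { bool is_undef; };
    struct store { struct value *src; bool removed; };

    /* Writing an undefined value to an output can never be observed,
     * so the store can simply be deleted. */
    static bool
    remove_undef_store(struct store *s)
    {
       if (s->src->is_undef) {
          s->removed = true;
          return true;   /* made progress */
       }
       return false;
    }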
With scalar TCS, stats for tessellation shaders on Broadwell:
total instructions in shared programs: 1283000 -> 1280444 (-0.20%)
instructions in affected programs: 34302 -> 31746 (-7.45%)
helped: 71
HURT: 0
total cycles in shared programs: 10798768 -> 10780682 (-0.17%)
cycles in affected programs: 158004 -> 139918 (-11.45%)
helped: 71
HURT: 0
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
shader-db statistics on Broadwell:
total instructions in shared programs: 8963409 -> 8962455 (-0.01%)
instructions in affected programs: 60858 -> 59904 (-1.57%)
helped: 318
HURT: 0
total cycles in shared programs: 71408022 -> 71406276 (-0.00%)
cycles in affected programs: 398416 -> 396670 (-0.44%)
helped: 199
HURT: 51
GAINED: 1
The only shaders affected were in Dota 2 Reborn.
It also sets up for the next optimization.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
This better reflects what it does. I plan to add other ALU
optimizations as well, so the old name would be confusing.
In preparation for that, also move the file comments about csels
above the opt_undef_csel function, and delete the comments saying
there are no other optimizations.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
According to Timothy, using program_string_id == 0 to identify the
passthrough TCS is going to be problematic for his shader cache work.
So, change it to strcmp() the name at visitor creation time.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
|
|
Avoid the % operator, since we know that curVertex is always incrementing.
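A minimal sketch of the idea (hypothetical names, assuming the index
advances one vertex at a time):

    static inline unsigned
    next_vertex(unsigned curVertex, unsigned numVerts)
    {
       unsigned next = curVertex + 1;
       return (next == numVerts) ? 0 : next;   /* wrap without % */
    }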
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
Fix static code analysis errors found by Coverity on Linux.
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
Store the color hot tile to the 8-bit w-major stencil format.
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
Fix windows in 32-bit mode when hyperthreading is disabled on Xeons.
Some support for asymmetric processor topologies.
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
We need to evaluate the threadviz knob lazily, since the
initialization order of globals is undefined.
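A sketch of the pattern (the knob name and environment-based accessor
are placeholders, not the actual KNOB machinery):

    #include <stdbool.h>
    #include <stdlib.h>

    static bool
    threadviz_enabled(void)
    {
       /* Evaluate on first use instead of at global-constructor time,
        * since the construction order of globals across translation
        * units is undefined. */
       static int cached = -1;
       if (cached < 0)
          cached = (getenv("KNOB_TOGGLE_THREADVIZ") != NULL);  /* placeholder name */
       return cached != 0;
    }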
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
|
|
This tells LLVM to always use SMEM loads for descriptors. It fixes a
regression in piglit's arb_shader_storage_buffer_object/execution/indirect.shader_test
that was caused by LLVM r268259 (but the proper fix is really here in Mesa).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
|
|
Beginning with commit 7b208a73, Unigine Valley began hanging the GPU on
Gen >= 8 platforms.
Evidently that commit allowed the scheduler to make different choices
that somehow finally ran afoul of a hardware bug in which POW and FDIV
instructions may not be followed by an instruction with two destination
registers (including compressed instructions). I presume the conditions
are more complex than that, but the internal hardware bug report (BDWGFX
bug_de 1696294) does not contain much more information.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94924
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> [v1]
Tested-by: Mark Janes <mark.a.janes@intel.com> [v1]
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
|
|
When gathering query results, swr_gather_stats was
unnecessarily stalling the entire pipeline. Results are now
collected asynchronously, with a fence marking completion.
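A generic sketch of the pattern (hypothetical types, not the swr API):

    #include <stdbool.h>
    #include <stdint.h>

    struct stats { uint64_t prims_emitted; };

    struct query {
       uint64_t     fence_value;   /* fence marking the stats as written */
       struct stats result;        /* filled in asynchronously by the backend */
    };

    /* No pipeline stall: just check whether the backend has retired
     * past the fence recorded when the query was gathered. */
    static bool
    query_ready(const struct query *q, uint64_t last_completed_fence)
    {
       return last_completed_fence >= q->fence_value;
    }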
Reviewed-By: George Kyriazis <george.kyriazis@intel.com>
|
|
This fixes: GL43-CTS.compute_shader.resource-ubo
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
The value is already a GLintptr; casting won't help.
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
|
|
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
|
|
The assert was null-checking dest_arr_parent twice. The intention
seems to be to check both dest_ and src_.
The assert was added in d3636da9.
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
|
|
The function returns GLuint values, not GLfloat ones.
v2: also fix the OES function
Cc: "11.2" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
|
|
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
|
|
Silences warnings with 32-bit Linux gcc builds and with MinGW, which
doesn't recognize the ‘t’ conversion character.
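A sketch of the usual workaround (not necessarily the exact change
made here):

    #include <stddef.h>
    #include <stdio.h>

    static void
    print_offset(const char *base, const char *p)
    {
       ptrdiff_t diff = p - base;
       /* "%td" is the C99 spelling, but MinGW's runtime doesn't understand
        * the 't' length modifier, so cast to a type with a portable
        * conversion specifier instead. */
       printf("offset = %ld\n", (long)diff);
    }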
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
|
|
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
|
|
v2:
* Declare loop index variable at loop site (idr)
* Make arrays of MI_MATH instructions 'static const' (idr)
* Remove commented debug code (idr)
* Updated comment in set_query_availability (Ken)
* Replace switch with if/else in hsw_result_to_gpr0 (Ken)
* Only divide GL_FRAGMENT_SHADER_INVOCATIONS_ARB by 4 on
hsw and gen8 (Ken)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
This matches the byte-based offset of brw_load_register_mem*.
The function is also moved into intel_batchbuffer.c, like
brw_load_register_mem*.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|