|
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Dylan Baker <baker.dylan.c@gmail.com>
|
|
Reviewed-by: Jan Vesely <jan.vesely@rutgers.edu>
|
|
Reviewed-by: Jan Vesely <jan.vesely@rutgers.edu>
|
|
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: pass.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
From spec:
" FILTER: The support for filter types other than NEAREST or
NEAREST_MIPMAP_NEAREST for the resource is written to <params>.
This indicates if sampling from such resources supports setting the
MIN/MAG filters to LINEAR values. Possible values returned are
FULL_SUPPORT, CAVEAT_SUPPORT, or NONE. If the resource or operation
is not supported, NONE is returned."
In the case of FILTER, there are well-known cases defined by the OpenGL
spec where multi-texel filtering is not allowed:
* Multi-sample textures (GL_TEXTURE_2D_MULTISAMPLE,
GL_TEXTURE_2D_MULTISAMPLE_ARRAY).
* Any resource using an integer internalformat
* Texture buffer objects
So in addition to checking that it returns NONE for unsupported
internalformats, we know that it should also return NONE for those
cases.
In other cases, it checks that the returned value is FULL_SUPPORT,
CAVEAT_SUPPORT or NONE.
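As a rough illustration, one per-combination check could look like
this (a minimal sketch, assuming a bound GL 4.3 or
ARB_internalformat_query2 context; the helper name is made up and the
integer-internalformat case is omitted):
    #include <epoxy/gl.h>
    #include <stdbool.h>

    static bool
    check_filter(GLenum target, GLenum internalformat)
    {
        GLint supported, filter;

        glGetInternalformativ(target, internalformat,
                              GL_INTERNALFORMAT_SUPPORTED, 1, &supported);
        glGetInternalformativ(target, internalformat,
                              GL_FILTER, 1, &filter);

        /* Unsupported resources, multisample targets and texture
         * buffers must report NONE (integer formats omitted here). */
        if (!supported ||
            target == GL_TEXTURE_2D_MULTISAMPLE ||
            target == GL_TEXTURE_2D_MULTISAMPLE_ARRAY ||
            target == GL_TEXTURE_BUFFER)
            return filter == GL_NONE;

        /* Otherwise any of the three documented values is accepted. */
        return filter == GL_FULL_SUPPORT ||
               filter == GL_CAVEAT_SUPPORT ||
               filter == GL_NONE;
    }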
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: fails:
* The texture buffer target is returning FULL_SUPPORT.
* Multisample targets are returning FULL_SUPPORT.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
Checks that:
* IMAGE_TEXEL_SIZE
* IMAGE_COMPATIBILITY_CLASS
* IMAGE_PIXEL_FORMAT
* IMAGE_PIXEL_TYPE
return the values defined for them in Table 3.22 of the OpenGL 4.2
specification depending on the "Image Format" passed.
For unsupported resources, this test checks that the returned value
is equal to the 'unsupported' response defined by the extension
specification.
v2: check for unsupported values, so testing these pnames could be
removed from generic-pnames
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: fails:
* IMAGE_TEXEL_SIZE
From spec:
The size of a texel when the resource when used as an image
texture is returned in <params>. This is the value from the
/Size/ column in Table 3.22. If the resource is not supported
for image textures, or if image textures are not supported, zero
is returned.
NVIDIA proprietary drivers return 1, 2 or 4, values not present
in Table 3.22.
* IMAGE_PIXEL_TYPE:
From spec:
The pixel type of the resource when used as an image texture is
returned in <params>. This is the value from the /Pixel type/
column in Table 3.22. If the resource is not supported for image
textures, or if image textures are not supported, NONE is
returned.
NVIDIA proprietary drivers return GL_UNSIGNED_INT_8_8_8_8_REV in some
cases, which, although defined in the 4.2 core spec, is not included
in Table 3.22.
* IMAGE_PIXEL_FORMAT:
From spec:
The pixel format of the resource when used as an image texture
is returned in <params>. This is the value from the /Pixel
format/ column in Table 3.22. If the resource is not supported
for image textures, or if image textures are not supported, NONE
is returned.
NVIDIA proprietary drivers return GL_R11F_G11F_B10F in some cases,
which, although defined in the 4.2 core spec, is not included in
Table 3.22.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
It is basically the same, but testing with both GetInternalformativ
and GetInternalformati64v, using the test_data struct defined in
common.h.
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: pass.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
This test adds a check for the following pnames:
* TEXTURE_COMPRESSED_BLOCK_WIDTH
* TEXTURE_COMPRESSED_BLOCK_HEIGHT
* TEXTURE_COMPRESSED_BLOCK_SIZE
For all three, the query2 spec says the following:
"If the internal format is not compressed, or the resource is not
supported, 0 is returned."
We could have classified the existing internalformats as
compressed/non-compressed (similar to color-format/non-color-format
for COLOR_ENCODING), but that seems pointless given that we already
have TEXTURE_COMPRESSED to query whether an internalformat is
compressed.
So this test queries TEXTURE_COMPRESSED and INTERNALFORMAT_SUPPORTED,
and if either of them is false, checks that the returned value for
each of those three pnames is zero.
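A minimal sketch of that check (assuming a bound GL 4.3 or
ARB_internalformat_query2 context; the helper name is made up, and
<pname> is one of the three block pnames):
    #include <epoxy/gl.h>
    #include <stdbool.h>

    static bool
    check_compressed_block_pname(GLenum target, GLenum internalformat,
                                 GLenum pname)
    {
        GLint supported, compressed, value;

        glGetInternalformativ(target, internalformat,
                              GL_INTERNALFORMAT_SUPPORTED, 1, &supported);
        glGetInternalformativ(target, internalformat,
                              GL_TEXTURE_COMPRESSED, 1, &compressed);
        glGetInternalformativ(target, internalformat, pname, 1, &value);

        /* "If the internal format is not compressed, or the resource
         *  is not supported, 0 is returned." */
        if (!supported || !compressed)
            return value == 0;

        return true;
    }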
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: pass.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
From spec:
"COLOR_ENCODING:
<skip>
Possible values for color buffers are LINEAR or SRGB, for linear or
sRGB-encoded color components, respectively. For non-color formats
(such as depth or stencil), or for unsupported resources, the value
NONE is returned."
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: pass.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
From spec:
"GET_TEXTURE_IMAGE_TYPE:
<skip>
Possible values include any value that is legal to pass for the
<type> parameter to GetTexImage, or NONE if the resource does not
support this operation, or if GetTexImage is not supported."
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: pass.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
From query2 spec:
"GET_TEXTURE_IMAGE_FORMAT:
<skip>
Possible values include any value that is legal to pass for the
<format> parameter to GetTexImage, or NONE if the resource does not
support this operation, or if GetTexImage is not supported."
The list of possible values is the same as for TEXTURE_IMAGE_FORMAT,
but depending on the GL version, GL_STENCIL_INDEX is allowed too. This
commit adds another list of possible values. We could have added a
clone method that appends extra values, but for now that would be
overkill. We could add more if we find more similar cases.
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: fails.
As with TEXTURE_IMAGE_FORMAT, in some cases it returns
GL_R11F_G11F_B10F or GL_RGB9_E5, which are internalformats, not
formats.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
"TEXTURE_IMAGE_TYPE:
<skip>
Possible values include any value that is legal to pass for the
<type> parameter to the Tex*Image*D commands, or NONE if the
resource is not supported for this operation."
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: pass.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
From spec:
"Possible values include any value that is legal to pass for the
<format> parameter to the Tex*Image*D commands, or NONE if the
resource is not supported for this operation."
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: fails:
In some cases it returns GL_R11F_G11F_B10F or GL_RGB9_E5, which are
internalformats, not valid formats.
Tested on AMD Radeon (TM) R9 380 Series: fails:
It returns values different from NONE in cases where the
target/internalformat combination is not supported (via
INTERNALFORMAT_SUPPORTED).
v2: rebased after changes in other commits
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
The ARB_internalformat_query2 specification says about
MAX_WIDTH, MAX_HEIGHT, MAX_DEPTH and MAX_COMBINED_DIMENSIONS <pnames>:
"If the resource is unsupported, zero is returned."
From the same specification:
"In the following descriptions, the term /resource/ is used
to generically refer to an object of the appropriate type that has
been created with <internalformat> and <target>."
We check whether the /resource/ is supported by trying to create the
object; if an error is raised, we consider the /resource/
"unsupported". Before, we were only checking whether the
<internalformat> was supported, independently of the <target>.
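For illustration, such a support check looks roughly like this (a
sketch assuming a bound GL 4.3 context; only two targets are shown,
and a real test picks a <format>/<type> pair that matches the
internalformat):
    #include <epoxy/gl.h>
    #include <stdbool.h>

    static bool
    resource_is_supported(GLenum target, GLenum internalformat)
    {
        GLuint id;

        while (glGetError() != GL_NO_ERROR)
            ; /* clear any stale error */

        switch (target) {
        case GL_TEXTURE_2D:
            glGenTextures(1, &id);
            glBindTexture(target, id);
            glTexImage2D(target, 0, internalformat, 16, 16, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            glDeleteTextures(1, &id);
            break;
        case GL_RENDERBUFFER:
            glGenRenderbuffers(1, &id);
            glBindRenderbuffer(target, id);
            glRenderbufferStorage(target, internalformat, 16, 16);
            glDeleteRenderbuffers(1, &id);
            break;
        default:
            return false; /* remaining targets omitted in this sketch */
        }

        return glGetError() == GL_NO_ERROR;
    }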
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: after this commit,
all the subtests of this test fail.
Now that a stricter check for unsupported resources is in place, the
test fails on NVIDIA because for some unsupported resources it returns
a value different from zero.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
From spec:
"MAX_COMBINED_DIMENSIONS: The maximum combined dimensions for the
resource is returned in <params>. The combined dimensions is the
product of the individual dimensions of the resource. For
multisampled surfaces the number of samples is considered an
additional dimension. Note that the value returned can be >= 2^32
and should be queried with the 64-bit query.
<skip>
If the resource is unsupported, zero is returned."
In this case the returned value is a combination of the values of
MAX_WIDTH, MAX_HEIGHT, MAX_DEPTH, SAMPLES and the number of faces, so
the test can check that the returned value is correct in any case.
Note that there are cases where the combined value is greater than
2^32, but the test also exercises the 32-bit query. The spec doesn't
specify what the query should return in that case, so it is assumed
that any value would be correct.
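A minimal sketch of that cross-check with the 64-bit query (assuming a
bound GL 4.3 context; whether SAMPLES and the cube faces contribute
depends on the target, as the quoted text describes):
    #include <epoxy/gl.h>
    #include <stdbool.h>

    static bool
    check_max_combined_dimensions(GLenum target, GLenum internalformat)
    {
        GLint64 combined, w, h, d, samples, expected;

        glGetInternalformati64v(target, internalformat,
                                GL_MAX_COMBINED_DIMENSIONS, 1, &combined);
        glGetInternalformati64v(target, internalformat,
                                GL_MAX_WIDTH, 1, &w);
        glGetInternalformati64v(target, internalformat,
                                GL_MAX_HEIGHT, 1, &h);
        glGetInternalformati64v(target, internalformat,
                                GL_MAX_DEPTH, 1, &d);
        glGetInternalformati64v(target, internalformat,
                                GL_SAMPLES, 1, &samples);

        expected = w;
        if (h > 0)
            expected *= h;
        if (d > 0)
            expected *= d;
        if (samples > 0)
            expected *= samples;
        if (target == GL_TEXTURE_CUBE_MAP ||
            target == GL_TEXTURE_CUBE_MAP_ARRAY)
            expected *= 6;

        /* For unsupported resources every query above returns zero,
         * so the product is zero as well. */
        return combined == expected;
    }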
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: fails for cube
map related targets. From spec:
"For cube map targets this is the maximum combined width, height and
faces"
But the NVIDIA proprietary drivers are just combining width and
height.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
In addition to the previous checks, this checks that the values
returned for supported resources are the same as the ones you get by
calling GetIntegerv with equivalent pnames like GL_MAX_TEXTURE_SIZE,
GL_MAX_3D_TEXTURE_SIZE, etc.
All of those are internalformat-independent, while GetInternalformativ
allows specifying the internalformat. So in theory the values could
differ for some internalformat. But in practice this is not happening
on any driver at this moment. The query2 spec mentions this case:
"7) There some <pnames> which it makes no sense to be qualified by
a per-format/target scope, how should we handle them?
e.g. MAX_WIDTH and MAX_HEIGHT might be the same for all formats.
e.g. properties like AUTO_GENERATE_MIPMAP and
MANUAL_GENERATE_MIPMAP might depend only on the GL version.
<skip>
A) Just use this entry point as is, if there are no per-format or
target differences, it is perfectly acceptable to have the
implementation return the same information for all valid
parameters. This does allow implementations to report caveats
that may exist for some formats but not others, even though all
formats/targets may be supported."
So at this point, taking into account the current implementation, it
makes sense to check against those values. Probably in the future this
check will need to be removed.
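For example, the MAX_WIDTH comparison looks roughly like this (a
sketch assuming a bound GL 4.3 context; only two targets are shown):
    #include <epoxy/gl.h>
    #include <stdbool.h>

    static bool
    max_width_matches_getintegerv(GLenum target, GLenum internalformat)
    {
        GLint supported, max_width, limit;

        glGetInternalformativ(target, internalformat,
                              GL_INTERNALFORMAT_SUPPORTED, 1, &supported);
        if (!supported)
            return true; /* covered by the zero-for-unsupported check */

        glGetInternalformativ(target, internalformat,
                              GL_MAX_WIDTH, 1, &max_width);

        switch (target) {
        case GL_TEXTURE_2D:
            glGetIntegerv(GL_MAX_TEXTURE_SIZE, &limit);
            break;
        case GL_TEXTURE_3D:
            glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &limit);
            break;
        default:
            return true; /* remaining targets omitted in this sketch */
        }

        return max_width == limit;
    }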
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: pass
Tested on 4.5.13399 on AMD Radeon (TM) R9 380 Series: fails:
* For MAX_WIDTH:
* For 3D textures it is returning MAX_TEXTURE_SIZE
* For texture buffers it is returning garbage: different values for
each internalformat, including negative values.
* For MAX_HEIGHT:
* Ditto for 3D textures.
* For RENDERBUFFER and TEXTURE_2D_MULTISAMPLE_ARRAY it is returning
0 even for supported values.
Anyway, take into account that even the basic dimension test (0 for
unsupported) was failing on the ATI proprietary drivers.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
Add a check for the following pnames (a subtest for each one):
* MAX_WIDTH. From spec:
"The maximum supported width for the resource is returned in
<params>. For resources with only one-dimension, this one
dimension is considered the width. If the resource is unsupported,
zero is returned."
It is only tested that it returns zero if not supported. It makes
sense to include it in this test because it is related to the other
max-dimension pnames.
* MAX_HEIGHT. From spec:
"The maximum supported height for the resource is returned in
<params>. For resources with two or more dimensions, the second
dimension is considered the height. If the resource does not have
at least two dimensions, or if the resource is unsupported, zero
is returned."
So in addition to the usual zero-test if not supported, it is tested
that if the target has less than two dimensions, it returns zero
too, even if supported.
* MAX_DEPTH: From spec:
"The maximum supported depth for the resource is returned in
<params>. For resources with three or more dimensions, the third
dimension is considered the depth. If the resource does not have
at least three dimensions, or if the resource is unsupported, zero
is returned."
So in addition to the usual zero-test if not supported, it is tested
that if the target has less than three dimensions, it returns zero
too, even if supported.
* MAX_LAYERS: From spec:
"The maximum supported number of layers for the resource is
returned in <params>. For 1D array targets, the value returned is
the same as the MAX_HEIGHT. For 2D and cube array targets, the
value returned is the same as the MAX_DEPTH. If the resource does
not support layers, or if the resource is unsupported, zero is
returned."
In addition to the usual zero-test if not supported, it is tested
that the value returned is the same as the one returned by MAX_HEIGHT
or MAX_DEPTH for the array texture targets, as sketched below.
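The MAX_LAYERS comparison, sketched (assuming a bound GL 4.3 context;
multisample array targets are left out of the sketch):
    #include <epoxy/gl.h>
    #include <stdbool.h>

    static bool
    check_max_layers(GLenum target, GLenum internalformat)
    {
        GLint layers, expected;

        glGetInternalformativ(target, internalformat,
                              GL_MAX_LAYERS, 1, &layers);

        switch (target) {
        case GL_TEXTURE_1D_ARRAY:
            /* For 1D array targets the value equals MAX_HEIGHT. */
            glGetInternalformativ(target, internalformat,
                                  GL_MAX_HEIGHT, 1, &expected);
            break;
        case GL_TEXTURE_2D_ARRAY:
        case GL_TEXTURE_CUBE_MAP_ARRAY:
            /* For 2D and cube array targets the value equals MAX_DEPTH. */
            glGetInternalformativ(target, internalformat,
                                  GL_MAX_DEPTH, 1, &expected);
            break;
        default:
            return true; /* other targets not covered in this sketch */
        }

        return layers == expected;
    }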
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: passes.
Tested on 4.5.13399 on AMD Radeon (TM) R9 380 Series: the test
doesn't pass on this ATI proprietary driver for any of the subtests.
It returns a non-zero value for unsupported combinations.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
From spec:
"IMAGE_FORMAT_COMPATIBILITY_TYPE: The matching criteria use for the
resource when used as an image textures is returned in
<params>. This is equivalent to calling GetTexParameter with
<value> set to IMAGE_FORMAT_COMPATIBILITY_TYPE. Possible values
are IMAGE_FORMAT_COMPATIBILITY_BY_SIZE or
IMAGE_FORMAT_COMPATIBILITY_BY_CLASS. If the resource is not
supported for image textures, or if image textures are not
supported, NONE is returned."
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: fails for the same
reasons it fails for INTERNALFORMAT_{X}_SIZE and INTERNALFORMAT_{X}_TYPE.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
From spec:
" INTERNALFORMAT_RED_TYPE
INTERNALFORMAT_GREEN_TYPE
INTERNALFORMAT_BLUE_TYPE
INTERNALFORMAT_ALPHA_TYPE
INTERNALFORMAT_DEPTH_TYPE
INTERNALFORMAT_STENCIL_TYPE
For uncompressed internal formats, queries for these values return
the data type used to store the component. For compressed internal
formats the types returned specify how components are interpreted
after decompression. For textures this query returns the same
information as querying GetTexLevelParameter{if}v for TEXTURE_*TYPE
would return. Possible values return include, NONE,
SIGNED_NORMALIZED, UNSIGNED_NORMALIZED, FLOAT, INT, UNSIGNED_INT,
representing missing, signed normalized fixed point, unsigned
normalized fixed point, floating-point, signed unnormalized integer
and unsigned unnormalized integer components. NONE is returned for
all component types if the format is unsupported."
So this test calls GetInternalformativ with INTERNALFORMAT_SUPPORTED:
* If it is false, it checks that the returned value is zero.
* If it is true, it checks that all returned values are from that
specific set of values.
* If it is true, it checks that all the values are the same as the
ones returned by GetTexLevelParameter.
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: fails for the
same reasons as the INTERNALFORMAT_{X}_SIZE tests.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
From spec:
" INTERNALFORMAT_RED_SIZE
INTERNALFORMAT_GREEN_SIZE
INTERNALFORMAT_BLUE_SIZE
INTERNALFORMAT_ALPHA_SIZE
INTERNALFORMAT_DEPTH_SIZE
INTERNALFORMAT_STENCIL_SIZE
INTERNALFORMAT_SHARED_SIZE
For textures this query will return the same information as
querying GetTexLevelParameter{if}v for TEXTURE_*_SIZE would return.
If the internal format is unsupported, or if a particular component
is not present in the format, 0 is written to <params>."
So this test calls GetInternalformativ with INTERNALFORMAT_SUPPORTED:
* If it is false, it checks that the returned value is 0.
* If it is true, for texture targets, it checks that the returned
value is the same as the one obtained by calling
GetTexLevelParameter{if}v.
In order to call GetTexLevelParameter, a texture needs to be created
for the given internalformat/target combination. If that fails, it is
assumed that the current implementation doesn't support that
combination, and the value is tested against 0 too. Note that this
means that the test may output some error messages even if it passes.
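For a single pname on GL_TEXTURE_2D the check is roughly as follows
(a sketch assuming a bound GL 4.3 context; a complete test matches the
<format>/<type> pair to the internalformat and loops over all texture
targets and all the SIZE pnames):
    #include <epoxy/gl.h>
    #include <stdbool.h>

    static bool
    check_red_size(GLenum internalformat)
    {
        GLint supported, query_size, tex_size;
        GLuint tex;

        glGetInternalformativ(GL_TEXTURE_2D, internalformat,
                              GL_INTERNALFORMAT_SUPPORTED, 1, &supported);
        glGetInternalformativ(GL_TEXTURE_2D, internalformat,
                              GL_INTERNALFORMAT_RED_SIZE, 1, &query_size);

        if (!supported)
            return query_size == 0;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, internalformat, 16, 16, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        if (glGetError() != GL_NO_ERROR) {
            /* Creation failed: treat the combination as unsupported. */
            glDeleteTextures(1, &tex);
            return query_size == 0;
        }

        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,
                                 &tex_size);
        glDeleteTextures(1, &tex);

        return query_size == tex_size;
    }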
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: fails in the
following cases:
* It fails when trying to create a texture with some per-spec valid
internalformat, like 1D/3D textures with
GL_COMPRESSED_SIGNED_RED_RGTC1. It returns INVALID_ENUM, as if
CompressedTexImage1D were called.
* GL_TEXTURE_BUFFER has a limited range of supported internalformats
(Table 3.15 of the 4.2 spec). So for example, creating a texture with
target GL_TEXTURE_BUFFER and internalformat GL_DEPTH_STENCIL would
fail. In those cases the query should return 0, but that is not
always the case.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
For those pnames, the test verifies that what the spec defines is
fulfilled:
NUM_SAMPLE_COUNTS
<skip>
If <internalformat> is not color-renderable, depth-renderable, or
stencil-renderable (as defined in section 4.4.4), or if <target>
does not support multiple samples (ie other than
TEXTURE_2D_MULTISAMPLE, TEXTURE_2D_MULTISAMPLE_ARRAY, or
RENDERBUFFER), 0 is returned.
SAMPLES
<skip>
If <internalformat> is not color-renderable, depth-renderable, or
stencil-renderable (as defined in section 4.4.4), or if <target>
does not support multiple samples (ie other than
TEXTURE_2D_MULTISAMPLE, TEXTURE_2D_MULTISAMPLE_ARRAY, or
RENDERBUFFER), <params> is not modified.
This test is really similar to the api-errors test in
arb_internalformat_query, with the difference that in that case an
error was defined, while here it was softened to returning some
specific values. It also covers the query1 overrun test.
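The SAMPLES part of the check is essentially this (a sketch assuming a
bound GL 4.3 context; it is meant to be called only for
target/internalformat combinations that, per the quoted text, must
leave <params> untouched):
    #include <epoxy/gl.h>
    #include <stdbool.h>

    static bool
    check_samples_unmodified(GLenum target, GLenum internalformat)
    {
        GLint params[16];
        int i;

        /* SAMPLES can never legitimately be negative, so -1 works as a
         * sentinel value. */
        for (i = 0; i < 16; i++)
            params[i] = -1;

        glGetInternalformativ(target, internalformat, GL_SAMPLES,
                              16, params);

        for (i = 0; i < 16; i++) {
            if (params[i] != -1)
                return false;
        }
        return true;
    }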
Those two tests fail on the following proprietary drivers:
* 4.5.13399 on AMD Radeon (TM) R9 380 Series
* 4.5.0 NVIDIA 352.55 on GeForce GTX 950/PCIe/SSE2
For NUM_SAMPLE_COUNTS, they report a value different from 0 (5 on
NVIDIA, 4 on ATI) for target/internalformat combinations that should
return 0 (example: GL_TEXTURE_1D + GL_COMPRESSED_RGB). It is worth
noting that for those cases, INTERNALFORMAT_SUPPORTED returns TRUE. It
is not clear whether they provide multisample capabilities for those
targets. That seems to suggest that the paragraph was copied and
pasted from query1 and is too restrictive, when the purpose of query2
itself is to "a) provide a mechanism for implementations to declare
support *above* the minimum required by the specification".
For the SAMPLES case, they change the first element of <params> in
cases where it should remain unmodified.
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
The spec includes conditions for all the pnames. In most cases, the
condition is generic, like this:
"Possible values returned are <set>. If the resource is not
supported, or if the operation is not supported, NONE is
returned."
So this test checks that for those pnames:
* If the resource is not supported (using INTERNALFORMAT_SUPPORTED),
the returned value is zero.
* If it is supported, and a list of possible values is available, the
returned value is one of those values.
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: pass.
v2: IMAGE_TEXEL_SIZE, IMAGE_COMPATIBILITY_CLASS, IMAGE_PIXEL_TYPE,
IMAGE_PIXEL_FORMAT tested in a specific test (Antía)
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
Similar to the equivalent test for arb_internalformat_query, but in
addition to testing that an invalid pname/target combination returns
INVALID_ENUM, it also checks that a valid combination returns
NO_ERROR, testing both GetInternalformativ and GetInternalformati64v.
The rationale is that it is really likely that the implementation of
arb_internalformat_query2 would reuse a lot of bits of
arb_internalformat_query, so we want to be sure that a combination
that was invalid with arb_internalformat_query is not considered
invalid by arb_internalformat_query2.
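One iteration of that check, sketched (assuming piglit's
piglit_check_gl_error() helper from piglit-util-gl.h and a bound
context that exposes both extensions):
    #include "piglit-util-gl.h"

    static bool
    check_query2_error(GLenum target, GLenum internalformat,
                       GLenum pname, bool valid_combination)
    {
        GLint params32[16];
        GLint64 params64[16];
        GLenum expected = valid_combination ? GL_NO_ERROR : GL_INVALID_ENUM;
        bool pass = true;

        glGetInternalformativ(target, internalformat, pname,
                              16, params32);
        pass = piglit_check_gl_error(expected) && pass;

        glGetInternalformati64v(target, internalformat, pname,
                                16, params64);
        pass = piglit_check_gl_error(expected) && pass;

        return pass;
    }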
Tested on NVIDIA GeForce GTX 950 - NVIDIA 352.55: pass
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
The arb_internalformat_query tests were taking query2 into account, in
order to avoid testing some pnames/internalformats that are valid with
query2. But taking into account how deeply arb_internalformat_query2
extended and changed the behaviour of GetInternalformativ, it is
really clearer to isolate both.
With this commit the arb_internalformat_query tests run when only
arb_internalformat_query is present, and the arb_internalformat_query2
tests run when both are present (as arb_internalformat_query2 has
arb_internalformat_query as a requirement).
This patch makes this one obsolete:
http://lists.freedesktop.org/archives/piglit/2015-October/017746.html
Acked-by: Dave Airlie <airlied@redhat.com>
|
|
v2:
- Move to tests/spec/glsl-1.30 (Timothy)
- Fix code style and rename some variables (Iago)
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
|
|
|
|
|
|
|
|
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
|
|
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
|
|
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
|
|
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
|
|
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
|
|
Only 2D images are supported so far.
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
|
|
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
|
|
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
|
|
So that we can pass arguments to specify a format set name and an
option such as init-by-rendering.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94322
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
As an alternative to fbo_formats_init(), which takes a single string.
fbo_formats_init() doesn't work for tests which may take both a format
set name and some other arguments.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
Move the error message and exit() into the latter.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
This test puts the pos, size, rotation and color info for four objects
in an array which is stored in a UBO. For each drawing command, index
into the array to get the object parameters.
Signed-off-by: Brian Paul <brianp@vmware.com>
|
|
Now we can specify the format group on the command line and have it
actually work.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
Signed-off-by: Brian Paul <brianp@vmware.com>
|
|
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93840
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
|
|
Double and float derived types also need explicit
conversions.
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Jan Vesely <jan.vesely@rutgers.edu>
|
|
This adds a very simple test of atomicAdd on a shared variable.
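Roughly the kind of compute shader involved (a hypothetical sketch,
not the actual shader_test source):
    /* Each invocation bumps a shared counter; invocation 0 copies the
     * result to an SSBO so the CPU side can verify it equals the
     * local group size. */
    static const char *cs_source =
        "#version 430\n"
        "layout(local_size_x = 64) in;\n"
        "layout(std430, binding = 0) buffer Result { uint result; };\n"
        "shared uint counter;\n"
        "void main()\n"
        "{\n"
        "    if (gl_LocalInvocationIndex == 0u)\n"
        "        counter = 0u;\n"
        "    barrier();\n"
        "    atomicAdd(counter, 1u);\n"
        "    barrier();\n"
        "    if (gl_LocalInvocationIndex == 0u)\n"
        "        result = counter; /* expect 64 */\n"
        "}\n";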
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Tested-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Tested-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
|
|
Test that glReadPixels for an area which reaches out of bounds
behaves like a glReadPixels into a sub-rectangle for the valid area.
This behavior reduces the number of corner cases associated with
this function.
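The core of the comparison, sketched (assuming a bound GL context
whose read buffer is w x h pixels; the 8-pixel margin is arbitrary):
    #include <epoxy/gl.h>
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    static bool
    compare_clipped_read(int w, int h)
    {
        /* Read a rectangle that starts before the origin and extends
         * past the framebuffer on every side. */
        int x0 = -8, y0 = -8, big_w = w + 16, big_h = h + 16;
        GLubyte *big = calloc(big_w * big_h * 4, 1);
        GLubyte *ref = calloc(w * h * 4, 1);
        bool pass = true;
        int y;

        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(x0, y0, big_w, big_h, GL_RGBA, GL_UNSIGNED_BYTE, big);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, ref);

        /* Inside the oversized read, window pixel (x, y) lands at
         * index (y - y0) * big_w + (x - x0); only that region is
         * defined, and it must match the in-bounds reference read. */
        for (y = 0; y < h; y++) {
            GLubyte *row = big + ((y - y0) * big_w - x0) * 4;
            if (memcmp(row, ref + y * w * 4, w * 4) != 0) {
                pass = false;
                break;
            }
        }

        free(big);
        free(ref);
        return pass;
    }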
v2 (Brian Paul):
- Rename test and change path
- Use larger window dimensions and compute big buffer dims at run-time
- Use bool instead of GLboolean
- Remove extra function arguments
- Free allocated data
- Exercise more clipping code
v3:
- Replace extra &= with = (Brian Paul)
- printf only when an error is triggered
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92193
Signed-off-by: Nanley Chery <nanley.g.chery@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
From the ARB_sample_shading spec:
"gl_NumSamples is the total
number of samples in the framebuffer, or one if rendering to a
non-multisample framebuffer"
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
|
|
The ARB_compute_shader spec says:
"If the work group count in any dimension is zero, no work groups
are dispatched."
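The gist of what gets exercised (a fragment, assuming a bound GL 4.3
context with a compute program in use that writes to a buffer):
    /* A zero count in any dimension must dispatch no work groups, so
     * nothing the compute shader would write may change. */
    glDispatchCompute(0, 1, 1);
    glDispatchCompute(4, 0, 4);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    /* ...read back the buffer the shader writes and check that it
     * still holds its initial contents... */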
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94100
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
|
|
The ARB_compute_shader spec says:
"If the work group count in any dimension is zero, no work groups
are dispatched."
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
|