Age | Commit message | Author | Files | Lines |
|
v2: for some reason, the bigger size has more precision issues
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
|
|
Tests various texture sampling functions using constant 0 values for
the arguments. The i965 driver has optimisations for trailing 0
arguments to sampler messages so the intention is to test these code
paths.
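For illustration, a minimal sketch of the kind of shader involved (these
are not the test's actual shaders; piglit-style C with an embedded GLSL
string):

    /* Sampling calls whose trailing lod/offset arguments are constant
     * zeros, which the i965 backend can drop from the sampler message. */
    static const char *fs_source =
        "#version 130\n"
        "uniform sampler2D tex;\n"
        "void main()\n"
        "{\n"
        "    gl_FragColor = textureLod(tex, vec2(0.0), 0.0)\n"
        "                 + textureOffset(tex, vec2(0.0), ivec2(0));\n"
        "}\n";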
|
|
The test doesn't support these cases yet, since the shaders
are hardcoded to use 2D samplers.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Testing with PBO currently fails with the Mesa i965 driver.
V2: Fix a copy-paste error.
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
Testing with PBO currently fails with the Mesa i965 driver.
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
While writing softpipe support I realised we didn't have tests for any of
this. This just adds a new parameter to texwrap to test texture offsets.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
The code in question passes "-3" for the X offset
of a TexSubImage2D operation. TexSubImage2D is
defined to generate INVALID_VALUE for offsets that
are less than the negative border width, and
that is what the test expects. However,
TexSubImage2D is also defined to generate
INVALID_OPERATION for offsets that are not
multiples of 4 when using S3TC textures, as this
test does. Therefore, implementations can
legitimately generate either error here. To avoid
ambiguity, use -4 instead.
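A minimal sketch of the now-unambiguous call (hypothetical wrapper; the
real test first uploads S3TC-compressed data):

    #include <assert.h>
    #include <GL/gl.h>

    /* With an S3TC texture bound, x/y offsets must be multiples of 4
     * (the block size), so -4 is block-aligned and can only trigger
     * INVALID_VALUE, never INVALID_OPERATION. */
    static void
    check_negative_offset_error(const GLubyte *pixels)
    {
        glTexSubImage2D(GL_TEXTURE_2D, 0, -4, 0, 4, 4,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        assert(glGetError() == GL_INVALID_VALUE);
    }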
Passes on GeForce GTX 680 (binary driver 346.47)
Signed-off-by: James Jones <jajones@nvidia.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
|
|
The copyteximage test tries to render each layer of a 3D texture by
rendering a quad with the same z value in the texture coordinates for
each vertex. It was picking this value by just dividing the layer
number by the depth of the texture. I think this is wrong because it
will effectively point to the nearest face of the cube represented by
the 3D texel that we want. This point is equidistant from two
texels, so it might be valid for an implementation to pick the other
texel. What we actually want is to set the texture coordinate to the
center of the texel. This patch makes it do that by just adding 0.5
before doing the division.
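In code form the fix amounts to something like this (illustrative
helper, not the test's exact code):

    /* Sample the center of the layer rather than its near face, which
     * sits exactly between two texels. */
    static float
    layer_coord(int layer, int depth)
    {
        return ((float) layer + 0.5f) / (float) depth;
    }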
This doesn't seem to make any practical difference on Intel hardware
or with the software renderer in Mesa but at least in theory I think
it is more correct.
|
|
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Make sure to disable stencil textures everywhere we disable depth ones.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
This test reproduces the cause of Mesa Bug 89526 in Piglit. While
investigating 89526, it was discovered that Piglit had no tests that created a
cube map texture without glTexStorage2D and then called glGenerateMipmap on
it. For this reason, the offending commit was upstreamed before the failure
was caught.
This test successfully fails when commit 1ee000a is present and passes when
1ee000a is reverted.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
The textureProj tests multiply expected texture coordinates by the projector
in advance so that when the driver does the division we obtain the same
coordinates. However, the division can lead to small rounding errors that
can affect the selected layer and fail the tests. This is currently happening
on Intel hardware for all projector tests involving 3D textures.
When we test a 3D texture for texture level 0 we have 32 layers, which
means that each layer takes 1/32 = 0.03125 space in the [0, 1] texture
coordinate space. The test uses 0.5 for the Z coordinate, which is exactly
the boundary between layers 15 and 16 (16 * 0.03125 = 0.5). Because we
first multiply 0.5 by the projector on the CPU and the driver then divides
the coordinate on the GPU, the result may be subject to rounding/precision
errors, and if the result of this operation is even slightly smaller than 0.5
the hardware will select layer 15 instead of layer 16, leading to the
test failures we currently see, at least on Intel hardware, for all piglit
tests that involve textureProj with 3D textures.
The patch prevents this rounding from affecting the result of the test by
using 0.51 as the target texture coordinate instead of 0.5. Because 0.51
is 0.01 into layer 16, we leave a small margin for rounding/precision
errors that won't lead the hardware to select a different layer.
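A worked sketch of the margin, using the numbers from the description
above (illustrative code, not the test's):

    #include <math.h>

    /* 32 layers, so layer = floor(z * 32). With z = 0.5 any downward
     * rounding in the projector round trip flips layer 16 to 15; with
     * z = 0.51 there is about 0.01 of slack. */
    static int
    selected_layer(float z, float projector)
    {
        float projected = z * projector;       /* done on the CPU */
        float divided = projected / projector; /* done by the GPU */
        return (int) floorf(divided * 32.0f);
    }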
This fixes all projector tests on Intel hardware:
bin/tex-miplevel-selection *ProjGradARB 3D -fbo -auto
bin/tex-miplevel-selection *ProjLod 3D -fbo -auto
bin/tex-miplevel-selection textureProj 3D -fbo -auto
bin/tex-miplevel-selection textureProjGrad 3D -fbo -auto
bin/tex-miplevel-selection textureProjGradOffset 3D -fbo -auto
bin/tex-miplevel-selection textureProjOffset 3D -fbo -auto
bin/tex-miplevel-selection "textureProj(bias)" 3D -fbo -auto
bin/tex-miplevel-selection "textureProjOffset(bias)" 3D -fbo -auto
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=81405
|
|
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Laura Ekstrand <laura@jlekstrand.net>
|
|
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Laura Ekstrand <laura@jlekstrand.net>
|
|
By the time piglit_init() is executed, the context has been created and the
dispatch table has been initialised.
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Since this test only uses pre-compressed DXT5 data, it can safely run
with GL_ANGLE_texture_compression_dxt5. This extension is advertised
by at least the Mesa i965 driver even when libtxc_dxtn is not
installed.
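A sketch of the corresponding requirement check, using piglit's
piglit_is_extension_supported() helper:

    /* Accept either extension: the pre-compressed DXT5 data needs no
     * online compressor, so the ANGLE extension is sufficient. */
    if (!piglit_is_extension_supported("GL_EXT_texture_compression_s3tc") &&
        !piglit_is_extension_supported("GL_ANGLE_texture_compression_dxt5"))
        piglit_report_result(PIGLIT_SKIP);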
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
OpenGL actually allows creating a cube map with different sizes and
formats per face. However, such a texture can't actually be used for
texturing.
This test checks that no unexpected error is raised when such a
texture is created and that the GL actually keeps track of the per-face
sizes.
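For illustration, a sketch of the kind of sequence being checked
(hypothetical sizes):

    #include <GL/gl.h>

    static void
    create_inconsistent_cube_map(void)
    {
        GLint w;

        /* Faces may differ in size without raising an error... */
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA8,
                     16, 16, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGBA8,
                     8, 8, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        /* ...and the GL must track each face's size separately. */
        glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0,
                                 GL_TEXTURE_WIDTH, &w);  /* expect 8 */
    }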
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
|
|
Since GL_ABGR_EXT was extension number 1 to the GL spec, it didn't take
packed formats into account. As far as I can tell from the way the packed
formats extensions are written, packed formats with GL_ABGR_EXT aren't
allowed by the spec. NVIDIA allows it but AMD doesn't, and our driver
hasn't allowed it with UNSIGNED_INT_5_5_5_1 as of c471b09bf4. Let's stop
testing invalid things.
Tested-by: Mark Janes <mark.a.janes@intel.com>
Reviewed-by: Samuel Iglesias Gonsalvez <siglesias@igalia.com>
|
|
If the pbo option is given on the command line then the image sub-data
will be uploaded from a PBO rather than directly from the
malloc'd array. This is worth testing because the drivers often have
different code-paths for PBO uploads.
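A minimal sketch of the PBO path (illustrative names; requires GL 2.1+
or GL_ARB_pixel_buffer_object):

    #include <GL/gl.h>

    static void
    upload_via_pbo(const void *pixels, GLsizeiptr size,
                   int x, int y, int w, int h)
    {
        GLuint pbo;

        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, size, pixels, GL_STREAM_DRAW);
        /* With a PBO bound, the data argument is a byte offset into
         * the buffer rather than a client pointer. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, (const void *) 0);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
        glDeleteBuffers(1, &pbo);
    }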
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
If texsubimage is passed cube_map_array on the command line it will
try updating a subregion of a cube map array. All of the faces of all
of the layers of the texture are rendered using a special vertex
shader to modify the texture coordinates so that they pick a
particular face based on the z coordinate modulo 6.
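A sketch of the coordinate trick (illustrative, not the test's exact
shader):

    /* Turn a flat face-layer index in texcoord.z into a cube direction
     * plus array layer for a samplerCubeArray lookup. */
    static const char *vs_source =
        "#version 130\n"
        "const vec3 face_dir[6] = vec3[6](\n"
        "    vec3( 1.0, 0.0, 0.0), vec3(-1.0, 0.0, 0.0),\n"
        "    vec3( 0.0, 1.0, 0.0), vec3( 0.0, -1.0, 0.0),\n"
        "    vec3( 0.0, 0.0, 1.0), vec3( 0.0, 0.0, -1.0));\n"
        "out vec4 cube_coord;\n"
        "void main()\n"
        "{\n"
        "    int face = int(gl_MultiTexCoord0.z) % 6;\n"
        "    int layer = int(gl_MultiTexCoord0.z) / 6;\n"
        "    cube_coord = vec4(face_dir[face], float(layer));\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "}\n";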
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
textureSize.c: In function 'generate_GLSL':
textureSize.c:367:22: warning: 'gs' may be used uninitialized in this function [-Wmaybe-uninitialized]
if (!vs || (gs_code && !gs) || !fs)
^
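The fix is presumably a plain initialization, along these lines
(sketch; assuming gs is a shader handle):

    GLuint gs = 0;  /* only written when gs_code is non-NULL */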
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
tex-miplevel-selection.c: In function 'piglit_init':
tex-miplevel-selection.c:919:58: warning: 'num_layers' may be used uninitialized in this function [-Wmaybe-uninitialized]
(gltarget != GL_TEXTURE_3D && layer == TEST_LAYER % num_layers)) {
^
tex-miplevel-selection.c:486:8: warning: 'type_str' may be used uninitialized in this function [-Wmaybe-uninitialized]
if (!strcmp(type_str, "float"))
^
tex-miplevel-selection.c:764:4: warning: 'target_str' may be used uninitialized in this function [-Wmaybe-uninitialized]
sprintf(fscode, GL3_FS_CODE, version, target_str,
^
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
If the 'array' command line option is passed to the texsubimage test
it will now try updating subregions of 1D and 2D array textures. This
requires a shader to render.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
Previously the test was updating subregions of the test texture with
the same values that were already in the texture. Therefore an
implementation that does nothing on glTexSubImage2D would pass, which
doesn't seem very useful. This patch makes it create two reference
images with different values. At each update the entire texture is
recreated with the values from the first image and then a sub region
is updated with the values from the second image. The values are then
compared as before via glReadPixels but at each texel it decides
whether to compare with the first or second image depending on whether
the texel is in the updated region.
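The per-texel rule amounts to something like this (hypothetical
helpers):

    /* Pick the reference image per texel depending on whether it falls
     * inside the updated subregion; texel_from() and the two image
     * arrays are hypothetical. */
    static const GLubyte *
    expected_texel(int x, int y, int rx, int ry, int rw, int rh)
    {
        if (x >= rx && x < rx + rw && y >= ry && y < ry + rh)
            return texel_from(second_image, x, y); /* updated region */
        return texel_from(first_image, x, y);      /* recreated base */
    }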
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Previously when testing a 3D texture the test would just draw a single
image with the width and height of the texture and the z coordinates
set to span across the depth. This wouldn't end up drawing all of the
texels in the texture, so instead it will now render all of the images
in a vertical line. In order to do this the test needs a taller window
than the default 160 pixels.
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
When the texture target is GL_TEXTURE_3D the test wasn't updating a
sub-region of the texture due to what looks like a typo. The test was
passing anyway because the data it uploads is the same as the original
data, so doing nothing is valid behaviour according to the test.
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Silences Coverity report about missing break in switch.
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Fix a self-assignment defect reported by Coverity.
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
This way we can actually reuse it in {arb,ext}_timer_query.
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
|
|
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
|
|
This avoids having to flush rendering after every texture operation.
Combined with a vc4 driver fix, this reduces the runtime of the test in
simulation from 160s to 44s.
It's still provoking a lot of flushes, because each slice of the 3D
texture upload is being DISCARD_RANGE-mapped individually.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
There are only 4x4 texels, so let's just draw each one, and actually
sample them all.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Editing was irritating because the indentation was wrong.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
IIRC they are not a standard C feature. At any rate, they aren't
accepted by MSVC.
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Report PIGLIT_FAIL if there's an unexpected error.
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Some formats had "implied" precision which relied on the driver picking a
specific hw format (e.g. RGB4 and RGB5 both relied on the driver picking 565).
These tests will now be skipped (for the exact test).
Others didn't have any implied precision but relied on the driver picking a
format with at least 8 bits even though the internal format only implied 4.
Also, the exact tests for srgb8 and srgb8_alpha8 were failing too due to using
GL_BYTE instead of GL_UNSIGNED_BYTE.
With these fixes the only tests llvmpipe is failing seem to be the exact
GL_RGB8_SNORM and GL_RGB16_SNORM ones (1, 2 or 4 channels work and the errors
seem to be one bit, so maybe something triggers a conversion somewhere using a
different signed conversion formula).
v2: fix indentation
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
The un_to_float function was trying to get the maximum value given a
number of bits by shifting ~0ul by the number of bits. For the
GL_UNSIGNED_INT type this function was also being used to get a
maximum value for a 32-bit quantity. However, on a 32-bit build this
means shifting a 32-bit integer (unsigned long is 32-bit there) by 32
bits. The C spec leaves it undefined what happens if you shift by a
count greater than or equal to the width of the type. GCC takes
advantage of that to make the shift a no-op, so the maximum was ending
up as zero and the test failed.
This patch makes it shift ~0u in the other direction so that it
doesn't matter what size unsigned int is and it won't try to shift by
32.
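A sketch of the change (the function name is from the test; the exact
signature is assumed):

    /* Before (undefined for bits == 32 when unsigned long is 32-bit):
     *     unsigned max = ~(~0ul << bits);
     * After (well-defined for 1 <= bits <= 32): */
    static float
    un_to_float(unsigned un, int bits)
    {
        unsigned max = ~0u >> (32 - bits);
        return (float) un / (float) max;
    }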
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=83695
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Tested-by: Tapani Pälli <tapani.palli@intel.com>
|
|
Since we can't give a 12-bit input, trying to do an exact test doesn't
really make sense.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
This test tests most of the color conversions possible on glTexImage. The
test executable also takes a --benchmark flag to allow you to benchmark any
of the color conversion operations. It does not test any of the transfer
operations, nor does it test compressed or depth/stencil texture formats.
Signed-off-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
In the readpixels_rgba_as_lum test, use three values which don't nicely
convert to unsigned bytes, then sum them and assume that we'll be very
close to the floating-point sum. If the implementation rounds down during
texture upload, the test used to fail. This switches it to use the
pixman-standard tolerance of 3.0 / 255.
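A sketch of the comparison (illustrative names):

    #include <math.h>
    #include <stdbool.h>

    /* Compare the observed luminance against the floating-point R+G+B
     * sum with a pixman-style tolerance instead of exact equality. */
    static bool
    luminance_matches(float r, float g, float b, float observed)
    {
        return fabsf((r + g + b) - observed) <= 3.0f / 255.0f;
    }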
Signed-off-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
llvmpipe, which is probably the only driver that implements this
correctly, passes.
|
|
The test creates a texture with a block for each of the possible modes
of the half-float formats of BPTC and then retrieves it as a
half-float texture via glGetTexImage. This should test that the
decompressor works correctly with all possible code paths. It doesn't
test rendering the texture because that is a bit more tricky to do
accurately as it is not possible to create a half-float render target.
The data was generated randomly with an external program which also
generated the expected results. There are expected results for both
the signed and unsigned formats.
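A sketch of the readback step (assuming a context exposing
GL_ARB_texture_compression_bptc and glext.h for the enums):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Reading the compressed texture back as half-float runs the
     * decompressor over every block/mode that was uploaded. */
    static void
    decompress_to_half_float(GLushort *half_pixels)
    {
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_HALF_FLOAT, half_pixels);
    }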
|
|
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
|
|
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
|