Age | Commit message (Collapse) | Author | Files | Lines |
|
|
|
|
|
|
|
|
|
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Laura Ekstrand <laura@jlekstrand.net>
|
|
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Laura Ekstrand <laura@jlekstrand.net>
|
|
By the time piglit_init() is executed, the context has been created and the
dispatch table has been initialised.
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Since this test only uses pre-compressed DXT5 data, it can safely run
with GL_ANGLE_texture_compression_dxt5. This extension is advertised
by at least the Mesa i965 driver even when libtxc_dxtn is not
installed.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
OpenGL actually allows creating a cube map with different sizes and
formats per face, though such a texture can't actually be used for texturing.
This test checks that no unexpected error is raised when such a
texture is created and that the GL actually keeps track of the per-face
sizes.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
|
|
Since GL_ABGR_EXT was extension number 1 to the GL spec, it didn't take
packed formats into account. As far as I can tell from the way the packed
formats extensions are written, packed formats with GL_ABGR_EXT aren't
allowed by the spec. NVIDIA allows it but AMD doesn't, and our driver
hasn't allowed it with UNSIGNED_INT_5_5_5_1 as of c471b09bf4. Let's stop
testing invalid things.
Tested-by: Mark Janes <mark.a.janes@intel.com>
Reviewed-by: Samuel Iglesias Gonsalvez <siglesias@igalia.com>
|
|
If the pbo option is given on the command line then the image sub-data
will be uploaded from a PBO rather than directly from the malloc'd
array. This is worth testing because drivers often have different
code-paths for PBO uploads.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
If texsubimage is passed cube_map_array on the command line it will
try updating a subregion of a cube map array. All of the faces of all
of the layers of the texture are rendered using a special vertex
shader to modify the texture coordinates so that they pick a
particular face based on the z coordinate modulo 6.
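The face selection described above amounts to the following, shown here as a plain C sketch (the helper name is hypothetical; the test actually does this in a vertex shader):

```c
#include <assert.h>

/* Hypothetical helper (not the test's actual code): map a cube map
 * array "z" coordinate to a (layer, face) pair the way the commit
 * describes -- the face is the z coordinate modulo 6. */
static void z_to_layer_face(int z, int *layer, int *face)
{
        *face = z % 6;   /* +X, -X, +Y, -Y, +Z, -Z order */
        *layer = z / 6;  /* which cube in the array */
}
```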
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
textureSize.c: In function 'generate_GLSL':
textureSize.c:367:22: warning: 'gs' may be used uninitialized in this function [-Wmaybe-uninitialized]
if (!vs || (gs_code && !gs) || !fs)
^
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
tex-miplevel-selection.c: In function 'piglit_init':
tex-miplevel-selection.c:919:58: warning: 'num_layers' may be used uninitialized in this function [-Wmaybe-uninitialized]
(gltarget != GL_TEXTURE_3D && layer == TEST_LAYER % num_layers)) {
^
tex-miplevel-selection.c:486:8: warning: 'type_str' may be used uninitialized in this function [-Wmaybe-uninitialized]
if (!strcmp(type_str, "float"))
^
tex-miplevel-selection.c:764:4: warning: 'target_str' may be used uninitialized in this function [-Wmaybe-uninitialized]
sprintf(fscode, GL3_FS_CODE, version, target_str,
^
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
If the 'array' command line option is passed to the texsubimage test
it will now try updating subregions of 1D and 2D array textures. This
requires a shader to render.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
Previously the test was updating sub-regions of the test texture with
the same values that were already in the texture. Therefore an
implementation that does nothing on glTexSubImage2D would pass, which
doesn't seem very useful. This patch makes it create two reference
images with different values. At each update the entire texture is
recreated with the values from the first image and then a sub region
is updated with the values from the second image. The values are then
compared as before via glReadPixels but at each texel it decides
whether to compare with the first or second image depending on whether
the texel is in the updated region.
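The per-texel selection rule could look roughly like this (a minimal sketch with hypothetical names, assuming RGBA8 data; the test's actual helpers may differ):

```c
#include <assert.h>
#include <stdbool.h>

/* Texels inside the updated sub-region are checked against the second
 * reference image, everything else against the first. */
static bool in_region(int x, int y, int rx, int ry, int rw, int rh)
{
        return x >= rx && x < rx + rw && y >= ry && y < ry + rh;
}

static const unsigned char *
expected_texel(const unsigned char *img1, const unsigned char *img2,
               int width, int x, int y,
               int rx, int ry, int rw, int rh)
{
        const unsigned char *img =
                in_region(x, y, rx, ry, rw, rh) ? img2 : img1;
        return img + (y * width + x) * 4; /* 4 bytes per RGBA8 texel */
}
```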
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Previously when testing a 3D texture the test would just draw a single
image with the width and height of the texture and the z coordinates
set to span across the depth. This wouldn't end up drawing all of the
texels in the texture so instead it will now render all of the images
in a vertical line. In order to do this the test needs a taller window
than the default 160 pixels.
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
When the texture target is GL_TEXTURE_3D the test wasn't updating a
sub-region of the texture due to what looks like a typo. The test was
passing anyway because the data it uploads is the same as the original
data so doing nothing is valid behaviour according to the test.
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Silences Coverity report about missing break in switch.
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Fix self assignment defect reported by Coverity.
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
This way we can actually reuse it in {arb,ext}_timer_query
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
|
|
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
|
|
This avoids having to flush rendering after every texture operation.
Combined with a vc4 driver fix, reduces runtime of the test on
simulation from 160s to 44s.
It's still provoking a lot of flushes, because each slice of the 3D
texture upload is being DISCARD_RANGE-mapped individually.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
There are only 4x4 texels, so let's just draw each one, and actually
sample them all.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Editing was irritating because indentation was wrong.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
IIRC they are not a standard C feature. At any rate, they aren't
accepted by MSVC.
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Report PIGLIT_FAIL if there's an unexpected error.
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Some formats had some "implied" precision which relied on the driver picking a
specific hw format (e.g. RGB4 and RGB5 both relied on driver picking 565).
These tests will now be skipped (for the exact test).
Others didn't have any implied precision but relied on driver picking a format
with at least 8 bits even though the internal format only implied 4.
Also, the exact tests for srgb8 and srgb8_alpha8 were failing too due to using
GL_BYTE instead of GL_UNSIGNED_BYTE.
With these fixes the only tests llvmpipe is failing seem to be the exact
GL_RGB8_SNORM and GL_RGB16_SNORM ones (1, 2 or 4 channels work, and the
errors seem to be one bit, so maybe something triggers a conversion somewhere
using a different signed conversion formula).
v2: fix indentation
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
The un_to_float function was trying to get the maximum value given a
number of bits by shifting ~0ul by the number of bits. For the
GL_UNSIGNED_INT type this function was also being used to get a
maximum value for a 32-bit quantity. However on a 32-bit build this
would mean that it is shifting a 32-bit integer (unsigned long is
32-bit) by 32 bits. The C spec leaves it undefined what happens if you
do a shift that is greater than the number of bits in the type. GCC
takes advantage of that to make the shift a no-op so the maximum was
ending up as zero and the test fails.
This patch makes it shift ~0u in the other direction so that it
doesn't matter what size unsigned int is and it won't try to shift by
32.
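A minimal illustration of the fix (the helper names follow the commit's description; the exact code in the test may differ):

```c
#include <assert.h>

/* Shifting ~0ul left by 32 is undefined behaviour when unsigned long
 * is 32 bits wide, and GCC may compile it to a no-op, yielding 0.
 * Shifting ~0u right by (32 - bits) instead is well defined for
 * 1 <= bits <= 32 and never shifts by the full width of the type. */
static unsigned max_for_bits(unsigned bits)
{
        return ~0u >> (32 - bits);
}

static float un_to_float(unsigned value, unsigned bits)
{
        return (float) value / (float) max_for_bits(bits);
}
```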
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=83695
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Tested-by: Tapani Pälli <tapani.palli@intel.com>
|
|
Since we can't give a 12-bit input, trying to do an exact test doesn't
really make sense.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
This test tests most of the color conversions possible on glTexImage. The
test executable also takes a --benchmark flag to allow you to benchmark any
of the color conversion operations. It does not test any of the transfer
operations, nor does it test compressed or depth/stencil texture formats.
Signed-off-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
In the readpixels_rgba_as_lum test, use three values which don't nicely
convert to unsigned bytes, then sum them and assume that we'll be very
close to the floating-point sum. If the implementation rounds down during
texture upload, the test used to fail. This switches it to use the
pixman-standard tolerance of 3.0 / 255.
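The tolerance check amounts to something like the following sketch (names are hypothetical, not the test's actual code):

```c
#include <assert.h>
#include <math.h>
#include <stdbool.h>

/* The pixman-standard tolerance mentioned above: allow a channel to be
 * off by up to 3 levels of an 8-bit value. */
static bool channel_matches(float actual, float expected)
{
        return fabsf(actual - expected) <= 3.0f / 255.0f;
}
```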
Signed-off-by: Jason Ekstrand <jason.ekstrand@intel.com>
|
|
llvmpipe, which is probably the only driver that implements this
correctly, passes.
|
|
The test creates a texture with a block for each of the possible modes
of the half-float formats of BPTC and then retrieves it as a
half-float texture via glGetTexImage. This should test that the
decompressor works correctly with all possible code paths. It doesn't
test rendering the texture because that is a bit more tricky to do
accurately as it is not possible to create a half-float render target.
The data is generated randomly with an external program which also
generated the expected results. There are expected results for both
the signed and unsigned formats.
|
|
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
|
|
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
|
|
This creates a compressed texture with a block for each of 8 modes of
RGBA_UNORM BPTC compression. The texture is then both rendered and
read back via glGetTexImage and compared with the expected values.
|
|
GL_REPEAT is not legal for the rectangle targets, and because the test uses
sampler objects, the default GL_REPEAT value will be used even for rectangle
targets because, unlike texture objects, samplers can't be initialized to the
legal CLAMP_TO_EDGE value. According to the spec the texture actually should be
treated as incomplete in this case which mesa does not do (and I don't know if
anyone bothers enough to fix this) but some drivers (like llvmpipe) might
not treat unnormalized coords correctly in this case.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
|
|
This test verifies the bug fix in the i965 driver in Mesa commit 984a02b.
It also reproduces another bug in the driver with the GL_DEPTH_COMPONENT16
internal format.
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Fix a typo so that GL_PROXY_TEXTURE_RECTANGLE test actually tests that.
Signed-off-by: Jon TURNEY <jon.turney@dronecode.org.uk>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
|
|
rather than continuing. As it was, if something is majorly broken in
the driver, this test could report nearly 60,000 failure messages.
The results.json file is pretty huge in this case.
|
|
Always draw a (mode, filter) test pattern at the same window position,
regardless of border mode, supported, etc. This makes it a little
easier to do visual inspections.
In non-auto mode, print names of wrap modes to help identify the
test patterns. Ideally, we'd print this in the window instead of
the terminal but piglit doesn't have a text-drawing feature.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
|
|
There are no longer any source files in tests/util that are specific to
a particular OpenGL API. In other words, all OpenGL utility sources in
tests/util are now "common" and shared by all OpenGL APIs. So remove the
'common' in filename 'piglit-util-gl-common.h'.
Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
|
|
|
|
Except for textureCubeGradARB, which the test doesn't support yet.
|
|
|
|
+ off-by-one error fix in check_result
|