CVT reduced blanking modes are typically only seen on digital connections to
LCDs, but there are some monitors that report them as supported over the
VGA connector too, which is perfectly legitimate, electrically speaking.
|
|
This allows the autoconfig logic to fall through sanely on non-PCI machines,
which importantly includes Xen virtual machines.
|
|
Well, kinda. Strictly, we prefer M_T_BUILTIN modes most strongly, since those
are modes where the driver has said it absolutely can't do anything else (VBE).
Then we look for user-defined modes, i.e. modelines from the config file. Then
we consider modes reported by the monitor via EDID. Finally, if nothing has
matched yet, we consider the default mode pool.
Within each of these classes, modes with the M_T_PREFERRED bit take priority
over the others.
This logic ensures that the timings sent to the monitor exactly match the
timings it reported as supported, which occasionally don't match the numbers
you might get for that mode from CVT or GTF.
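A minimal sketch of that ordering as a comparison routine. The M_T_* bits and
the types here are illustrative stand-ins for the xf86 definitions, not the
server's actual code:

#include <stdio.h>

/* Illustrative mode-type bits; the values are arbitrary, only the
 * ordering logic matters. */
#define M_T_BUILTIN   (1 << 0)  /* driver can do nothing else (VBE)  */
#define M_T_PREFERRED (1 << 1)  /* preferred within its class        */
#define M_T_USERDEF   (1 << 2)  /* modeline from the config file     */
#define M_T_DRIVER    (1 << 3)  /* reported by the monitor via EDID  */

typedef struct { const char *name; int type; } Mode;

/* Class rank: builtin > user-defined > EDID-reported > default pool. */
static int classRank(const Mode *m)
{
    if (m->type & M_T_BUILTIN) return 3;
    if (m->type & M_T_USERDEF) return 2;
    if (m->type & M_T_DRIVER)  return 1;
    return 0;
}

/* Returns >0 when a is preferred over b. */
static int preferMode(const Mode *a, const Mode *b)
{
    int d = classRank(a) - classRank(b);
    if (d)
        return d;
    /* same class: the M_T_PREFERRED bit breaks the tie */
    return !!(a->type & M_T_PREFERRED) - !!(b->type & M_T_PREFERRED);
}

int main(void)
{
    Mode edid = { "1280x1024 (EDID)", M_T_DRIVER | M_T_PREFERRED };
    Mode pool = { "1280x1024 (default pool)", 0 };
    printf("%s wins\n", preferMode(&edid, &pool) > 0 ? edid.name : pool.name);
    return 0;
}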
|
|
This allows the server to guess an appropriate initial virtual size and
resolution. The heuristic is to select the largest driver-reported mode
that matches the monitor's physical aspect ratio. We revalidate this
estimate after mode validation, since we may have filtered away all
modes that would fill that size.
Also, the EDID preferred timing is now marked as M_T_PREFERRED as well.
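Roughly, the heuristic looks like the following sketch; the function name,
tolerance, and Mode layout are hypothetical, not the server's code:

#include <math.h>
#include <stddef.h>

typedef struct { int hdisplay, vdisplay; } Mode;

/* Largest driver-reported mode whose aspect ratio matches the physical
 * widthmm/heightmm from EDID; NULL if none does, in which case the
 * caller re-guesses after mode validation. */
static const Mode *
guessInitialSize(const Mode *modes, int nmodes, int widthmm, int heightmm)
{
    double target = (double)widthmm / heightmm;
    const Mode *best = NULL;

    for (int i = 0; i < nmodes; i++) {
        double ratio = (double)modes[i].hdisplay / modes[i].vdisplay;
        if (fabs(ratio - target) > 0.05)    /* allow for EDID rounding */
            continue;
        if (!best || modes[i].hdisplay * modes[i].vdisplay >
                     best->hdisplay * best->vdisplay)
            best = &modes[i];
    }
    return best;
}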
|
|
Always add a mouse driver instance configured to send core events, unless
a core pointer already exists using either the mouse or void drivers. This
handles the laptop case where the config file only specifies, say,
synaptics, which causes the touchpad to work but not the pointing stick.
We don't double-instantiate the mouse driver to avoid the mouse moving twice
as fast, and we skip this logic when the user asked for a void core pointer
since that probably means they want to run with no pointer at all.
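The rule reduces to a check like this sketch; the InputInfo layout and the
helper name are hypothetical stand-ins for the config code:

#include <stdbool.h>
#include <string.h>

typedef struct { const char *driver; bool sendCoreEvents; } InputInfo;

/* Add an implicit core mouse only when no configured core pointer
 * already uses the mouse or void drivers. */
static bool
needImplicitMouse(const InputInfo *devs, int ndevs)
{
    for (int i = 0; i < ndevs; i++) {
        if (!devs[i].sendCoreEvents)
            continue;
        if (!strcmp(devs[i].driver, "mouse"))   /* would double the motion */
            return false;
        if (!strcmp(devs[i].driver, "void"))    /* user wants no pointer   */
            return false;
    }
    return true;    /* e.g. only synaptics configured: add a core mouse */
}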
|
|
Also, synchronize that list with the list for the pseudoconfig file used
when starting with no config file. These really need to be better unified.
|
|
This was removed in the patch for bug #5386, but is still useful.
|
|
Base EDID only lets you specify the maximum dotclock in tens of MHz, which
is too fuzzy for some monitors. 1600x1200@60 is just over 160MHz, but if
the monitor really can't handle any mode at 170MHz, then 160 is more
correct. Fix up the EDID block before the driver can see it in this case,
so we don't spuriously reject modes.
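One plausible shape for such a fixup, using the base EDID 1.3 layout (the max
dotclock lives in byte 9 of the range-limits descriptor, in 10 MHz units); the
entry point and its argument here are hypothetical:

#include <stdint.h>

#define EDID_DESC_BASE  0x36  /* first of four 18-byte detailed descriptors */
#define EDID_DESC_SIZE  18
#define DESC_TAG_RANGES 0xFD  /* monitor range limits descriptor */

/* Raise the claimed max dotclock for a quirked monitor so that modes it
 * actually supports (like 1600x1200@60 at ~162 MHz) survive validation. */
static void
fixupMaxDotclock(uint8_t *edid, int maxclock_mhz)
{
    for (int i = 0; i < 4; i++) {
        uint8_t *d = edid + EDID_DESC_BASE + i * EDID_DESC_SIZE;
        /* display descriptors have a zero pixel clock and a tag in byte 3 */
        if (d[0] == 0 && d[1] == 0 && d[3] == DESC_TAG_RANGES) {
            d[9] = (uint8_t)((maxclock_mhz + 9) / 10);  /* round up */
            return;
        }
    }
}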
|
|
The X gamma is used to set the output ramp of the card. Setting a 2.2 output
gamma going into a 2.2 monitor gives an effective gamma of 4.84, which is
very much not what you want.
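A quick standalone check of that arithmetic (gamma stages compose by
multiplying exponents; this is not server code):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double card = 2.2, monitor = 2.2, x = 0.5;

    /* x^2.2 in the card's ramp followed by x^2.2 in the monitor is x^4.84 */
    printf("effective gamma: %.2f\n", card * monitor);
    printf("0.5 -> %.3f with a linear ramp, %.3f double-gamma'd\n",
           pow(x, monitor), pow(pow(x, card), monitor));
    return 0;
}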
|
|
Regenerate from glX_API.xml 1.3 from Mesa. The glproto package and libGL
(from Mesa) must also be updated.
|
|
X.Org Bugzilla #7641 <https://bugs.freedesktop.org/show_bug.cgi?id=7641>
Patch #6349 <https://bugs.freedesktop.org/attachment.cgi?id=6349>
|
|
pending resolution of #8232.
|
|
Added a non-zero test for one of the diagonal values.
|
|
It now recognizes scaled variants of the identity matrix, too.
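The check amounts to something like this sketch (a hypothetical 3x3 layout,
not the actual code): a uniform, non-zero diagonal with all-zero off-diagonals.

#include <stdbool.h>

/* A matrix is a scaled identity iff the diagonal entries are equal and
 * non-zero (hence the non-zero test) and every off-diagonal is zero. */
static bool
isScaledIdentity(const float m[3][3])
{
    float s = m[0][0];

    if (s == 0.0f)
        return false;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            if (m[i][j] != (i == j ? s : 0.0f))
                return false;
    return true;
}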
|
|
broken for any 32-bit X server running on a 64-bit kernel), so #ifdef
them out for now. The PCI rework tree will make all this crap go away,
so I think we can tolerate the extra #ifdef for the next release.
|
|
instead of `/bin/sh /etc/init.d/xprint get_xpserverlist`
- allows the initscript to specify its own shell on its #! line
- allows disabling of XPSERVERLIST by making the script non-executable
* Allow files to be installed by using dist_*_DATA instead of EXTRA_DIST.
Also, use dist_*_SCRIPTS to install scripts.
* Fix minor typos in man pages.
|
|
See https://bugs.freedesktop.org/show_bug.cgi?id=7916
There may be a simpler, less intrusive fix that involves just rearranging
DRI locking between 2D and 3D drivers around VT switch.
|
|
Regenerate from glX_API.xml 1.2. Add infrastructure to support
GLX_SGI_swap_control for AIGLX when the DRI driver enables it. Tested
with R300.
|
|
There were two sets of bugs in the vertex program (ARB and NV)
protocol. First, several of the ARB functions were missing the
'doubles_in_order="true"' annotation. Second, after the ARB decided
that glVertexAttrib*ARB functions must not alias fixed-function state
for GLSL, Nvidia re-assigned GLX protocol opcodes for
glVertexAttrib*NV (circa September 2004). For some reason gl_API.xml
was never updated to reflect this, and the updated version of the
GL_NV_vertex_program spec never made it into the registry.
This is just a server-side regeneration from gl_API.xml version 1.68.
|
|
GLX_EXT_texture_from_pixmap should always be enabled.
GLX_SGI_video_sync is only for direct rendering and should never
appear in the server's string.
|
|
GLX protocol isn't supported for GLX_SGI_swap_control or
GLX_SGI_video_sync. Remove them from the list of available extensions
until they are supported.
|
|
Re-generate from gl_API.xml 1.65. This provides the missing bits for
GL_EXT_texture_filter_anisotropic and GL_EXT_blend_equation_separate.
Enable those extensions.
|
|
Implement glGetProgramStringARB and glGetProgramStringNV. With these
functions implemented, GL_ARB_{vertex,fragment}_program,
GL_NV_{vertex,fragment}_program, and related extensions can be enabled.
|
|
Fill in __glXDisp_GetCompressedTexImageARB and
__glXDispSwap_GetCompressedTexImageARB to finish support for
GL_ARB_texture_compression. With this extension (and the related
compression extensions), the server-side GLX supports all of the
protocol for GL 1.4. w00t!
The bad news is that this has received only minimal testing, and Mesa
does not contain any good tests for GL_ARB_texture_compression.
|
|
GL/glx/g_disptab_EXT.h. Unfortunately GL/glx/g_disptab.h has to be
kept around a bit longer.
|
|