Age | Commit message | Author | Files | Lines |
|
Signed-off-by: Yaakov Selkowitz <yselkowitz@users.sourceforge.net>
Reviewed-by: Søren Sandmann Pedersen <ssp@redhat.com>
|
|
This affects only Cygwin (on which only spiceqxl is supported), where
drivers must be linked against the Xorg implib. On other systems,
XORG_LIBS will be empty.
Signed-off-by: Yaakov Selkowitz <yselkowitz@users.sourceforge.net>
Reviewed-by: Søren Sandmann Pedersen <ssp@redhat.com>
|
|
Technically, the xorg/ prefix should not be specified. It generally works,
because xorg/ is usually hung off /usr/include. This enables
compilation that correctly respects pkg-config --cflags xorg-server.
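As a hedged illustration (assuming the usual layout, where pkg-config --cflags
xorg-server adds -I/usr/include/xorg), driver sources then name the SDK headers
without the prefix:

    #include <xf86.h>        /* resolved via the pkg-config include path */
    /* not #include <xorg/xf86.h>, which only works when xorg/ happens to
       sit directly under /usr/include */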
|
|
They depend on the PCI revision which is not available for Xspice.
|
|
This lets us continue to support older Xorg releases.
This reverts 4f37cd85 and partially reverts 4a43bd4.
|
|
When the device or the client is not capable of composite commands or a8
surfaces, don't issue these commands.
|
|
This commit adds support for using the new Composite command in spice
protocol 0.12.0. This command is similar to the Composite request in
the X Render protocol.
By implementing the UXA composite stubs, we get acceleration for most
common Composite requests, including glyphs.
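As a hedged sketch of how this plugs into UXA (the hook names follow the
uxa_driver_t interface; the qxl-side function names and the uxa field are
assumptions, not the verbatim driver code):

    uxa_driver_t *uxa = qxl->uxa;

    /* Render Composite requests, including glyphs, go through these hooks
       and are translated into spice Composite commands */
    uxa->check_composite   = qxl_check_composite;
    uxa->prepare_composite = qxl_prepare_composite;
    uxa->composite         = qxl_composite;
    uxa->done_composite    = qxl_done_composite;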
|
|
a8 surfaces are now supported with the 8BIT_A format in spice, so we
can support 8-bit pixmaps.
|
|
With the upcoming Render changes, we can no longer assume that the
fourth channel of images is unused.
|
|
If prepare_composite() fails, we need to free the temporary mask
before returning.
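A minimal sketch of the intended error path (the variable names are
illustrative, not the actual uxa-render.c code):

    if (!(*uxa_screen->info->prepare_composite) (op, pSrc, pLocalMask, pDst,
                                                 pSrcPix, pMaskPix, pDstPix))
    {
        if (pLocalMask != pMask)
            FreePicture (pLocalMask, 0);   /* free the temporary mask */
        return -1;                         /* signal that we can't accelerate */
    }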
|
|
It is possible for a pixmap to not be in video memory after
uxa_clear_pixmap() was called. When this happens, we need to destroy
the pixmap and return 1 to indicate that the operation can't be
accelerated.
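A rough sketch of that check (uxa_pixmap_is_offscreen() is the usual UXA
residency test; the surrounding code and argument order are assumptions):

    uxa_clear_pixmap (pScreen, pPixmap);
    if (!uxa_pixmap_is_offscreen (pPixmap))
    {
        (*pScreen->DestroyPixmap) (pPixmap);
        return 1;   /* the operation can't be accelerated */
    }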
|
|
Make all memory allocation functions take a string that will explain
what the memory will be used for. This allows debug print statements
to be added to the allocation functions and could later potentially be
used for more detailed statistics.
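For illustration, the changed allocator could look roughly like this (a sketch
of the idea, not the verbatim qxl code; the failure message is an assumption):

    void *
    qxl_allocnf (qxl_screen_t *qxl, unsigned long size, const char *usage)
    {
        void *result = mspace_malloc (qxl->mem, size);

        if (!result)
            ErrorF ("qxl: failed to allocate %lu bytes for %s\n", size, usage);

        return result;
    }

    /* callers describe what the memory is for */
    drawable = qxl_allocnf (qxl, drawable_size, "composite drawable");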
|
|
The X server introduces a new screen-specific privates infrastructure, moving
PRIVATE_PIXBUF over there and breaking qxl, which was using the wrong
dixPrivatesSize to access it: there is a new array of screen-specific/not
flags, and PRIVATE_PIXBUF is screen-specific.
xorg-xserver commit: 9d457f9c55f12106ba44c1c9db59d14f978f0ae8
This fix breaks backward compatibility. The next release will only work with
xorg-xserver >= 1.12.99.901
RHBZ: 844463
|
|
Undo most of the damage from 7f8d3ed05cbe891
|
|
|
|
This makes gnome-settings-daemon not automatically switch the resolution to
the largest available one.
|
|
|
|
|
|
During startup, the monitors are not yet enabled/set, so we can avoid
sending an invalid/transient config.
|
|
Avoid sending many monitor config changes during qxl_create_desired_modes()
|
|
|
|
|
|
|
|
|
|
Most importantly, don't allow randr resize if it is too large for the
currently allocated mspace. Ifdeffed out almost-working code for
reallocating the primary mspace (qxl->mem).
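A hedged sketch of the size check (the mspace size field is an assumption; the
real code sizes the request against the primary mspace, qxl->mem):

    /* refuse RandR resizes that would not fit in the currently allocated mspace */
    if ((unsigned long) width * height * (pScrn->bitsPerPixel / 8) >
        qxl->primary_mem_size)
        return FALSE;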
|
|
Taken from VirtualBox, following exactly the same logic:
gnome-settings-daemon relies on the serial given in the edid to set the
resolution to the same one last used on that screen. Since this is not
what we want with a virtual machine, we produce a serial that is
different for every resolution.
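A minimal sketch of the idea (the offsets follow the EDID 1.3 layout; the
variable names are illustrative):

    /* make the EDID serial depend on the current mode so gnome-settings-daemon
       treats every resolution as a different monitor */
    uint32_t serial = ((uint32_t) width << 16) | (uint32_t) height;

    edid[0x0c] = serial & 0xff;            /* ID serial number, little endian */
    edid[0x0d] = (serial >> 8) & 0xff;
    edid[0x0e] = (serial >> 16) & 0xff;
    edid[0x0f] = (serial >> 24) & 0xff;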
|
|
|
|
randr screen resize callback
|
|
|
|
Additionally prevents disabling of the primary crtc.
|
|
Send a MonitorsUpdate - this should definitely be split into its own
patch.
Require revision 4 - this is needed just for MonitorsUpdate, and should go
with it.
Adds a new config option: OPTION_NUM_HEADS, defaulting to 4.
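As a sketch of how such an option is typically declared and read in an xf86
driver (the exact table in the qxl driver may differ):

    static const OptionInfoRec qxl_options[] = {
        { OPTION_NUM_HEADS, "NumHeads", OPTV_INTEGER, {0}, FALSE },
        { -1, NULL, OPTV_NONE, {0}, FALSE }
    };

    int num_heads = 4;   /* default number of heads */
    xf86GetOptValInteger (options, OPTION_NUM_HEADS, &num_heads);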
|
|
|
|
send/not send destroy message
|
|
|
|
|
|
Unfortunately, I don't have a stack trace showing any segfault.
|
|
|
|
Both result from ProcFreePixmap being called in unanticipated
circumstances:
cache->all_surfaces is NULL
surface->host_image is NULL
To reproduce, the following scripts work in tandem:
create xterms, destroy them
chvt
============ xterm_test ============
import os
import subprocess
import time
import atexit

env = os.environ
env['DISPLAY'] = ':0.0'
xterms = []

def kill_all():
    print "killing xterms"
    for x in xterms:
        x.kill()
    del xterms[:]

atexit.register(kill_all)

while True:
    for i in range(10):
        xterms.append(subprocess.Popen(['xterm', '+u8']))
    time.sleep(1)
    kill_all()
============= chvt_test_helper ============
XPID=`pgrep Xorg`
XTTY=`find /proc/$XPID/fd -lname "/dev/tty*"`
XTTY=`readlink $XTTY`
XTTY=${XTTY#/dev/tty}
echo "chvt 1 (from Xorg)"
chvt 1
sleep 2
echo "chvt $XTTY (to Xorg)"
chvt $XTTY
============== chvt_test =================
while true; do ./chvt-test ; sleep 3; done
|
|
|
|
|
|
(s/SCREEN_ARG_TYPE/SCRN_ARG_TYPE/)
|
|
|
|
|
|
|
|
hard coding 100.
This was found while building with a modified X server, one with a PixmapRec size of 224, not 64 :-/.
|
|
My apologies for the churn; this is, I think, a slightly better patch than
my previous patch, 'Process watches even when there is no X activity', in that
it avoids doing an extra polling select when we're idle.
|
|
In summary, on vt enter we still:
  - reset
  - recreate memory slots
  - clear our mspace allocators
  - and then do what switch mode below says
On vt leave we still:
  - reset (this is redundant since the first VGA access will trigger a
    reset on the device side)
On switch mode however we only:
  - destroy primary surface
  - create primary surface (different size)
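A hedged sketch of the simplified switch-mode path (the destroy helper and the
field/signature details are assumptions; only qxl_surface_cache_create_primary
is named elsewhere in this log):

    /* switch mode: only replace the primary surface; memory slots and the
       mspace allocators are left untouched */
    qxl_surface_cache_destroy_primary (qxl->surface_cache, qxl->primary);
    qxl->primary = qxl_surface_cache_create_primary (qxl->surface_cache, mode);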
|
|
The primary surface, i.e. qxl->primary, the only surface with id==0, is
allocated in qxl_surface_cache_create_primary with prev==next==NULL.
Unlinking it wrongly produced cache->free_surfaces == NULL. This
was not a problem because unlinking the primary only happened in
switch_host, which then called surface_cache_init. In a following commit
switch_host is simplified to destroy-primary+create-primary, so this bug
needs to be fixed first, to avoid leaking surfaces and ending up with no
surfaces available.
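An illustrative sketch of the fix (the helper name and cache fields are
assumptions based on this message, not the verbatim driver code):

    /* the primary (id == 0) is created with prev == next == NULL and is never
       linked into cache->free_surfaces, so it must not be unlinked */
    if (surface->id != 0)
        unlink_surface (cache, surface);   /* hypothetical helper for the rest */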
|
|
|
|
|