### libGL (3D) Driver

A DRI-aware 3D driver, currently based on Mesa.


#### Where does the 3D Driver reside?

Normally libGL loads 3D DRI drivers from the `/usr/X11R6/lib/modules/dri` directory, but the search path can be overridden by setting the `LIBGL_DRIVERS_PATH` environment variable.
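
A minimal sketch of how such a lookup might work, in C. The default directory and the `LIBGL_DRIVERS_PATH` variable come from the text above; the `resolve_driver_dir()` helper is hypothetical, not the actual libGL code:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: where should DRI drivers be loaded from? */
    static const char *resolve_driver_dir(void)
    {
        const char *dir = getenv("LIBGL_DRIVERS_PATH"); /* user override */
        if (dir != NULL && dir[0] != '\0')
            return dir;
        return "/usr/X11R6/lib/modules/dri";            /* built-in default */
    }

    int main(void)
    {
        printf("loading DRI drivers from %s\n", resolve_driver_dir());
        return 0;
    }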

The source for the DRI-aware 3D driver resides in `xc/lib/GL/mesa/src/drv`.


### The DRI driver initialization process

 * The whole process begins when an application calls glXCreateContext ([[xc/lib/GL/glx/glxcmds.c|http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/dri/xc/xc/lib/GL/glx/glxcmds.c?rev=HEAD&content-type=text/vnd.viewcvs-markup]]). glXCreateContext is just a stub that calls [[CreateContext|CreateContext]]. The real work begins when [[CreateContext|CreateContext]] calls `__glXInitialize` ([[xc/lib/GL/glx/glxext.c|http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/dri/xc/xc/lib/GL/glx/glxext.c?rev=HEAD&content-type=text/vnd.viewcvs-markup]]).
 * The driver-specific initialization process starts with `__driCreateScreen`. Once the driver is loaded (via dlopen), dlsym is used to get a pointer to this function; a sketch of this bootstrap follows the list. The function pointer for each driver is stored in the _createScreen_ array in the `__DRIdisplay` structure. This initialization is done in driCreateDisplay ([[xc/lib/GL/dri/dri_glx.c|http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/dri/xc/xc/lib/GL/dri/dri_glx.c?rev=HEAD&content-type=text/vnd.viewcvs-markup]]), which is called by `__glXInitialize`. Note that `__driCreateScreen` really is the bootstrap of a DRI driver. It's the only function[^1] in a DRI driver that libGL directly knows about. All the other DRI functions are accessed via the `__DRIdisplayRec`, `__DRIscreenRec`, `__DRIcontextRec` and `__DRIdrawableRec` structs defined in [[xc/lib/GL/glx/glxclient.h|http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/dri/xc/xc/lib/GL/glx/glxclient.h?rev=HEAD&content-type=text/vnd.viewcvs-markup]]. Those structures are pretty well documented in that file.
 * After performing the `__glXInitialize` step, [[CreateContext|CreateContext]] calls the createContext function for the requested screen. Here the driver creates two data structures. The first, GLcontext, contains all of the device-independent state, device-dependent constants (i.e., texture size limits, light limits, etc.), and device-dependent function tables. The driver also allocates a structure that contains all of the device-dependent state; the GLcontext structure links to it via the [[DriverCtx|DriverCtx]] pointer, and the device-dependent structure has a pointer back to the GLcontext structure. The device-dependent structure is where the driver stores context-specific hardware state (register settings, etc.) for when context (in terms of OpenGL / X context) switches occur. This structure is analogous to the buffers where the OS stores CPU state when a program context switch occurs. The texture images, by the way, really are stored within Mesa's data structures. Mesa supports about a dozen texture formats, which happen to satisfy what all the DRI drivers need. So the texture format/packing is dependent on the hardware, but Mesa understands all the common formats. See Mesa/src/texformat.h. Gareth and Brian spent a lot of time on that.
 * createScreen (i.e., the driver-specific initialization function) is called for each screen from [[AllocAndFetchScreenConfigs|AllocAndFetchScreenConfigs]] ([[xc/lib/GL/glx/glxext.c|http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/dri/xc/xc/lib/GL/glx/glxext.c?rev=HEAD&content-type=text/vnd.viewcvs-markup]]). This is also called from `__glXInitialize`.
 * For all of the existing drivers, the `__driCreateScreen` function is just a wrapper that calls `__driUtilCreateScreen` ([[xc/lib/GL/dri/dri_util.c|http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/dri/xc/xc/lib/GL/dri/dri_util.c?rev=HEAD&content-type=text/vnd.viewcvs-markup]]) with a pointer to the driver's API function table (of type `__DriverAPIRec`). This creates a `__DRIscreenPrivate` structure for the display and fills it in (mostly) with the supplied parameters (i.e., screen number, display information, etc.). It also opens and initializes the connection to the DRM. This includes opening the DRM device, mapping the frame buffer (note: the DRM documentation says that the function used for this is called drmAddMap, but it is actually called drmMap), and mapping the SAREA; a sketch of this DRM setup follows the list. The final step is to call the driver's initialization function (the [[InitDriver|InitDriver]] field in the `__DriverAPIRec`, which is stored in the DriverAPI field of the `__DRIscreenPrivate`).
 * The [[InitDriver|InitDriver]] function does two broad things (at least in the Radeon and i810 drivers). It first verifies the versions of the services (XFree86, DDX, and DRM) that it will use. The driver then creates an internal representation of the screen and stores a pointer to it in the private field of the `__DRIscreenPrivate` structure. The driver-private data may include things such as mappings of MMIO registers, mappings of display and texture memory, information about the layout of video memory, chipset-revision-specific data (feature availability for the specific chip revision, etc.), and other similar data. This is the handle that identifies the specific graphics card to the driver (in case there is more than one card in the system that will use the same driver).
 * As noted above, after the `__glXInitialize` step [[CreateContext|CreateContext]] calls the createContext function for the requested screen. This is where it gets pretty complicated. I have only looked at the Radeon driver. radeonCreateContext ([[xc/lib/GL/mesa/src/drv/radeon/radeon_context.c|http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/dri/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_context.c?rev=HEAD&content-type=text/vnd.viewcvs-markup]]) allocates a GLcontext structure (actually `struct __GLcontextRec` from extras/Mesa/src/mtypes.h). Here it fills in function tables for virtually every OpenGL call. Additionally, the `__GLcontextRec` has pointers to buffers where the driver will store context-specific hardware state (textures, register settings, etc.) for when context (in terms of OpenGL / X context) switches occur. The `__GLcontextRec` (i.e., GLcontext in Mesa) doesn't have any buffers of hardware-specific data (except texture image data, if you want to be picky). All Radeon-specific, per-context data should be hanging off of the struct radeon_context. All the DRI drivers define a hardware-specific context structure (such as struct radeon_context, typedef'd to radeonContextRec, or struct mga_context_t, typedef'd to mgaContext). radeonContextRec has a pointer back to the Mesa `__GLcontextRec`, and Mesa's `__GLcontextRec->DriverCtx` pointer points back to the radeonContextRec. If we were writing all this in C++ (don't laugh) we'd treat Mesa's `__GLcontextRec` as a base class and create driver-specific derived classes from it; a sketch of this pattern follows the list. Inheritance like this is actually pretty common in the DRI code, even though it's sometimes hard to spot. These buffers are analogous to the buffers where the OS stores CPU state when a program context switch occurs. Note that we don't do any fancy hardware context switching in our drivers. When we make-current a new context, we basically update all the hardware state with that new context's values.
 * When each of the function tables is initialized (see radeonInitSpanFuncs for an example), an internal Mesa function is called. This function (e.g., `_swrast_GetDeviceDriverReference`) both allocates the buffer and fills in the function pointers with the software fallbacks. If a driver were to just call these allocation functions and not replace any of the function pointers, it would be the same as the software renderer; a sketch of this fallback-then-override pattern follows the list.
 * The next part seems to start when the createDrawable function in the `__DRIscreenRec` is called, but I don't see where this happens. createDrawable should be called via glXMakeCurrent, since that's the first time we're given an X drawable handle. Somewhere during glXMakeCurrent we use a DRI hash lookup to translate the X drawable handle into a pointer to a `__DRIdrawable`. If we get a `NULL` pointer, that means we've never seen that handle before and now have to allocate the `__DRIdrawable`, initialize it, and put it in the hash table (see the lookup sketch below). -- [[IanRomanick|IanRomanick]] and [[BrianPaul|BrianPaul]]
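
A hedged sketch of the `__driCreateScreen` bootstrap from the second step above. The real loader lives in xc/lib/GL/dri/dri_glx.c; the `CreateScreenFunc` typedef and the `load_dri_driver()` helper here are simplified stand-ins, not the actual libGL types:

    #include <dlfcn.h>
    #include <stdio.h>

    /* Simplified signature; the real function takes more parameters. */
    typedef void *(*CreateScreenFunc)(void *display, int screen);

    static CreateScreenFunc load_dri_driver(const char *path)
    {
        void *handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return NULL;
        }
        /* __driCreateScreen is the one symbol libGL looks up directly;
         * everything else is reached through the structures it returns. */
        return (CreateScreenFunc) dlsym(handle, "__driCreateScreen");
    }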
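
The DRM setup from the `__driUtilCreateScreen` step might look roughly like this, assuming the libdrm API (xf86drm.h). The handles and sizes are placeholders; in the real code they come from the X server via the DRI protocol, and the driver name "radeon" is just an example:

    #include <xf86drm.h>
    #include <stdio.h>

    int open_and_map(drm_handle_t fb_handle, drmSize fb_size,
                     drm_handle_t sarea_handle, drmSize sarea_size)
    {
        drmAddress fb, sarea;
        int fd = drmOpen("radeon", NULL);   /* example driver name */
        if (fd < 0)
            return -1;

        /* As noted above: the client-side mapping call is drmMap,
         * not drmAddMap (drmAddMap creates mappings on the server side). */
        if (drmMap(fd, fb_handle, fb_size, &fb) != 0 ||
            drmMap(fd, sarea_handle, sarea_size, &sarea) != 0) {
            drmClose(fd);
            return -1;
        }
        printf("frame buffer at %p, SAREA at %p\n", fb, sarea);
        return fd;
    }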
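
The "inheritance in C" pattern from the radeonCreateContext step, sketched with illustrative names (only DriverCtx and the back-pointer idea are taken from the text; the fields are stand-ins for the real Mesa and Radeon structures):

    /* Stand-in for Mesa's __GLcontextRec (the "base class"). */
    struct gl_context {
        void *DriverCtx;                  /* points at the derived struct */
        /* ... device-independent state, limits, function tables ... */
    };

    /* Stand-in for the driver's "derived class". */
    typedef struct radeon_context {
        struct gl_context *glCtx;         /* back-pointer to the base */
        unsigned int hw_state[64];        /* placeholder hardware state */
    } radeonContextRec, *radeonContextPtr;

    /* Recover the driver context from a Mesa context, as the drivers do. */
    static radeonContextPtr RADEON_CONTEXT(struct gl_context *ctx)
    {
        return (radeonContextPtr) ctx->DriverCtx;
    }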
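
The fallback-then-override pattern from the function-table step, sketched with an invented two-entry table (the real Mesa span tables have many more entries, and all of these function names are illustrative):

    typedef struct span_functions {
        void (*WriteRGBASpan)(void *ctx, int n, int x, int y,
                              const unsigned char rgba[][4]);
        void (*ReadRGBASpan)(void *ctx, int n, int x, int y,
                             unsigned char rgba[][4]);
    } span_functions;

    /* Software fallbacks (Mesa side) and one hardware path (driver side);
     * declarations only, since this is just a sketch. */
    extern void software_write_span(void *ctx, int n, int x, int y,
                                    const unsigned char rgba[][4]);
    extern void software_read_span(void *ctx, int n, int x, int y,
                                   unsigned char rgba[][4]);
    extern void radeon_write_span(void *ctx, int n, int x, int y,
                                  const unsigned char rgba[][4]);

    static void init_span_functions(span_functions *t)
    {
        /* Step 1: every entry starts out as the software renderer. */
        t->WriteRGBASpan = software_write_span;
        t->ReadRGBASpan  = software_read_span;

        /* Step 2: the driver overrides what its hardware accelerates.
         * Skip this step and you have a pure software renderer. */
        t->WriteRGBASpan = radeon_write_span;
    }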
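
Finally, the drawable lookup from the last step. drmHashLookup and drmHashInsert are real libdrm helpers (the table would come from drmHashCreate), but the surrounding structure and function are simplified for illustration:

    #include <xf86drm.h>
    #include <stdlib.h>

    typedef struct dri_drawable {
        unsigned long xDrawable;   /* the X11 drawable ID */
        /* ... per-drawable state: cliprects, buffers, ... */
    } dri_drawable;

    static dri_drawable *lookup_drawable(void *drawHash, unsigned long xid)
    {
        void *value = NULL;
        dri_drawable *pdraw;

        if (drmHashLookup(drawHash, xid, &value) == 0)
            return (dri_drawable *) value;   /* seen before: reuse it */

        /* A miss means we've never seen this drawable before:
         * allocate, initialize, and register it in the hash table. */
        pdraw = calloc(1, sizeof(*pdraw));
        if (pdraw == NULL)
            return NULL;
        pdraw->xDrawable = xid;
        drmHashInsert(drawHash, xid, pdraw);
        return pdraw;
    }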

### Of what use is the Mesa code in the xc tree?

Mesa is used to build some server-side modules/libraries specifically for the benefit of the DRI. libGL is the client-side aspect of Mesa, which works closely with the server-side components of Mesa.

The GLU and GLUT libraries are entirely client-side things, and so they are distributed separately.


### Is there any documentation about the XMesa* calls?

There is no documentation for those functions. However, one can point out a few things.

First, despite the prolific use of the word "Mesa" in the client (and server) side DRI code, the DRI is not dependent on Mesa. It's a common misconception that the DRI was designed just for Mesa. It's just that the drivers that we at Precision Insight have done so far have Mesa at their core. Other groups are working on non-Mesa-based DRI drivers.

In the client-side code, you could mentally replace the string "XMesa" with "Driver" or some other generic term. All the code below `xc/lib/GL/mesa/` could be replaced by alternate code, and libGL would still work. libGL has no knowledge whatsoever of Mesa; it's the drivers it loads that contain the Mesa code.

On the server side there's more of the same. The XMesa code used for indirect/software rendering was originally borrowed from stand-alone Mesa and its pseudo-GLX implementation. There are some crufty side-effects from that.


[^1]: That's not really true; there's also the `__driRegisterExtensions` function that libGL uses to implement glXGetProcAddress. That's another long story.