author    Stéphane Marchesin <stephane.marchesin@gmail.com>  2012-08-28 21:01:05 -0700
committer Stéphane Marchesin <marcheu@chromium.org>          2012-08-28 21:01:05 -0700
commit    d67222443bd7ceb89690776207c095976d5c2476 (patch)
tree      b07e0cfb0cf28ee235121c9a5d3766be32528ddd
parent    128b257aeef70cfca0e897fbc49581cba13bde6b (diff)
Small changes again...
-rw-r--r--  linuxgraphicsdrivers.lyx  17
1 files changed, 11 insertions, 6 deletions
diff --git a/linuxgraphicsdrivers.lyx b/linuxgraphicsdrivers.lyx
index 2d51b7c..fd63b78 100644
--- a/linuxgraphicsdrivers.lyx
+++ b/linuxgraphicsdrivers.lyx
@@ -4684,12 +4684,13 @@ name "cha:Framebuffer-Drivers"
\lang english
Framebuffer drivers are the simplest form of graphics drivers under Linux.
A framebuffer driver is a kernel graphics driver exposing its interface
- through /dev/fb*.
+ through /dev/fb* (generally /dev/fb0).
This interface implements limited functionality (basically it allows setting
a video mode and drawing to a linear framebuffer), and the framebuffer
- drivers are therefore extremely easy to create.
+ drivers are therefore extremely simple and easy to create.
Despite their simplicity, framebuffer drivers are still a relevant option
- if the only thing you are after is a basic two-dimensional display.
+ if the only thing you are after is a basic two-dimensional display with no
+ bells and whistles.
It is also useful to know how framebuffer drivers work when implementing
framebuffer acceleration for a kernel modesetting DRM driver, as the
acceleration callbacks are the same.
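
To make the /dev/fb* interface concrete, here is a minimal user-space sketch
(not from the book; error handling is trimmed and a 32 bpp linear framebuffer
is assumed). It queries the current mode through the standard fbdev ioctls,
maps the framebuffer and plots a single pixel:

/* Minimal fbdev client: query the mode, map the framebuffer, draw one pixel.
 * Assumes a 32 bpp linear framebuffer on /dev/fb0; not production code. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    ioctl(fd, FBIOGET_VSCREENINFO, &var);   /* mode: resolution, depth */
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);   /* layout: pitch, buffer size */

    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    printf("%ux%u, %u bpp\n", var.xres, var.yres, var.bits_per_pixel);

    /* Plot a white pixel in the middle of the screen (32 bpp assumed). */
    size_t off = (var.yres / 2) * fix.line_length + (var.xres / 2) * 4;
    *(uint32_t *)(fb + off) = 0xffffffff;

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}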
@@ -7977,8 +7978,8 @@ Common YUV color space formats
\begin_layout Standard
The final stage of video decoding is video upscaling.
- Video upscaling consists in upscaling the movie resolution to the screen
- resolution.
+ Video upscaling consists of scaling a video frame up from its native resolution
+ to the screen resolution.
It can be done by specialized hardware, the 3D engine, or the 2D blitter
if it has scaling capabilities.
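
As a point of reference for what the scaling hardware computes, the following
is a hedged software sketch of the simplest possible upscaler,
nearest-neighbour point sampling from the native resolution to the screen
resolution. Real scaling engines apply bilinear or better filtering, and the
packed 32 bpp layout is an assumption:

#include <stdint.h>

/* Nearest-neighbour upscale of a src_w x src_h frame into a dst_w x dst_h
 * buffer; both are packed 32 bpp.  This mimics, in software, the job the
 * 2D blitter or 3D engine performs with proper filtering in hardware. */
static void upscale_nearest(const uint32_t *src, int src_w, int src_h,
                            uint32_t *dst, int dst_w, int dst_h)
{
    for (int y = 0; y < dst_h; y++) {
        int sy = y * src_h / dst_h;             /* source row for this row */
        for (int x = 0; x < dst_w; x++) {
            int sx = x * src_w / dst_w;         /* source column */
            dst[y * dst_w + x] = src[sy * src_w + sx];
        }
    }
}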
@@ -8979,7 +8980,7 @@ Ten years ago, GPUs were a direct match with all the OpenGL or Direct3D
huge) and hardware designers (who faced an explosion in the amount of specific
functionality a GPU needed), and shaders were created.
Instead of providing specific functionality, the 3D APIs would now let
- the programmers create these little programs and run them on the GPU.
+ the programmers create small shader programs and run them on the GPU.
As the hardware was now programmable in a way which was a superset of fixed
functionality, the fixed function pipelines were not required any more
and were removed from the cards.
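
For concreteness, a small hedged sketch of what this looks like through the
OpenGL API (a current OpenGL 2.0+ context created elsewhere is assumed, and
the GLSL source text is left out): the application hands the driver a small
program as text, and the driver compiles it for the GPU.

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/* Compile one small fragment shader; the driver turns the GLSL text into
 * whatever the GPU's programmable units actually execute. */
static GLuint compile_fragment_shader(const char *glsl_source)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &glsl_source, NULL);
    glCompileShader(shader);

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);   /* did it compile? */
    return ok ? shader : 0;
}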
@@ -8997,6 +8998,10 @@ emulate
Doing so in every driver would require a lot of code duplication, and the
Gallium model is to put this code in a common place.
Therefore Gallium drivers become smaller and easier to write and to maintain.
+ The same concept is applied to other aspects of the 3D APIs like surface
+ handling; things like mipmap generation, surface copies and pixel format
+ conversions are all handled in common code and can be shared between
+ multiple drivers.
\end_layout
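
To illustrate the sharing model, here is a purely illustrative sketch of the
pattern; the structure and function names below are hypothetical and are not
the real Gallium interfaces. Common code implements mipmap generation once in
terms of a small per-driver blit hook, so each driver only has to supply that
hook:

/* Hypothetical sketch of the "common code + per-driver hook" pattern;
 * the names below are illustrative, not the actual Gallium API. */
struct hw_surface {
    int width, height;
    int level;              /* mipmap level this surface belongs to */
};

struct hw_driver {
    /* The only thing each driver supplies: a filtered downscaling blit. */
    void (*blit_scaled)(struct hw_driver *drv,
                        struct hw_surface *dst, struct hw_surface *src);
};

/* Shared helper, written once and reused by every driver: build each
 * mipmap level by blitting from the previous one at half the size. */
static void common_generate_mipmaps(struct hw_driver *drv,
                                    struct hw_surface *levels, int num_levels)
{
    for (int i = 1; i < num_levels; i++)
        drv->blit_scaled(drv, &levels[i], &levels[i - 1]);
}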
\begin_layout Standard