author    Søren Sandmann Pedersen <ssp@redhat.com>    2011-03-31 15:23:00 -0400
committer Søren Sandmann Pedersen <ssp@redhat.com>    2013-07-29 06:21:50 -0400
commit    c55439c9dc4e1a074f98bda17c7cadf38a0be37f (patch)
tree      42b7c8676bc66a17f4a682105c0b2a7659860b46
parent    cd21bf0202f567515699341d112c4c7d349409d2 (diff)
fetch
-rw-r--r--    fetch    120
1 file changed, 120 insertions, 0 deletions
diff --git a/fetch b/fetch
new file mode 100644
index 00000000..dfadae64
--- /dev/null
+++ b/fetch
@@ -0,0 +1,120 @@
+pipeline
+
+- sampling grid is created using accessor functions
+
+- pixels are converted to a8r8g8b8
+
+- alpha channel is replaced if appropriate
+
+- grid is extended in all directions according to repeat mode
+
+- points between pixels are constructed according to the filter
+
+- transform
+
+- sample
+
+
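One of the steps above, extending the grid according to the repeat mode, can be sketched as plain coordinate wrapping. This is a hypothetical helper, not pixman's actual code; the enum names are made up for illustration:

```c
#include <assert.h>

typedef enum { REPEAT_NONE, REPEAT_NORMAL, REPEAT_PAD, REPEAT_REFLECT } repeat_t;

/* Map a coordinate into [0, size) according to the repeat mode.
 * Returns -1 for REPEAT_NONE when the coordinate falls outside the image. */
static int
wrap_coordinate (int c, int size, repeat_t repeat)
{
    switch (repeat)
    {
    case REPEAT_NORMAL:
        c %= size;
        if (c < 0)
            c += size;            /* C's % truncates toward zero */
        return c;
    case REPEAT_PAD:
        if (c < 0)
            return 0;
        if (c >= size)
            return size - 1;
        return c;
    case REPEAT_REFLECT:
        c %= 2 * size;
        if (c < 0)
            c += 2 * size;
        if (c >= size)
            c = 2 * size - 1 - c; /* mirror the second half back */
        return c;
    case REPEAT_NONE:
    default:
        return (c < 0 || c >= size) ? -1 : c;
    }
}
```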
+If you run in reverse, here is how it looks:
+
+ input: dst_x, dst_y
+
+ fetch_transformed(int,int)
+ transform into source coordinates
+
+ fetch_filtered(fixed, fixed)
+     for closest pixels (depending on filter)
+         fetch_repeat(int, int)
+             apply_repeat
+
+ fetch_alpha(int, int)
+ fetch_pixel()
+ depending on bpp, apply accessor
+
+ depending on format, convert to a8r8g8b8
+
+ if (alphamap)
+ if (within alpha)
+ alphamap->fetch_pixel (int, int)
+ replace alpha
+
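The reverse walkthrough above can be sketched as a chain of nested fetch functions. This is a toy illustration, assuming an identity transform, a nearest filter, and PAD repeat on a hard-coded 2x2 a8r8g8b8 image; none of the function signatures are pixman's real ones:

```c
#include <assert.h>
#include <stdint.h>

/* Tiny 2x2 a8r8g8b8 image used as the sample source. */
static const uint32_t image[2][2] = {
    { 0xff000000, 0xffff0000 },
    { 0xff00ff00, 0xff0000ff },
};
enum { WIDTH = 2, HEIGHT = 2 };

/* Innermost stage: raw pixel access (the "accessor"). */
static uint32_t
fetch_pixel (int x, int y)
{
    return image[y][x];
}

/* Repeat stage: clamp to the image (PAD repeat, for simplicity). */
static uint32_t
fetch_repeat (int x, int y)
{
    if (x < 0)       x = 0;
    if (x >= WIDTH)  x = WIDTH - 1;
    if (y < 0)       y = 0;
    if (y >= HEIGHT) y = HEIGHT - 1;
    return fetch_pixel (x, y);
}

/* Filter stage: nearest filter on 16.16 fixed-point coordinates. */
static uint32_t
fetch_filtered (int32_t fx, int32_t fy)
{
    return fetch_repeat (fx >> 16, fy >> 16);
}

/* Transform stage: identity transform from destination to source. */
static uint32_t
fetch_transformed (int dst_x, int dst_y)
{
    return fetch_filtered (dst_x << 16, dst_y << 16);
}
```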
+
+So fetch functions needed:
+
+ fetch_transformed
+ fetch_filtered
+ fetch_repeated
+ fetch_alpha
+ fetch_pixel
+ fetch_accessed
+
+Or it could be in one big function? Whatever.
+
+
+Notes on fetchers:
+
+Several things involve fetchers:
+
+- There is a desire to have subimages
+
+- There is a desire to have CPU specific fetchers
+
+- There is a desire to stop compiling the accessors twice.
+
+
+Proposal:
+
+Implementations get new functions that return a fetch_scanline
+function for 32 and 64 bit. There are also functions that return a
+store_scanline_32 and a store_scanline_64.
+
+By default these functions just delegate to an underlying
+implementation, but a CPU specific or fast path implementation can
+plug its own fetchers and storers in. The general implementation
+returns the existing fetchers.
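The delegation scheme might look something like this. The struct layout and names are invented for the sketch and are not pixman's actual implementation type:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef void (*fetch_scanline_t) (uint32_t *buffer, int x, int y, int width);

/* Hypothetical implementation struct: each implementation may supply a
 * 32 bpp scanline fetcher, or leave the slot NULL to delegate down. */
typedef struct implementation implementation_t;
struct implementation
{
    implementation_t *fallback;
    fetch_scanline_t  fetch_scanline_32;
};

/* Walk the delegation chain until some implementation provides a fetcher. */
static fetch_scanline_t
get_scanline_fetcher_32 (implementation_t *imp)
{
    while (imp)
    {
        if (imp->fetch_scanline_32)
            return imp->fetch_scanline_32;
        imp = imp->fallback;
    }
    return NULL;
}

/* The general implementation's fetcher: fills with opaque black as a stand-in. */
static void
general_fetch (uint32_t *buffer, int x, int y, int width)
{
    (void) x;
    (void) y;
    for (int i = 0; i < width; i++)
        buffer[i] = 0xff000000;
}
```

A CPU specific implementation that leaves its slot NULL transparently falls through to the general fetcher.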
+
+For gradients, the fetchers now become part of the general
+implementation, which leaves the pixman-gradient.c files almost
+empty. We can fix that by moving the constructors to pixman-image.c,
+and renaming the existing files to pixman-general-linear-gradient.c
+etc.
+
+A prerequisite is that the 'classify' insanity should be fixed. This
+really shouldn't be rocket science. I don't think anything useful is
+gained from passing (x, y, width, height) to the classify functions,
+so we can simply make the classification another flag and compute it
+in a straightforward manner.
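Computing the classification as a flag might amount to nothing more than this. The flag names and the gradient-endpoint criterion are assumptions for the sake of the sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flags replacing the classify() return value: computed once
 * from the image's own properties, not from a per-composite (x, y, w, h). */
#define FLAG_GRADIENT_HORIZONTAL (1u << 0)
#define FLAG_GRADIENT_VERTICAL   (1u << 1)

static uint32_t
compute_gradient_flags (int x1, int y1, int x2, int y2)
{
    uint32_t flags = 0;

    if (y1 == y2)
        flags |= FLAG_GRADIENT_HORIZONTAL;  /* color varies along x only */
    if (x1 == x2)
        flags |= FLAG_GRADIENT_VERTICAL;    /* color varies along y only */

    return flags;
}
```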
+
+Most of the bits image fetchers should be moved to
+fast-path-bits-image.c.
+
+That takes care of CPU specific fetchers.
+
+For subimages, we will have to bite the bullet and add a 'source clip'
+type rectangle. The repeat function will then have to use this
+rectangle for wrapping. Since subimages can extend beyond their
+parent, we will also need the final coordinates (before being
+converted to an address) to fit within the parent.
+
+This gets rather complicated to deal with within the framework of
+scanline fetching, so if we can move the general implementation to be
+pixel based instead, that would be convenient.
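Per-pixel, the two constraints above compose into one small step: wrap into the source clip rectangle, then clamp into the parent. A one-dimensional sketch, assuming NORMAL repeat and invented names:

```c
#include <assert.h>

/* Hypothetical: wrap a source coordinate into the subimage's clip interval
 * [clip_x, clip_x + clip_w) with NORMAL repeat, then clamp the result into
 * the parent image [0, parent_size) before it becomes an address. */
static int
wrap_and_clamp (int c, int clip_x, int clip_w, int parent_size)
{
    /* Repeat within the source clip rectangle. */
    c = (c - clip_x) % clip_w;
    if (c < 0)
        c += clip_w;
    c += clip_x;

    /* A subimage may extend beyond its parent; clamp before addressing. */
    if (c < 0)
        c = 0;
    if (c >= parent_size)
        c = parent_size - 1;

    return c;
}
```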
+
+Doing that also lets us stop compiling the accessors twice, because we
+can now simply read into a variable on the stack using the accessor,
+and then convert that from whatever the format is.
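The read-then-convert split could look like the following sketch: one bpp-based accessor shared by all formats, with format conversion applied afterwards to the stack variable. The function names are made up, and r5g6b5 is just one example format:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical: read the raw value through a single bpp-based accessor
 * into a stack variable; no format-specialized copy of the accessor. */
static uint32_t
read_raw (const uint8_t *bits, int bpp, int x)
{
    switch (bpp)
    {
    case 8:  return bits[x];
    case 16: return ((const uint16_t *) bits)[x];
    case 32: return ((const uint32_t *) bits)[x];
    default: return 0;
    }
}

/* Then convert by format; r5g6b5 -> a8r8g8b8 as an example. */
static uint32_t
convert_r5g6b5 (uint32_t raw)
{
    uint32_t r = (raw >> 11) & 0x1f;
    uint32_t g = (raw >> 5)  & 0x3f;
    uint32_t b =  raw        & 0x1f;

    /* Replicate high bits into low bits to fill the 8-bit channels. */
    r = (r << 3) | (r >> 2);
    g = (g << 2) | (g >> 4);
    b = (b << 3) | (b >> 2);

    return 0xff000000 | (r << 16) | (g << 8) | b;
}
```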
+
+To avoid this becoming a performance issue, when the pixels are >= 8
+bits, we can simply adjust the pointer and width/height etc, and then
+require that the 'window' is equal to the full image for fast paths to
+run.
+
+Roadmap for fetchers:
+
+- Fix classification nonsense
+
+- Move all image constructors to pixman-image.c
+
+- Split in 32/64 selection in property_changed
+
+- Move
\ No newline at end of file