author     Søren Sandmann Pedersen <ssp@redhat.com>  2013-11-24 07:01:37 -0500
committer  Søren Sandmann Pedersen <ssp@redhat.com>  2013-11-24 07:01:37 -0500
commit     369000012ea13bdca8f899f782bec820b44ae136 (patch)
tree       89df1c042992139ec9f3e19a72f6a96313a0f3ae
parent     b10cb1ddedf23cd6a53a65f05767c8783697c9d1 (diff)
redesign notes
-rw-r--r--  docs/redesign.txt  163
1 files changed, 55 insertions, 108 deletions
diff --git a/docs/redesign.txt b/docs/redesign.txt
index 6b5a57ee..9d7aad0b 100644
--- a/docs/redesign.txt
+++ b/docs/redesign.txt
@@ -25,7 +25,8 @@ As long as blend modes satisfy
we can define their effect on the intersection of the alpha channels
as simply Blend (1, 1) and still get the correct behavior
for the existing PDF operators. Also the existing Add can be defined
-that way.
+that way. This has the very nice property that the alpha channel and
+the RGB channels can be computed by the same expression.
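A rough sketch of that shared expression (illustrative C, not pixman
API; s and d are premultiplied, and as and ad are assumed nonzero
here, since the divide-by-zero question is taken up below):

    /* One formula for both the RGB channels and the alpha channel:
     * substituting s = as, d = ad and B(1, 1) = 1 gives the usual
     * as + ad - as * ad. */
    static double
    blend_channel (double s, double as, double d, double ad,
                   double (*B) (double, double))
    {
        return (1 - ad) * s + (1 - as) * d + as * ad * B (s / as, d / ad);
    }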
Unfortunately, DIFFERENCE and EXCLUSION don't satisfy this, so it may
be interesting to instead define:
@@ -46,9 +47,12 @@ Divide-by-zero: This is an annoying problem. A way around it might be
to simply specify that "0" in an alpha channel is defined as epsilon,
where epsilon is a very very small positive value. This solves
divide-by-zero and allows formulas to be simplified. Need to find out
-whether the existing formulas can be interpreted this way.
+whether the existing formulas can be interpreted this way. The two
+things that need to be ensured are (a) that 0/0 is considered 0 in
+this formula: ad * as * B(s/as, d/ad), and (b) that in the regular
+Porter/Duff operators we can make use of the identity as * (s/as) = s.
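One way to make (a) concrete is a division helper along these lines
(hypothetical, not existing pixman code; alternatively the alpha
value could simply be clamped to epsilon before dividing):

    /* Treat x/0 (and in particular 0/0) as 0, so that
     * ad * as * B(s/as, d/ad) collapses to 0 whenever either alpha
     * is 0, and as * DIV (s, as) = s still holds for premultiplied s. */
    static double
    DIV (double x, double a)
    {
        return a == 0.0 ? 0.0 : x / a;
    }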
-HDR/ Values above [0,1]. Another annoying problem. A lot of the blend
+HDR/Values above [0,1]. Another annoying problem. A lot of the blend
modes and operators really assume that everything happens within the
[0,1] interval. Can we specify a transformation that maps
-infinity,infinity into 0,1, and then call that before and after? The
@@ -56,127 +60,72 @@ answer is likely that HDR images should be converted to some linear
color space before blending, and then the result should be converted
back.
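For reference, one candidate for the squashing transformation
mentioned above, purely as an illustration of the question (it maps
(-infinity, infinity) into (0, 1); whether any such remapping
interacts sanely with the blend modes is exactly the open problem):

    /* Applied per channel before compositing; its inverse,
     * x = (y - 0.5) / (0.5 - |y - 0.5|), would be applied after. */
    static double
    squash (double x)
    {
        double ax = x < 0 ? -x : x;

        return 0.5 + 0.5 * x / (1.0 + ax);
    }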
-Polygons:
----------
-
-- Intersect?
-- Other geometry ops?
-- Inverted fill rules
-
-API notes:
-----------
-
-- Distinction between surfaces and patterns.
-
-- surfaces can be mapped, unmapped, and marked dirty
-
-- "sampled" is a better name than "bits"
-
-Compositing:
-------------
-
-Compositing is asynchronous. That is, you have to call flush() before
-it will show up. The API reifies the clip region. Clipping is not
-supported for sources.
-
- composite (surface, operator, pattern, region);
-
-should be sufficient.
-
-The intended way to use this is to create lots of temporary surfaces
-that can then be used as sources.
-
-To clip a source image, simply source it to a temp image, then
-composite a region DEST_IN on top of it.
-
-When you change an image property such as transform/repeat/filter or
-format, it may be interesting to conceptually make a copy of the image
-(but internally do it with copy-on-write).
+Formats:
+--------
-Ie., you submit things like this:
+- Better support for > 32 bpp.
- new_image() -> s
- s.set_repeat -> s2
- s2.set_filter -> s3
- ...
- s(n-1).set_transform -> s(n)
+- Should these just be enums with a big table containing the
+  properties? (See the sketch after this list.)
- composite (target, s, OP_OVER)
+- YUV: Are these formats or a separate image type? Considering that
+ there are different color matrices in use, it probably should be
+ considered a separate type.
-and no actual copying takes place, but if you wanted to, you could use
-the intermediate images as sources too. This may let us avoid having
-to track the case where an image is being used at the same time with
-two different sets of properties. For deferred rendering, the copy
-would simply store a pointer to the current position in the list of
-deferred renderings.
+- Alpha-only formats should be considered to replicate the alpha value
+  in the RGB channels, and then component-alpha should be the only
+  compositing mode supported.
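A rough sketch of the enum-plus-table idea (all names here are made
up for illustration; nothing below is proposed API):

    typedef enum
    {
        PXM_FORMAT_a8r8g8b8,
        PXM_FORMAT_r5g6b5,
        PXM_FORMAT_a8,
        PXM_FORMAT_a16r16g16b16_float
    } pxm_format_t;

    /* One big table describing the properties of each format. */
    static const struct
    {
        pxm_format_t format;
        int          bpp;
        int          a, r, g, b;        /* bits per channel */
        int          is_float;
    } format_info[] =
    {
        { PXM_FORMAT_a8r8g8b8,           32,  8,  8,  8,  8, 0 },
        { PXM_FORMAT_r5g6b5,             16,  0,  5,  6,  5, 0 },
        { PXM_FORMAT_a8,                  8,  8,  0,  0,  0, 0 },
        { PXM_FORMAT_a16r16g16b16_float, 64, 16, 16, 16, 16, 1 },
    };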
+Equation:
+---------
+It may or may not be interesting to use the equation
-Implementation notes:
+ mask? (src OP dest) : dest
-If the temp image is later transformed, the region turns into a
-non-antialiased polygon. Though maybe this doesn't work so well for
-scalings.
+instead of
-Basically: scaling both, then compositing, is different from
-compositing, then scaling.
+ (src IN mask) OP dest
-Converting the polygon to spans might work though. Or to a region.
+that is, treat the mask as a clip on the operation rather than a
+preprocessing of the source.
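For example, with OVER and a mask value m (sketch only, premultiplied
values, as = source alpha):

    mask ? (src OVER dest) : dest  =  m * s + (1 - m * as) * d
    (src IN mask) OVER dest        =  m * s + (1 - m * as) * d

so for OVER the two readings coincide, but for an operator such as
SOURCE they do not: the clip form leaves dest untouched outside the
mask, while (src IN mask) SOURCE dest clears it there.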
-If a surface is represented internally as a list of "fragments", each
-with a corresponding compositing tree, then when such a surface is
-transformed, the region could simply be transformed as well by:
+Polygons:
+---------
- - converting it to a polygon
- - transforming that polygon
- - scan convert polygon without antialiasing
- - convert bitmap to region
+- Intersect?
+- Other geometry ops?
+- Inverted fill rules
-it wouldn't be extremely fast, but might not be terribly slow either.
-An alternative.
+API notes:
+----------
-Compositing transformed polygons is just a nasty problem because they
-have to be rasterized before transforming to get reasonable
-performance.
+- Distinction between surfaces and patterns.
-This might be a clue that polygons should not be considered
-images. Maybe they should be considered primitives instead. Potential
-primitives:
+- surfaces can be mapped, unmapped, and marked dirty
- - spans
- - polygons
- - regions
- - rectangles
- - triangles, traps
+- "sampled" is a better name than "bits"
-this is closer to how a GPU operates.
+- Go over the region API to make sure names are sane and consistent. There
+ are some warts at the moment, such as
+ pixman_region_contains_rectangle() taking a pointer to a box.
-Another resolution could be that compositing with a polygon or a
-region conceptually always rasterizes that polygon immediately. Ie.,
-all compositing is conceptually immediate, and pixman must not do
-optimizations that break this, not even if such optimizations would
-result in higher image quality or faster performance. Before using a
-computed image as a transformed source it would have to be flattened.
+- Out-of-memory should create inert objects similar to cairo.
+- New prefix: pxm
-Out-of-memory:
---------------
-Should create inert objects similar to cairo.
+- Regions should possibly store (dx, dy) translations or the various
+ ops should allow a translate parameter:
+ pxm_region_union (dest, src, translate_x, translate_y)
-Formats:
---------
+ which would union src, translated by dx, dy, onto dest.
-- Better support for > 32 bpp.
+- Regions could be specified as always reallocating:
-- Should these just be enums with a big table containing the
- properties?
+ new = pxm_region_union (old, src, dx, dy);
-- YUV: Are these formats or a separate image type? Considering that
- there are different color matrices in use, it probably should be
- considered a separate type.
+ would return new, which would be a reallocation of old.
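Usage of the two styles might then look roughly like this (sketch
only; neither call exists yet):

    /* in-place: union src, translated by (dx, dy), onto dest */
    pxm_region_union (dest, src, dx, dy);

    /* reallocating, in the spirit of realloc(): only the returned
     * region is valid afterwards */
    new = pxm_region_union (old, src, dx, dy);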
-- Alpha-only formats: The alpha channel should be copied to the RGB
- channels.
Alpha vs. translucency
----------------------
@@ -208,24 +157,22 @@ meaningless: such a thing is called over. If you want to set the
"alpha" value in some area, you most likely actually want to set the
translucency value.
-
Source types:
-------------
+
- Sampled images
- gradients
- tri-meshes
- polygons
- regions
- (note: a transformed region becomes a non-antialiased polygon).
+ (a transformed region becomes an aliased polygon).
polygons and tri-meshes could have colors associated with their
corners.
-Regions:
---------
-- Should possibly store (dx, dy) translations.
-- Alternatively, the various ops should allow a translate parameter:
-
- pxm_region_union (dest, src, translate_x, translate_y)
+Regions and polygons conceptually always rasterize immediately. Ie.,
+all compositing is conceptually immediate, and pixman must not do
+optimizations that break this, not even if such optimizations would
+result in higher image quality or faster performance. Before using a
+computed image as a transformed source it would have to be flattened.
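In terms of the pseudo-API used earlier in these notes, flattening
might look something like this (sketch only; OP_SRC is assumed here
as a plain copy operator):

    composite (tmp, OP_SRC, computed, region)   /* flatten to a sampled image */
    tmp2 = tmp.set_transform (t)                /* transform the flattened copy */
    composite (target, OP_OVER, tmp2, region)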
- which would union src, translated by dx, dy, onto dest.