Age | Commit message | Author | Files | Lines |
|
First pass at cleaning up pdfwrite's memory 'management'.
Add clean up code in pdf_close for fonts, font descriptors, type 3 CharProc
and Pattern resources.
Since we only need the object number for a reference, we now create a new
type of cos object, 'reference'. This contains only the object ID, so that
we can write out the reference. We also set the ID to 0 after we write it,
as this allows us to free the object. (id == 0 is a crazy reference-counting
thing, it seems.)
Free the 'aside' associated with a pattern after releasing it.
Free ExtGState resources at close.
There was no code to free CMaps, none at all. Added routines to free regular
CMaps and ToUnicode CMaps, and added code to pdfwrite to call these in order
to actually free CMap resources.
When manufacturing a BaseFont, if we already have a BaseFont name, dispose
of it before assigning a new one. Previously this leaked the string
containing the name.
Release font resource objects.
when freeing a font descriptor, free the object as well as the glyphs
Free copied base font FontName string on close
This is opaque data specific to each font type, so we may need to add
specific cleanup routines, but this is a start.
Secondly, when pdfwrite copies a font it makes 2 copies, a subset and a
complete copy. However, the complete copy can fail because of an unused
glyph. In that case we discard the complete copy and carry on with the
subset, but we didn't clean up the 'complete' copy.
Modified the previous code into one routine to free copied fonts; when we
discard a (complete) copied font during font copying, free the font copy.
free Encoding from copied fonts if present
Also, change the text for font freeing so it makes sense.
Free copied font 'data' when freeing copied font
Free the 'base_font' structure when freeing FontDescriptors
Release colour spaces.
Make a routine to free colour spaces, and have it free the 'serialized'
color space memory.
Free the page dictionary when we free pages.
We seem to have (at least) two different kinds of param lists which are used
to deal with getting/setting device params. The PostScript interpreter uses
'ref_params' and the PCL interpreter uses 'c_params'.
The problem is that 'ref_params_end_write_collection' frees the list memory
but 'c_params_end_write_collection' does not. Since these are accessed through
methods in the list, we don't know whether we need to free the memory or not.
This leads to a memory leak when using the PCL interpreter.
I suspect this is a bug in the implementation, but for now I've modified
'ref_params_end_write_collection' so that it nulls the pointer to the list
when it frees it. The code in gdevdsp.c can then test to see whether the
memory needs to be freed (non-NULL) or not.
For some reason this leads to a Seg Fault with fts_09_0923.pdf, but I
can't see why. I believe this is unrelated, so will investigate it further
after this work is completed.
Also changed a typecast to eliminate a warning.
Create a routine to clean up the 'text data' and call it. Add the
'standard fonts' to the cleanup in there.
Clean up a number of allocations (name index stack, namespace
stack etc).
Add code to free Function resource dictionaries, objects and resources.
These were missed previously, because the development was done in PCL and
the PCL interpreter can't trigger the use of Functions.
Add code to clean up Shading and group dictionary resources. Add code to
clear the resource chains on close so that we don't end up trying to use
freed memory pointers.
|
|
Raised on IRC by Till Kamppeter; see Ubuntu bug:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/955553
After much work by Chris Liddell and Bruce Stough it transpires that at
least some Brother printers have a bug in their CCITTFaxDecode filter,
especially with small amounts of data.
Since the ps2write output for bitmapped glyphs (which is required when the
input is a CIDFont) always uses the CCITTFax filter, this led to corrupted
output from the Brother. (Note this is a bug in the *printer*, not ps2write.)
This patch adds a new command line parameter 'NoT3CCITT' which disables
compression of bitmapped glyphs. It should only be used with printers which
have a problem with CCITTFax data, and in that case should also be used with
the EncodeMonoImages switch to disable compression of monochrome images. Eg:
-dNoT3CCITT -dEncodeMonoImages=false
No differences expected as these are not tested by our regression tests
|
|
This is the first part of making it possible to produce PDF/A-2b output from
pdfwrite.
The PDFA switch has changed from a boolean to an integer, where the value gives
the level of PDF/A compatibility. This has knock-on effects throughout the
C and PostScript code which has been revised to expect an integer instead
of boolean value.
When PDFA has the value 2 we no longer flatten transparency, and we write
'2' in the pdfaid field in the XMP metadata.
PDF/A-1b output still seems to work correctly, but it is unlikely that the
work so far is sufficient for correct PDF/A-2 output.
No differences expected as the cluster does not test PDF/A output.
|
|
Patterns inside patterns were not working properly, because PDF and PostScript
handle this differently. opdfread.ps resets the graphics state CTM to the
identity when drawing patterns, because the PDF spec says patterns are always
referenced to the default co-ordinate space and in order that the matrix gets
applied correctly we need to reset the CTM.
However, when the pattern is inside another pattern, the default co-ordinate
space is that of the enclosing pattern, so resetting the CTM is a problem. We
can't simply avoid the reset, as normal patterns would stop working. We can't
'undo' the pattern matrix in PostScript, because we don't know what the
resolution scaling was.
So we track the pattern depth in ps2write, and the accumulated matrix transforms
from all the patterns so far. Then we apply that accumulated matrix to any new
pattern when the pattern depth is not zero.
This works, but is not 100% reliable: two patterns inside a single parent would
have their matrices concatenated, resulting in the second pattern being incorrect.
However, the nested pattern situation is rare enough that I'm going to leave this as it is.
Expected Differences:
09_47N.pdf
Bug6901014_org_Chromium_AN03F.pdf
these files now work correctly with ps2write.
|
|
ps2write converts type 3 and 4 (masked) images into an image and clip
combination. The clip is created from the mask, and the image is rendered
to a memory device. Note that the memory device canvas is just large enough to
contain the image.
The image is drawn as a series of rectangular fills, and our 'local converter'
device shifts these from the original page location to the correct (relocated to 0,0)
position in the memory device.
However, if interpolation is true for the image, then we don't get a series
of rectangular fills, we get a 'copy_color' instead which the converter
device didn't handle. Adding a copy_color method which properly translates the
image position solves the problem.
Expected Differences
Progressions in Bug691210.pdf, 12-07B.ps and 12-07C.ps
|
|
Bug #688267, #692243. The 'swapcolors' is now moved into pdfwrite, instead
of being performed in PostScript in the PDF interpreter. The 'spaced'
variants of show now perform similar techniques to the 'pdfwrite' text rendering
routines, when the device supports text rendering modes.
Caveats: pdfwrite always emits text enclosed in gsave/grestore. Because of
this we cannot preserve any of the text rendering modes involving clipping
as the grestore also restores the clip path! So we subtract 4 from the mode
and emit the text that way, then handle the clip separately.
Because text_process doesn't expect to receive gs_error_Remap_Color errors
(which cause the interpreter to run Pattern PaintProcs usually) we can't
set the stroke colour during text processing (which is how it worked before).
Instead we set the colours during text_begin. We don't actually write the colours
to the PDF file at that point though, because that causes problems synchronising
graphics states. Instead we leave the emission of the colour unchanged, we
just evaluate the colours in text_begin.
There is some weirdness in the PDF interpreter which I do not understand.
Most cases are surrounded by 'currentlinewidth exch.......setlinewidth'
which preserves the current line width in case we have to change it for
stroked text. In one case, however, this causes files to fail with an error.
I have tried without success to unravel the PDF interpreter to figure out
what is going on. Since I can't work it out I have created a dictionary,
stored the linewidth in that, and then pulled it back out and restored at
the end. I did try wrapping a gsave/grestore round the operation (taking
care to preserve the modified currentpoint) but that caused even more
problems. Again I have no idea why.
I would like Alex to look into this so I'm leaving one of the bugs open
and re-assigning to him. Also he will probably want to reformat the code
I've added to the PDF interpreter.
|
|
The old Procsets had to be moved from PostScript resources to C files so that
ps2write works with non-PostScript interpreters (XPS, PCL). As a result the
old OPDFReadProcsetPath is no longer used and has been removed.
|
|
limit.
Bug #692290 ps2write and pdfwrite have been using gp_open_scratch_file,
fseek and ftell, which limit the size of a temporary file to 4GB. This
commit uses gp_open_scratch_file_64, gp_ftell_64 and gp_fseek_64, which
should allow 64-bit file access on systems which support it.
Unfortunately I haven't been able to concoct a test for this, so the
64-bit code is not tested. However, it continues to work normally with the
cluster regression tests.
|
|
Use the device level 'PS_accumulator' flag in various places instead of
the more kludgy test against penum->pte_default being NULL.
If we don't get a glyph name back from the interpreter (PCL) then invent
one instead of giving up with an error.
If we are not a type 3 accumulator, then don't undo the factor of 100
scaling applied to the device width and height, we only do that for PS.
Add a routine to return a special 'initial' matrix during the course of
type 3 accumulation. The PCL stick font uses this to set the line width
and we need to account for various PS/PDF scaling which will otherwise
be ignored.
Make sure we don't try to accumulate a charproc when it's being run for a
charpath operation.
|
|
Put back the matrix scaling in pdf_text_set_cache, even though the matrix
should always be the identity here when running PCL. Best to be safe.
set_charproc_attrs emitted a 'd1' setcachedevice, but didn't check whether
the glyph was flipped. For PCL this led to ury being less than lly, and
so the glyph was elided. Added a check to make sure these are correct. This
required removal of 'const' from an array as well.
|
|
Don't scale the CTM by 100 (done for FreeType) when handling PCL fonts
in install_charproc_accum, set the boolean to complete_charproc_accum so
that we don't 'undo' the factor of 100 scaling when the font is PCL.
Add code to set_charproc_attrs to determine whether this is a 'scale 100'
(ie PostScript) type 3 font or not; if it's not, then don't undo the scaling
by 100 of the CTM.
When accumulating a charproc, before setting the CTM to the identity matrix
also set the current point to 0,0, which ensures that the current point
doesn't get baked into the character description. Also invalidate the
'char_tm' txy_fixed_valid member of the graphics state; this will force
a recalculation of char_tm using the new identity matrix.
|
|
First, break all the code for starting and stopping accumulators
into procedures, because the existing code is too hard to read.
|
|
Seems to be OK now with PS/PCL, does not crash any longer with PCL, but capture is incorrect.
|
|
|
|
that defines the operation enumeration for it.
Move existing calls of pattern_manage across to using dev_spec_op instead.
Add comments to the pattern management definitions noting that it is
deprecated and should not be used.
No cluster differences (aside from indeterminisms).
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@12326 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
parameter
Type32ToUnicode was incorrectly implemented. This should actually have used the
WantsToUnicode parameter, because the code actually controls the processing of
GlyphNames2Unicode tables from Windows PostScript.
This means we no longer need the Type32ToUnicode parameter and it has been removed.
Added initial documentation of these parameters.
This appears to cause some differences in Bug690829.ps rendered at 300 dpi.
This is a surprise, because the changes should have no effect on devices other than
pdfwrite/ps2write, but the new result is better than the old, so this is a progression.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@12286 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Still no differences expected....
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@12250 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
pdfwrite tests every (non-inline) image against every other stored image to see if it is
a duplicate, and if so does not embed the duplicate in the output but simply references
the original.
This can be slow for files with many images (each stored image must be checked when a
new image is encountered) and may be of limited benefit.
The new flag DetectDuplicateImages (default true) can be used to enable or disable
this behaviour.
No differences expected
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@12170 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Defer the output of the header and the opdfread.ps 'procset' until the output file is
closed. This allows us to count the pages and emit the %%Pages: comment in the header
instead of at the end of the file.
Fix the PageBoundingBox comment to have two % characters instead of one.
Check for a new flag 'DSC_OPDFREAD' at the start of opdfread.ps (and write its
definition as part of the header emission in ps2write). If present this prevents the
SetupPageView routine from using setmatrix to reset the CTM to the one saved during
document setup. Because the DSC-compliant output puts save/restore around each page
we don't need to reset the matrix, and the reset prevents the output from working
properly with psnup. If the flag is not present, it is treated as false.
The output now works with GSView, psselect and psnup. The only remaining work is to
track the usage of ProcessDSC and see if we can reuse any of the comments parsed out
of the input.
No differences expected
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11951 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
This is a resubmission of revision 11941, with some additional changes so that it
doesn't crash with pdfwrite on Linux systems.
We now pass around the 'type' of an object much more when writing. This is so that
we can emit "%%BeginResource/%%EndResource" comment pairs around the resources we write.
It is also required so that we *don't* write these comments around pages.
The code now emits %%BeginProlog, then writes the opdfread.ps procedure. It then writes
all the various resources used in the document, each with a reasonable DSC comment. Then
it writes %%EndProlog. After this come the page descriptions, each is written with a
%%Page: comment and a %%PageTrailer. Finally we write the %%Trailer, %%Pages
comment (NB we write %%Pages: (atend) in the header comments as we don't know how many
pages there will be until the end) and %%EOF.
The resources are mostly defined as being of type 'file', as most of them are not normal
PostScript resources. The DSC specification says under the %%BeginResource definition
(file note on p72) "The enclosed segment is a fragment of PostScript language code or
some other item that does not fall within the other resource categories" and so this
seems the best type to use for our purposes.
The output is now minimally DSC compliant, though there are a few other comments I'd
like to add if possible. Given the way the file is created we are always going to have a
large prolog, and that will need to be copied to all the pages if they are split
individually, in order to make sure that all the required resources are present.
Technically we could follow the resource chain and write %%IncludeResource comments,
at the page level at least, but this is probably more effort than it is realistically
worth.
Still need to add some more DSC comment types, and run some extensive testing.
No differences expected currently. Minimal testing with GSView suggests that the output
so far is DSC-compliant as-is.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11946 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11944 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
We now pass around the 'type' of an object much more when writing. This is so that
we can emit "%%BeginResource/%%EndResource" comment pairs around the resources we write.
It is also required so that we *don't* write these comments around pages.
The code now emits %%BeginProlog, then writes the opdfread.ps procedure. It then writes
all the various resources used in the document, each with a reasonable DSC comment. Then
it writes %%EndProlog. After this come the page descriptions, each is written with a
%%Page: comment and a %%PageTrailer. Finally we write the %%Trailer, %%Pages
comment (NB we write %%Pages: (atend) in the header comments as we don't know how many
pages there will be until the end) and %%EOF.
The resources are mostly defined as being of type 'file', as most of them are not normal
PostScript resources. The DSC specification says under the %%BeginResource definition
(file note on p72) "The enclosed segment is a fragment of PostScript language code or
some other item that does not fall within the other resource categories" and so this
seems the best type to use for our purposes.
The output is now minimally DSC compliant, though there are a few other comments I'd
like to add if possible. Given the way the file is created we are always going to have a
large prolog, and that will need to be copied to all the pages if they are split
individually, in order to make sure that all the required resources are present.
Technically we could follow the resource chain and write %%IncludeResource comments,
at the page level at least, but this is probably more effort than it is realistically
worth.
Still need to add some more comments, and run some extensive testing.
No differences expected currently. Minimal testing with GSView suggests that the output
so far is DSC-compliant as-is.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11941 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Bug #691779 "SegFault with pdfwrite and more than one cid font"
pdfwrite does lazy creation of Identity ToUnicode CMaps for inclusion in output PDF
files (2 CMaps, one for horizontal and one for vertical writing). These pointers were
not marked for the garbage collector, but were stored directly in the pdf device
structure.
The CMaps were assigned to a pdfont resource type, where the pointer to the CMap *was*
marked for the garbage collector. This meant that if the pdfont resource was moved as
a result of garbage collection, the CMap could be moved as well. This left a dangling
pointer in the device structure.
If another font resource required an identity CMap then the now garbage pointer from
the device structure would be assigned. If the new font resource was moved as a result
of garbage collection, then the attempt to relocate the CMap would fail and cause a
crash.
Fixed by marking the pointers in the device structure for the garbage collector.
No differences expected.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11920 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Because of the way that PCL draws bitmap fonts directly to the cache there is no
possibility of making uncached glyphs work properly. Also the code for cached glyphs is
much too forgiving and attempts to add glyphs which cannot be handled. Finally there is
no provision for type 3 fonts with non-identity matrices. Because the bitmaps in the
cache already have the scaling/rotation/shearing and clipping applied, we cannot have
a type 3 font with a non-identity matrix.
The code will be revised and recommitted.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11908 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
In general pdfwrite only resorts to making a bitmap from a font when it cannot handle
the original font type, which is rare for PostScript, PDF and XPS. However all PCL
bitmap fonts are handled this way.
When this happens, the bitmap is stored into a general type 3 font, a 'bucket' where all
such glyphs are stored. When this font is full, a new one is started and so on. The text
stored in the PDF page stream references the correct type 3 font, but usually the
character code will be unrelated to the original character code.
For PCL bitmap fonts pdfwrite actually starts by creating a type 3 font to hold the
PCL bitmaps, but doesn't use it. This patch tries to store the bitmaps in the type
3 font where possible, using the character code from the original PCL document.
Although this will not create searchable text in the general case, it does seem that
there are a good number of PCL documents which do use an ASCII encoding and so will
produce a searchable PDF file.
There are 3 parts to this enhancement:
1) Cached glyphs. When the current font is a type 3 font, and the text operation is
one which might result in an ASCII character code, and we can manufacture a glyph name
for the resulting character code, store the glyph in the type 3 font (rather than the
general 'bucket' font), using the character code and glyph name. Glyphs which can't
be handled this way for any reason are still stored in the general recipient 'bucket'
font.
2) Uncached glyphs. Glyphs which are too large for the cache are rendered as images. The
image handling code has been extensively reworked to try and detect this situation and,
if the criteria for cached glyphs above also holds true, to store the image as a glyph
in a type 3 font and draw text in the PDF content stream instead of an image. Images
which do not fulfil these criteria are still handled as images.
3) Recached glyphs. If the glyph cache fills up, glyphs will be flushed to make space.
If a glyph is then reused we go through the caching case again (for large glyphs which
are uncached we end up repeating the code every time the glyph is used). We now attempt
to spot this by determining that the glyph in the font has already been used, and rather
than storing a new copy of the glyph, as the old code did, we simply emit text into
the page content stream.
Note that there is a recommendation that inline images in PDF should not exceed 4KB.
Since CharProcs must use inline images, bitmaps which exceed this size will be rendered
as images, not text (they will also exceed the cache size and so are always rendered
uncached).
Expected Differences
A number of PCL files exhibit small differences at low resolution (75 dpi). These are
either: one-pixel shifts in size or position, due to the old code rendering an image with
a single matrix and the new code rendering text using two matrices with the attendant
loss of precision; or an 'emboldening' effect which seems to be due to the rendering
code treating a bitmap in a glyph differently to an image.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11901 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
to control
whether the output of ps2write is DSC-compliant or not.
No differences expected.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11827 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Bug #691556 "Images compressed with the RunLengthDecode filter are invalid" A typo in
gdevpdfx.h caused the /Filter entry of an image dictionary to be written with a
trailing comma if the filter was RunLength.
No differences expected
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11632 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11587 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11586 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Bug #691352 "cairo pdf mis-distilled"
Patterns in PDF are unpleasantly complicated by the need to transform the pattern to
the 'default co-ordinate space'. Usually this means that we undo the resolution
scaling which is normally applied to the CTM.
For page streams this works well, but for forms the 'default co-ordinate space' is
the space of the parent. For one level of form there is therefore no difference between
the page and the form. When forms are nested however, the lower form's space becomes
that of the parent. This means that patterns inside forms, which are nested inside
another form, need to be transformed to the parent form co-ordinate space, not the
page space.
Since we don't currently emit forms from pdfwrite for anything except transparency
groups what this means in practice is that we don't undo the CTM transformation if
we are rendering a pattern inside a form, nested inside at least one other form.
Expected Differences
These files show progressions with this change:
Bug690831.pdf
Bug690208.pdf
Bug690206.pdf
Bug690115.pdf
Catx5233.pdf
These files show progressions, but also show regressions or are still not properly
converted to PDF:
Bug688807.pdf
Bug689918.pdf
This does not conclude the work for bug #691352, and the two files which still exhibit
issues also require further work.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@11347 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
PDFA on
setpagedevice
Details
Bug #690803
The PDFA switch is stored in the page device dictionary, and the device member is reset
from the stored setting on each setpagedevice. This was defeating the
PDFACompatibilityPolicy setting which tried to disable PDF/A production when the input
could not be converted to a compliant PDF/A file.
Because the PostScript value is subject to save and restore, it's all but impossible to
alter the PostScript saved value. So added a new device member 'AbortPDFAX' which can
only be set from C. This is set when non-compliant input is detected and prevents the
emission of a (currently) PDF/A file (may add PDF/X in future). It does this by resetting
the device PDFA member every time the device parameters are set.
Expected Differences
None
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@10141 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
control PDF/A
creation when encountering invalid content.
Details
Bug #690500 "gs 8.63. Option dPDFA generating no pdf/a compliant files because of F
key."
In the reported issue the input PDF file contains a Link annotation with no /F (Flags)
key defined. PDF/A states that only annotations with the 'Print' bit of the Flags field
are permitted. pdfwrite was emitting the annotation, thus creating an invalid PDF/A
file.
The new switch PDFACompatibilityPolicy allows the user to select the behaviour for these
kinds of events. The value defaults to 0, where the behaviour matches Acrobat: the file
is created, but is not PDF/A compliant, and a warning to this effect is given. When set to
1, the content will be omitted, resulting in a compliant PDF/A file; a warning is
given for each piece of omitted content. Values other than 0 or 1 are treated as 0 and
a warning is given that the value is not understood.
Currently only implemented for annotations, it is expected this will be extended over
time.
Expected Differences
None.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@9876 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
DeviceN spaces to their alternate space.
Details:
bug #690582 "Disabel /Separation and /DeviceN colour space preservation"
When creating a colour space for output, allow the user to insist that Separation and
DeviceN spaces be converted to their alternate space.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@9823 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Details
Bug #690430 "PCL to PDF creates non-searchable text with incorrect font bbox"
There were several changes which needed to be made here, in particular
1) Prevent the PDF text routines updating the text state when using type 3 bitmap
fonts, as this caused each glyph to be emitted individually.
2) Change the way bitmap glyphs are recorded and drawn; previously each glyph origin
was the top of the glyph, which meant that different-height glyphs had effectively
differing baselines. The need to shift the y position caused glyphs to appear on
different PDF 'lines', which prevented searching.
3) Hone some internal heuristics which prevented large kerning values and caused glyphs
to be placed individually.
Expected Differences
A number of PostScript and PDF files exhibit slight shifts in glyph position, 1 pixel
x co-ordinate shifts at certain (especially low) resolutions. Also at low resolution
some glyphs are rendered (subsampling the bitmap) slightly differently.
"j:\tests\000368.pdf"
"j:\tests\09-34.PS"
"j:\tests\093-01.ps"
"j:\tests\13-01.PS"
"j:\tests\13-02.PS"
"j:\tests\13-22.PS"
"j:\tests\13-26.PS"
"j:\tests\13-27.PS"
"j:\tests\13-28.PS"
"j:\tests\16-15.PS"
"j:\tests\2004-02-26_Poro_melanin_v2_gradient_2.pdf"
"j:\tests\23-33.PS"
"j:\tests\30-11.PS"
"j:\tests\405-01.ps"
"j:\tests\450-01.ps"
"j:\tests\483-05 (2).ps"
"j:\tests\483-05-fixed.ps"
"j:\tests\483-05.ps"
"j:\tests\Altona.Page_3.2002-09-27.pdf"
"j:\tests\Altona.pdf"
"j:\tests\AltonaTech_1v2_pt2com_x3.pdf"
"j:\tests\Altona_Technical_1v1_x3.pdf"
"j:\tests\altona_technical_1v2_x3.pdf"
"j:\tests\Altona_Technical_x3.pdf"
"j:\tests\Bug687242.ps"
"j:\tests\Bug687603.ps"
"j:\tests\Bug687660a.ps"
"j:\tests\Bug687698.ps"
"j:\tests\Drive Slowly_3x2 Page 1.pdf"
"j:\tests\FT601DP1.PDF"
"j:\tests\J1-2P.prn"
"j:\tests\japan.ps"
"j:\tests\RipOptionen.pdf"
"j:\tests\rose1.ps"
"j:\tests\ww2.pdf"
"j:\tests\ww3.pdf"
"j:\tests\zeh_test_300dpi-unc-2.pdf"
A *very* large number of PCL files show the same 1-pixel shift or slight rendering
differences. For some reason 30-01.bin has improved (p24 renders correctly now).
There are too many PCL files exhibiting differences to list them individually.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@9804 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
New flag
'DoNumCopies' added.
Details
Bug #690355 : "pdfwrite ignores the "#copies" setting in PostScript input"
The pdfwrite device follows the behaviour of Acrobat Distiller and ignores the /#copies
and /NumCopies settings in PostScript. This is at least partly because these can't be
expected to work properly with Destination annotations specifying a page as the
destination (eg Link or Outline annotations).
However CUPS (and potentially other PDF workflow applications) may have no way of
determining that application-produced PostScript requires multiple copies to be
printed, after the file has been converted to PDF.
For the benefit of such applications a new flag 'DoNumCopies' has been added; if
present then pdfwrite will duplicate each page in the output as many times as the
current /#copies or /NumCopies setting specifies.
There is currently no provision for reordering the pages, the duplicates follow the
original immediately in page order.
Expected Differences
None
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@9615 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Details:
bug #690100 "Enhancement request: create centered output when reading PDF"
Added a new switch to the collection of media handling controls supported by ps2write. The new CenterPages switch will center the output image on the media, regardless of the size of the media (if the output is larger than the media it will be truncated, but the image will still be centered on the media).
This switch is compatible with the RotatePages control, but not with SetPageSize or FitPages. Like the other switches it can be set as an argument to ps2write when creating a PostScript file (which fixes the result) or it can be supplied to the consuming interpreter in some fashion.
Updated ps2ps2.htm with the new details.
Expected Differences
None
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@9450 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
(ps2write): media selection and %%BoundingBox generation
Details:
bug #690236 "Ghostscript is not able to convert PDF to PostScript
maintaining the input document's page sizes"
The pswrite bug is actually in the consuming applications, which are unable to
process the media selection requests in the pswrite output and instead use DSC
comments to select the media. pswrite attempts to use the bbox device to generate a
%%BoundingBox comment, but because this is a high-level device no marks are made, so
the BoundingBox is 0 0 0 0.
Fixed by using the media size instead if this case is detected.
ps2write did not generate a %%BoundingBox, or any other DSC comment at all, because it
does not produce DSC PostScript (there is a request to address this already #690064).
Also the media selection in the ps2write output is disabled by default, forcing users
to find some means to set particular keys in userdict on the target device.
Fixed by emitting a %%BoundingBox equal to the media size of the first page (NB as
this is not DSC, other pages will be selected incorrectly if this is all the
application uses). Also allow the use of the /SetPageSize, /RotatePages and /FitPages
keys during ps2write processing to emit a PostScript file with these keys already
set, thus allowing media selection to take place without further user intervention.
Expected Differences
None
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@9373 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Details:
The new fillpage device method can be used to detect occurrences of erasepage more reliably (and probably more cheaply in performance terms) than the previous checks in the fill_rectangle methods.
Removed the old code which checked the rectangle size against the page size, the colour against white, and the transparency stack depth, from the fill_rectangle methods. Added a new routine gdev_pdf_fillpage to implement the new fillpage device method.
Expected Differences
None
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@9290 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
Details:
The duplicate (opaque pointer) typedef in gdevpdfx.h was
redundant and any users that dereferenced members or allocated
the struct on the stack already included sarc4.h.
I believe the idea here was just to hide the structure details
from some hypothetical user of the pdf_crypt api. However, the
stream state structure should either be opaque, with only the
typedef in sarc4.h or it should be public and using it requires
sarc4.h. The duplication isn't justified in either case.
Since stream state structures generally are public so they can
be stack-allocated, we decided to remove the duplicate typedef.
Also adds a missing dependency to the makefile.
Issue originally reported by the coverity hfa checker.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@9210 a1074d23-0009-0410-80fe-cf8c14f379e6
|
|
PSSRC files are now in 'gs/psi'.
GLSRC files are now in 'gs/base'.
This is to facilitate build modularization and merging in the ghostpdl
tree.
NOTE: msvc32.mak is now in psi, not src.
git-svn-id: http://svn.ghostscript.com/ghostscript/trunk@9048 a1074d23-0009-0410-80fe-cf8c14f379e6
|