xmlgraphics-fop-users mailing list archives

From Jeremias Maerki <dev.jerem...@greenmail.ch>
Subject Re: FOP character mapping problems
Date Thu, 01 May 2003 15:24:27 GMT
I may have caused a misunderstanding: there's no need to change anything
about Adobe Acrobat Reader. It's a matter of implementing the missing
functionality in FOP so that FOP creates ToUnicode CMaps in the PDF.
These ToUnicode CMaps enable Acrobat Reader to properly extract text
that uses characters beyond Latin-1/PDFDocEncoding (8-bit).

Working around the problem using special Type 1 fonts (PFM/PFB) may be a
solution, though a rather hacky and complicated one.
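
For reference, a ToUnicode CMap is a small PostScript-style stream
attached to a font object in the PDF that maps glyph codes to Unicode
values. A minimal sketch (the glyph codes and mappings below are purely
illustrative, not taken from any real font) might look like:

```postscript
/CIDInit /ProcSet findresource begin
12 dict begin
begincmap
/CIDSystemInfo << /Registry (Adobe) /Ordering (UCS) /Supplement 0 >> def
/CMapName /Adobe-Identity-UCS def
/CMapType 2 def
1 begincodespacerange
<0000> <FFFF>
endcodespacerange
2 beginbfchar
<0045> <012B>   % glyph code 0x0045 -> U+012B (i with macron)
<0046> <0101>   % glyph code 0x0046 -> U+0101 (a with macron)
endbfchar
endcmap
CMapName currentdict /CMap defineresource pop
end
end
```

With such a stream referenced from the font's /ToUnicode entry, a PDF
viewer can translate extracted glyph codes back to Unicode text for
copy-and-paste and searching.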

Please contact me off-list if you're interested in funding the
implementation of this feature.

On 01.05.2003 16:22:38 Mike Ferrando wrote:
> Jeremias,
> Thanks for your reply.
> I have read a lot of these technical note PDF files. I have some
> questions.
> 1. If I change my Adobe Reader so that it can process the TTF text
> into Unicode, will that mean that others who use the downloadable
> version of Adobe Reader will be able to see my characters correctly?
> (I presume not.) At present I can read, cut, and paste extended
> character sets from PDF documents at the RenderX site without doing
> anything to my Adobe Reader. (see: charents.pdf)
> http://www.renderx.com/testcases.html
> 2. I would be very interested in using this method (ToUnicode) to
> embed the font into my document if the character encoding would also
> be embedded, not just the glyphs. However, the instructions were not
> clear as to which files were to be "changed" and where these files
> were to be placed in the Adobe program folders. Further, it was not
> clear what the result of making this change would be, local only or
> otherwise (see 1 above).
> 3. I would be very interested in a walk through or talk through if I
> could be sure that I would have embedded character encodings as a
> result of running FOP. I would even pay for a class on doing this if
> someone gave one. The literature is hardly clear to a user like me.
> 4. At present I am only able to use PFM/PFB fonts to embed character
> encodings into my PDF documents using FOP. (Yes, TTF fonts do appear
> correctly when extended characters are needed, but there is no
> encoding, just glyphs.) Now I am on the lookout for PFM/PFB fonts
> that include characters beyond Latin 1. I can transform my XML using
> XSL and create an array of NCRs. This will become a merge document
> for my XSL-FO stylesheet to call up the particular fonts and place
> each character into an <fo:inline
> font-family="not-latin-1">&#299;</fo:inline>. Calling up the correct
> fonts will then only be a matter of writing an XSL stylesheet to pull
> from the FO document and create the userconfig.xml for my
> conversions. IOW, I can get around the need for many different fonts
> so that all my extended characters will have encodings embedded into
> the rendered PDF document. If there is an easier way, I would like to
> know of it (hence, ToUnicode).
> All this basically comes down to the need to embed the encoding into
> the PDF document, not just the glyphs. I have little interest in
> changing my Reader locally just so that I can see TTF fonts.
> Any suggestions?
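
[Archive note: the per-character approach described in point 4 above
might look like the following XSL-FO fragment. "NonLatinFont" is a
placeholder family name that would have to be registered as an embedded
PFM/PFB font in FOP's userconfig.xml; &#299; is the NCR for U+012B,
LATIN SMALL LETTER I WITH MACRON.]

```xml
<!-- Hypothetical sketch: wrap each non-Latin-1 NCR in its own inline
     so that a font known to contain the glyph (and its encoding) is
     selected just for that character. -->
<fo:block font-family="Times">
  Latin text, then
  <fo:inline font-family="NonLatinFont">&#299;</fo:inline>,
  then more Latin text.
</fo:block>
```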

Jeremias Maerki

To unsubscribe, e-mail: fop-user-unsubscribe@xml.apache.org
For additional commands, e-mail: fop-user-help@xml.apache.org
