xmlgraphics-fop-users mailing list archives

From Mike Ferrando <mikeferra...@yahoo.com>
Subject Re: FOP character mapping problems
Date Thu, 01 May 2003 14:22:38 GMT
Jeremias,
Thanks for your reply.

I have read a lot of these technical note PDF files. I have some
questions.

1. If I do change my Adobe Reader so that it can process the TTF
into Unicode, will others who use the downloaded version of Adobe
Reader be able to see my characters correctly? (I presume not.) At
present I can read, and cut and paste, extended character sets from
PDF documents at the RenderX site without changing my Adobe Reader
at all. (see: charents.pdf)
http://www.renderx.com/testcases.html

2. I would be very interested in using this method (ToUnicode) to
embed the font into my document if the character encoding would also
be embedded, not just the glyphs. However, the instructions were not
clear about which files were to be "changed" and where those files
were to be placed in the Adobe program folders. Further, it was not
clear whether the result of making this change would be local only
or otherwise (see 1 above).
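For what it's worth, the ToUnicode table is just a small CMap stream
attached to the font object inside the PDF. A minimal sketch (the
one-byte code space and the glyph code <49> are made up for
illustration; the one real mapping shown is glyph code to U+012B,
the &#299; character I keep testing with):

```
/CIDInit /ProcSet findresource begin
12 dict begin
begincmap
/CIDSystemInfo << /Registry (Adobe) /Ordering (UCS) /Supplement 0 >> def
/CMapName /Adobe-Identity-UCS def
/CMapType 2 def
1 begincodespacerange
<00> <FF>
endcodespacerange
% map the font's internal glyph code to its Unicode value
1 beginbfchar
<49> <012B>
endbfchar
endcmap
CMapName currentdict /CMap defineresource pop
end
end
```

With a table like this embedded, a viewer can map the font's internal
glyph codes back to Unicode, which is what makes search and
cut-and-paste work without any change to the reader's local setup.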

3. I would be very interested in a walk-through or talk-through if I
could be sure that I would have embedded character encodings as a
result of running FOP. I would even pay for a class on this if
someone gave one. The literature is hardly clear to a user like me.

4. At present I am only able to embed character encodings into my
PDF documents with FOP by using PFM/PFB fonts. (Yes, TTF fonts do
appear correctly when extended characters are needed, but there is
no encoding, just glyphs.) So now I am on the lookout for PFM/PFB
fonts that include characters beyond Latin-1. I can transform my XML
using XSL and create an array of NCRs. This will become a merge
document for my XSL-FO stylesheet to call up the particular fonts
and place each character into an <fo:inline
font-family="not-latin-1">&#299;</fo:inline>. Calling up the correct
fonts will then only be a matter of writing an XSL stylesheet to
pull from the FO document and create the userconfig.xml for my
conversions. IOW I can work around the need for many different fonts
so that all my extended characters will have encodings embedded in
the rendered PDF document. If there is an easier way, I would like
to know of it (hence, ToUnicode).
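For reference, this is roughly what I mean by generating the
userconfig.xml. A sketch in the FOP 0.20.x userconfig format (the
file names arialuni.ttf and arialuni.xml are placeholders for
whatever font actually carries the non-Latin-1 glyphs; the metrics
file is generated beforehand with FOP's TTFReader):

```xml
<configuration>
  <fonts>
    <!-- metrics file generated beforehand, e.g.:
         java -cp fop.jar org.apache.fop.fonts.apps.TTFReader
              arialuni.ttf arialuni.xml -->
    <font metrics-file="arialuni.xml" kerning="yes"
          embed-file="arialuni.ttf">
      <!-- the triplet name is what fo:inline font-family refers to -->
      <font-triplet name="not-latin-1" style="normal" weight="normal"/>
    </font>
  </fonts>
</configuration>
```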

All this basically comes down to the need to embed the encoding into
the PDF document, not just the glyphs. I have little interest in
changing my Reader locally just so that I can see TTF fonts.

Any suggestions?

Mike Ferrando
Washington, DC


--- Jeremias Maerki <dev.jeremias@greenmail.ch> wrote:
> I think this is the same problem as the one of Mark Dudley. It's
> simply the missing ToUnicode feature in FOP. Therefore I have no
> good suggestion right now other than encouraging interested parties
> to see to implementing ToUnicode tables.
> 
> On 25.04.2003 00:23:33 Mike Ferrando wrote:
> <snip/>
> > When the document is open in Acrobat 5, I try to search for words
> > that appear in the Arial font. I get no results. Nothing is found
> > by the Acrobat search tool. However, if I transform all text in
> > the Base font (Times), and only the one character (&#299;) in the
> > "Arial" font, I can find the whole word up to that character.
> 
> <snip/> 
> 
> > Any suggestions?
> 
> 
> Jeremias Maerki
> 
> 



---------------------------------------------------------------------
To unsubscribe, e-mail: fop-user-unsubscribe@xml.apache.org
For additional commands, e-mail: fop-user-help@xml.apache.org

