hadoop-common-user mailing list archives

From Mark Kerzner <markkerz...@gmail.com>
Subject Re: Hadoop with many input/output files?
Date Thu, 22 Jan 2009 15:23:29 GMT
I have a very similar question: how do I recursively list all the files in a
given directory, so that every file gets processed by MapReduce? And if, say,
I just copy them to the output, is there any problem with dropping them all
into the same output directory in HDFS? To pick a bad example, Windows
chokes on many files in one directory.
Thank you,

On Thu, Jan 22, 2009 at 8:28 AM, Zak, Richard [USA] <zak_richard@bah.com> wrote:

> I am looking at the MultiFileInputFormat and MultipleOutputFormat
> Input/Output formats for the Job configuration.  How can I properly use
> them?  I had previously used the default Input and Output Format types,
> which, for my PDF concatenation project, merely reduced Hadoop to a
> scheduler.
> The idea is, per directory, to concatenate all the PDFs in that directory
> into one PDF, and for this I'm using iText.
> How can I use these Format types?  What would my input to the mapper be,
> and what would my InputKeyValue and OutputKeyValue classes be?
> Thank you!  I can't find documentation on these beyond the Javadoc, which
> doesn't help much.
> Richard J. Zak
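
MultiFileInputFormat in the old mapred API is abstract, so the usual route is
to subclass it and supply a RecordReader that hands each map() call one whole
file, say (file path, file bytes), which suits whole-PDF handling. A rough,
untested sketch, with made-up class names:

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MultiFileInputFormat;
import org.apache.hadoop.mapred.MultiFileSplit;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

// Packs many small files into each split; every record is one whole file.
public class WholeFileInputFormat
        extends MultiFileInputFormat<Text, BytesWritable> {

    @Override
    public RecordReader<Text, BytesWritable> getRecordReader(
            InputSplit split, JobConf job, Reporter reporter)
            throws IOException {
        return new WholeFileRecordReader((MultiFileSplit) split, job);
    }

    static class WholeFileRecordReader
            implements RecordReader<Text, BytesWritable> {

        private final MultiFileSplit split;
        private final JobConf job;
        private int index = 0;             // next file within the split

        WholeFileRecordReader(MultiFileSplit split, JobConf job) {
            this.split = split;
            this.job = job;
        }

        public boolean next(Text key, BytesWritable value) throws IOException {
            if (index >= split.getNumPaths()) {
                return false;                          // split exhausted
            }
            Path path = split.getPath(index);
            byte[] contents = new byte[(int) split.getLength(index)];
            FileSystem fs = path.getFileSystem(job);
            FSDataInputStream in = fs.open(path);
            try {
                in.readFully(0, contents);             // slurp the whole file
            } finally {
                in.close();
            }
            key.set(path.toString());
            value.set(contents, 0, contents.length);
            index++;
            return true;
        }

        public Text createKey() { return new Text(); }
        public BytesWritable createValue() { return new BytesWritable(); }
        public long getPos() { return index; }
        public float getProgress() {
            return split.getNumPaths() == 0
                    ? 1.0f : (float) index / split.getNumPaths();
        }
        public void close() { }
    }
}

With that in place, the mapper's input key/value classes would be Text and
BytesWritable, and the iText merge can run over the bytes. On the output
side, MultipleTextOutputFormat (a concrete MultipleOutputFormat) routes
records to one file per key if you override generateFileNameForKeyValue, so
each source directory's merged result can get its own output file. Again only
a sketch; Text values stand in here, and real PDF bytes would need a
binary-friendly base format:

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

// Sends every record to an output file named after its key, so records
// that share a source directory land in the same file.
public class PerDirectoryOutputFormat
        extends MultipleTextOutputFormat<Text, Text> {

    @Override
    protected String generateFileNameForKeyValue(Text key, Text value,
                                                 String name) {
        return key.toString();    // key is assumed to be the directory name
    }
}

Wiring it up is just conf.setInputFormat(WholeFileInputFormat.class) and
conf.setOutputFormat(PerDirectoryOutputFormat.class) on the JobConf.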
