hadoop-common-user mailing list archives

From Joshua Smith <j...@rationalpi.com>
Subject Re: question about processing large zip
Date Mon, 26 Mar 2012 01:19:20 GMT
As I understand it, zip isn't a splittable format. You might consider using
bzip2 or another splittable compression format instead.
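For what it's worth, once the input is bzip2 the job side needs nothing
special, since the framework handles the splitting. A minimal sketch (the
paths and class name here are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Bzip2Driver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "process-bz2");
    job.setJarByClass(Bzip2Driver.class);
    // BZip2Codec is splittable, so even a single large .bz2 file yields
    // one map task per HDFS block instead of one map for the whole file.
    FileInputFormat.addInputPath(job, new Path("/data/archive.bz2"));
    FileOutputFormat.setOutputPath(job, new Path("/data/out"));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}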

Alternatively, you could have one job that does the decompression, chained
to another that does the processing, to get the parallelization. A rough
sketch of that chain is below.
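Something along these lines, where UnzipMapper, ImageMapper, and
ReportReducer are placeholders for your own classes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ZipChainDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Job 1: map-only pass that reads the zip entries (via your custom
    // input format) and writes them back out in an uncompressed or
    // splittable form on HDFS.
    Job unzip = Job.getInstance(conf, "unzip");
    unzip.setJarByClass(ZipChainDriver.class);
    unzip.setMapperClass(UnzipMapper.class);   // placeholder
    unzip.setNumReduceTasks(0);                // map-only, no shuffle
    FileInputFormat.addInputPath(unzip, new Path("/data/images.zip"));
    FileOutputFormat.setOutputPath(unzip, new Path("/data/unzipped"));
    if (!unzip.waitForCompletion(true)) System.exit(1);

    // Job 2: the actual processing, now parallel across the expanded
    // data, with the report built in the reduce as before.
    Job process = Job.getInstance(conf, "process");
    process.setJarByClass(ZipChainDriver.class);
    process.setMapperClass(ImageMapper.class);     // placeholder
    process.setReducerClass(ReportReducer.class);  // placeholder
    FileInputFormat.addInputPath(process, new Path("/data/unzipped"));
    FileOutputFormat.setOutputPath(process, new Path("/data/report"));
    System.exit(process.waitForCompletion(true) ? 0 : 1);
  }
}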
On Mar 19, 2012 8:26 PM, "Andrew McNair" <andrew.mcnair@gmail.com> wrote:

> Hi,
> I have a large (~300 gig) zip of images that I need to process. My
> current workflow is to copy the zip to HDFS, use a custom input format
> to read the zip entries, do the processing in a map, and then generate
> a processing report in the reduce. I'm struggling to tune params right
> now with my cluster to make everything run smoothly, but I'm also
> worried that I'm missing a better way of processing.
> Does anybody have suggestions for how to make the processing of a zip
> more parallel? The only other idea I had was uploading the zip as a
> sequence file, but that proved incredibly slow (~30 hours on my 3 node
> cluster to upload).
> Thanks in advance.
> -Andrew
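
For reference, the zip-to-SequenceFile conversion you describe is
inherently a single serial pass over the archive, which is why the upload
was so slow. A rough sketch of that conversion, with Text keys and
BytesWritable values as one reasonable choice:

import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class ZipToSequenceFile {
  // args[0] = local zip path, args[1] = HDFS output path
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (ZipInputStream zip =
             new ZipInputStream(new FileInputStream(args[0]));
         SequenceFile.Writer writer = SequenceFile.createWriter(conf,
             SequenceFile.Writer.file(new Path(args[1])),
             SequenceFile.Writer.keyClass(Text.class),
             SequenceFile.Writer.valueClass(BytesWritable.class))) {
      ZipEntry entry;
      byte[] buf = new byte[64 * 1024];
      while ((entry = zip.getNextEntry()) != null) {
        if (entry.isDirectory()) continue;
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        int n;
        while ((n = zip.read(buf)) != -1) bytes.write(buf, 0, n);
        // One record per image: entry name as key, raw bytes as value.
        writer.append(new Text(entry.getName()),
                      new BytesWritable(bytes.toByteArray()));
      }
    }
  }
}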
