hadoop-mapreduce-user mailing list archives

From bejoy.had...@gmail.com
Subject Re: Map Failure reading .gz (gzip) files
Date Tue, 15 Jan 2013 03:02:06 GMT
Hi Terry

When the file is unzipped and zipped, what is the number of map tasks running in each case?

If the file is large, I assume the following is the case.

gzip is not a splittable compression codec, so the whole file is processed by a single mapper. That lone task may not be able to handle such a large data volume gracefully, which could be why the job hangs.
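A quick way to see why gzip cannot be split (a Python sketch for illustration, not Hadoop code): a gzip stream can only be decoded from its start, so a mapper handed a split that begins mid-file would have nothing it can decompress.

```python
import gzip
import zlib

# Stand-in for a large log file.
data = b"2013-01-15 some log line\n" * 1000
compressed = gzip.compress(data)

# Decompressing from byte 0 works fine.
assert gzip.decompress(compressed) == data

# Starting at an arbitrary offset fails: there is no gzip header
# there, so an independent "split" of the file cannot be decoded.
try:
    gzip.decompress(compressed[len(compressed) // 2:])
    ok = False
except (OSError, zlib.error):
    ok = True
assert ok
```

This is the property Hadoop relies on when it decides a gzip file gets exactly one input split.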

When the file is uncompressed, the input is divided into multiple splits, so several map tasks run in parallel and each handles a manageable share of the data and processing logic.
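One common workaround (my suggestion, not something from your mail) is to recompress the logs with a splittable codec such as bzip2 before loading them into HDFS, since Hadoop can split bzip2 files across mappers. A minimal round-trip sketch with Python's stdlib, using made-up file contents:

```python
import bz2
import gzip

# Hypothetical contents standing in for a large gzipped log file.
log_bytes = b"2013-01-15 some log line\n" * 1000
gz_data = gzip.compress(log_bytes)

# Recompress: gzip in, bzip2 out. The data is unchanged; only the
# container differs, and the bzip2 container is splittable in Hadoop.
bz_data = bz2.compress(gzip.decompress(gz_data))

assert bz2.decompress(bz_data) == log_bytes  # round-trip preserved
```

On the command line the equivalent would be piping `gunzip -c` into `bzip2` before the HDFS put.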

------Original Message------
From: Terry Healy
To: user@hadoop.apache.org
ReplyTo: user@hadoop.apache.org
Subject: Map Failure reading .gz (gzip) files
Sent: Jan 15, 2013 02:55

I'm trying to run a Map-only job using .gz input format. For testing, I
have one compressed log file in the input directory. If the file is
un-zipped, the code works fine.

Watching the jobs with .gz input via the job tracker shows that the
mapper apparently has read the correct number of records (880,000), and
it reports 195,357 map output records just as it does if the input file
is un-zipped. But it then hangs until I finally kill the job.

Any ideas what I'm missing?



Bejoy KS

Sent from remote device, Please excuse typos