hadoop-common-user mailing list archives

From Mark Kerzner <mark.kerz...@shmsoft.com>
Subject Re: Can't achieve load distribution
Date Thu, 02 Feb 2012 01:06:24 GMT
Anil,

do you mean one block of HDFS, like 64MB?

Mark

On Wed, Feb 1, 2012 at 7:03 PM, Anil Gupta <anilgupta84@gmail.com> wrote:

> Do you have enough data to start more than one mapper?
> If the entire input is smaller than one block, only 1 mapper will run.
>
> Best Regards,
> Anil
>
> On Feb 1, 2012, at 4:21 PM, Mark Kerzner <mark.kerzner@shmsoft.com> wrote:
>
> > Hi,
> >
> > I have a simple MR job, and I want each Mapper to get one line from my
> > input file (which contains further instructions for lengthy processing).
> > Each line is 100 characters long, and I tell Hadoop to read only 100
> bytes,
> >
> >
> job.getConfiguration().setInt("mapreduce.input.linerecordreader.line.maxlength",
> > 100);
> >
> > I see that this part works - it reads only one line at a time, and if I
> > change this parameter, it listens.
> >
> > However, on a cluster only one node receives all the map tasks. Only one
> > map task is started; the others never get anything, they just wait. I've
> > added a 100-second wait to the mapper - no change!
> >
> > Any advice?
> >
> > Thank you. Sincerely,
> > Mark
>
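[Editor's note: the `maxlength` setting above only caps how many bytes a single
record may contain; it does not change how the input is split into map tasks.
A common way to get exactly one input line per mapper, as the thread is trying
to do, is Hadoop's NLineInputFormat. A minimal sketch follows, assuming the
new (org.apache.hadoop.mapreduce) API; the class name OneLinePerMapper and the
mapper/reducer wiring elided at the end are placeholders, not from the thread.]

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

// Sketch of a driver that makes each map task process exactly one input line.
public class OneLinePerMapper {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "one-line-per-mapper");

        // NLineInputFormat splits the input by line count, not by block size,
        // so a file smaller than one HDFS block still yields many splits.
        job.setInputFormatClass(NLineInputFormat.class);
        NLineInputFormat.setNumLinesPerSplit(job, 1);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        // ... set mapper, reducer, and output types as usual ...
    }
}
```

With one line per split, each 100-character instruction line becomes its own
map task, which the scheduler can then distribute across the cluster's nodes.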
