From: Ulul <hadoop@ulul.org>
Date: Mon, 06 Oct 2014 22:11:48 +0200
To: user@hadoop.apache.org
Subject: Re: Reduce phase of wordcount
Nice !

mapred.reduce.tasks affects the whole job (the group of tasks), so it should be at least equal to mapred.tasktracker.reduce.tasks.maximum * <number of nodes>.
With your setup you allow each of your 7 tasktrackers to run 8 reducers (56 in total), but you cap the total number of reducers for the job at 7...
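For instance, a minimal sketch of raising the per-job reducer count in the driver (assumptions: the mapreduce API, 56 = 7 tasktrackers * 8 reduce slots from your setup, and the class name is made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReducerCountSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Per-job reducer count, Hadoop 1 property name
        // (same key as passing -D mapred.reduce.tasks=56 on the command line).
        conf.setInt("mapred.reduce.tasks", 56);
        Job job = new Job(conf, "wordcount");
        // Equivalent, and more explicit, via the Job API:
        job.setNumReduceTasks(56);
        // mapred.tasktracker.reduce.tasks.maximum is a tasktracker-side limit
        // (mapred-site.xml on each node), not something you set per job.
    }
}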

Combiners are very effective at limiting shuffle overhead, and for a job like wordcount you can just reuse the reduce class.
Just add something like job.setCombinerClass(MyReducer.class);
to your driver and you're good
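For reference, a self-contained sketch of such a driver against the Hadoop 1.x mapreduce API (class names like WordCount, TokenizerMapper and IntSumReducer are only illustrative, mirroring the stock example):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every token in the input line.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for one word.
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");   // Hadoop 1.x style constructor
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        // Reuse the reducer as the combiner: map output gets pre-aggregated
        // locally, which shrinks what the shuffle has to transfer.
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

This works because summing counts is associative and commutative; a reduce function that isn't (an average, say) could not be reused as a combiner without changing the results.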

Ulul

On 06/10/2014 21:18, Renato Moutinho wrote:
Hi folks,

just as feedback: increasing mapred.tasktracker.reduce.tasks.maximum had no effect (it was already set to 8) and the job created only 1 reducer (my original scenario). However, adding mapred.reduce.tasks and setting it to a value higher than 1 (I've set it to 7) made hadoop spawn that many reduce tasks (seven in my example), and the execution time went down to around 29 minutes (also, my servers are now frying cpu....) ! My next step (I'm pushing it to the maximum) is adding a combiner..

And no... I haven't set up this cluster just for running wordcount. Kkkkkkkk.... I'm still getting to know hadoop. :-)
Thanks a lot for your help !

Regards,

Renato Moutinho

2014-10-05 18:53 GMT-03:00 Ulul <hadoop@ulul.org>:
Hi

You indicate that you have just one reducer, which is the default in Hadoop 1 but quite insufficient for a cluster with 7 slave nodes.
You should increase mapred.reduce.tasks, use combiners, and maybe tune
mapred.tasktracker.reduce.tasks.maximum

Hope that helps
Ulul

On 05/10/2014 16:53, Renato Moutinho wrote:
Hi there,

thanks a lot for taking the time to answer me ! Actually, this "issue" happens after all the map tasks have completed (I'm looking at the web interface). I'll try to diagnose whether it's an issue with the number of threads.. I suppose I'll have to change the logging configuration to find out what's going on..

The only thing that's getting to me is the fact that the lines are repeated in the log..

Regards,

Renato Moutinho



On 05/10/2014, at 10:52, java8964 <java8964@hotmail.com> wrote:

Don't be confused by 6.03 MB/s.

The relationship between mappers and reducers is M to N, which means each mapper can send its data to all reducers, and one reducer can receive its input from all mappers.

There could be a lot of reasons why you think the reduce copying phase is too slow. It could be that the mappers are still running, so there is no data for the reducer to copy yet; or there are not enough threads in either the mapper or the reducer to use the remaining cpu/memory/network bandwidth. You can google the hadoop configurations to adjust them.

But just because you can get 60M/s with scp, complaining about only getting 6M/s in the log is not fair to hadoop. Your one reducer needs to copy data from all the mappers concurrently, which makes it impossible to reach the same speed as a one-to-one point-to-point network transfer.

The reduce stage is normally longer than the map stage, as data HAS to be transferred over the network.

But in the word count example, the data that needs to be transferred should be very small. You can ask yourself the following questions:

1) Should I use a combiner in this case? (Yes, for word count, it reduces the data that needs to be transferred.)
2) Am I using all the reducers I can, given that my cluster is underutilized and I want my job to finish fast?
3) Can I add more threads in the task tracker to help? You need to dig into your logs to find out whether your mappers or reducers are waiting for a thread from the thread pool (see the sketch just below).
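For point 3, a hedged sketch of the Hadoop 1 knobs involved (the class name and the value are only illustrative, not recommendations):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ShuffleThreadsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Reducer-side fetch parallelism: how many map outputs one reduce task
        // copies concurrently (Hadoop 1 default is 5). This is a per-job setting.
        conf.setInt("mapred.reduce.parallel.copies", 10);
        Job job = new Job(conf, "wordcount");
        job.setNumReduceTasks(7);
        // ... the rest of the driver (mapper, reducer, combiner, paths) as above ...
        // Note: the server side of the shuffle, tasktracker.http.threads, is a
        // daemon-level setting in mapred-site.xml (tasktracker restart needed);
        // it cannot be changed per job.
    }
}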

Yong


Date: Fri, 3 Oct 2014 18:40:16 -0300
Subject: Reduce phase of wordcount
From: renato.moutinho@gmail.com
To: user@hadoop.apache.org

Hi people,

I'm doing some experiments with hadoop 1.2.1, running the wordcount sample on an 8-node cluster (master + 7 slaves). Tuning the task configuration I've been able to make the map phase run in 22 minutes.. However the reduce phase (which consists of a single reduce task) gets stuck at some points, making the whole job take more than 40 minutes. Looking at the logs, I've seen several lines stuck at copy at different moments, like this:

2014-10-03 18:26:34,717 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201408281149_0019_r_000000_0 0.3302721% reduce > copy (971 of 980 at 6.03 MB/s) >
2014-10-03 18:26:37,736 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201408281149_0019_r_000000_0 0.3302721% reduce > copy (971 of 980 at 6.03 MB/s) >
2014-10-03 18:26:40,754 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201408281149_0019_r_000000_0 0.3302721% reduce > copy (971 of 980 at 6.03 MB/s) >
2014-10-03 18:26:43,772 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201408281149_0019_r_000000_0 0.3302721% reduce > copy (971 of 980 at 6.03 MB/s) >

Eventually the job ends, but this information, being repeated, makes me think it's having difficulty transferring the parts from the map nodes. Is my interpretation correct on this ? The transfer rate is waaay too slow compared to an scp file transfer between the hosts (10 times slower). Any takes on why ?

Regards,

Renato Moutinho


