hadoop-user mailing list archives

From Subho Banerjee <subs.z...@gmail.com>
Subject Re: Hadoop in Pseudo-Distributed
Date Mon, 13 Aug 2012 07:07:31 GMT
Yes, I have already done that, as I mentioned in my previous mail, yet I am
still having the problem.
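
For reference, the hadoop-env.sh workaround from HADOOP-7489 is, as far as I
can tell, along these lines; the key point is defining the two krb5 system
properties so the Mac OS X JVM stops consulting SCDynamicStore (a sketch, not
the exact lines from the JIRA):

    # hadoop-env.sh (sketch of the HADOOP-7489 workaround)
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="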

I was just trying something and came across some really weird behaviour: the
moment I disconnect from my local network (unplug my LAN cable) and run the
job, it works just fine. But when I am connected to my network, it gives me
the error listed above.
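
One thing I still want to rule out (just a guess on my part) is that the
machine's hostname resolves to the LAN address (10.1.66.17) only while the
cable is plugged in, so the reducers try to fetch map output from an address
they cannot reach. A quick check would be something like:

    # hypothetical check: compare what the hostname resolves to with and
    # without the LAN cable plugged in
    hostname
    ping -c 1 $(hostname)

    # if the resolved address changes with the network, one possible
    # workaround is to pin the hostname to the loopback address
    sudo sh -c 'echo "127.0.0.1   $(hostname)" >> /etc/hosts'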


On Mon, Aug 13, 2012 at 11:11 AM, Devaraj k <devaraj.k@huawei.com> wrote:

>  Can you go through this issue:
> https://issues.apache.org/jira/browse/HADOOP-7489? The problem is discussed
> there and some workarounds are provided.
>
> Thanks
>
> Devaraj
>  ------------------------------
> *From:* Subho Banerjee [subs.zero@gmail.com]
> *Sent:* Monday, August 13, 2012 10:47 AM
> *To:* user@hadoop.apache.org
> *Subject:* Hadoop in Pseudo-Distributed
>
>    Hello,
>
> I am running Hadoop v1.0.3 on Mac OS X 10.8 with Java 1.6.0_33-b03-424.
>
>
>  When running Hadoop in pseudo-distributed mode, the map phase seems to work,
> but the reduce never gets computed.
>
> 12/08/13 08:58:12 INFO mapred.JobClient: Running job: job_201208130857_0001
> 12/08/13 08:58:13 INFO mapred.JobClient: map 0% reduce 0%
> 12/08/13 08:58:27 INFO mapred.JobClient: map 20% reduce 0%
> 12/08/13 08:58:33 INFO mapred.JobClient: map 30% reduce 0%
> 12/08/13 08:58:36 INFO mapred.JobClient: map 40% reduce 0%
> 12/08/13 08:58:39 INFO mapred.JobClient: map 50% reduce 0%
> 12/08/13 08:58:42 INFO mapred.JobClient: map 60% reduce 0%
> 12/08/13 08:58:45 INFO mapred.JobClient: map 70% reduce 0%
> 12/08/13 08:58:48 INFO mapred.JobClient: map 80% reduce 0%
> 12/08/13 08:58:51 INFO mapred.JobClient: map 90% reduce 0%
> 12/08/13 08:58:54 INFO mapred.JobClient: map 100% reduce 0%
> 12/08/13 08:59:14 INFO mapred.JobClient: Task Id :
> attempt_201208130857_0001_m_000000_0, Status : FAILED
> Too many fetch-failures
> 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer
> returned HTTP response code: 403 for URL:
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stdout
> 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer
> returned HTTP response code: 403 for URL:
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stderr
> 12/08/13 08:59:18 INFO mapred.JobClient: map 89% reduce 0%
> 12/08/13 08:59:21 INFO mapred.JobClient: map 100% reduce 0%
> 12/08/13 09:00:14 INFO mapred.JobClient: Task Id :
> attempt_201208130857_0001_m_000001_0, Status : FAILED
> Too many fetch-failures
>
> Here is what I get when I try to see the tasklog using the links given in
> the output
>
>
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stderr
>  --->
> 2012-08-13 08:58:39.189 java[74092:1203] Unable to load realm info from
> SCDynamicStore
>
>
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stdout
>  --->
>
> I have changed my hadoop-env.sh according to Mathew Buckett's comment in
> https://issues.apache.org/jira/browse/HADOOP-7489
>
> Also, the 'Unable to load realm info from SCDynamicStore' error does not
> show up when I run 'hadoop namenode -format' or 'start-all.sh'.
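>
> (One guess, since the warning shows up only in the task attempt logs: in
> Hadoop 1.x the child task JVMs are launched with mapred.child.java.opts
> rather than HADOOP_OPTS, so the krb5 properties may also have to be set
> there, keeping the default -Xmx200m. This is a sketch I have not verified,
> e.g. in mapred-site.xml:)
>
> <property>
>   <name>mapred.child.java.opts</name>
>   <value>-Xmx200m -Djava.security.krb5.realm= -Djava.security.krb5.kdc=</value>
> </property>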
>
> I am also attaching a zipped copy of my logs
>
>
>  Cheers,
>
> Subho.
>
>
