hadoop-user mailing list archives

From Rishabh Agrawal <rishabh.agra...@impetus.co.in>
Subject RE: Hadoop in Pseudo-Distributed
Date Mon, 13 Aug 2012 10:59:38 GMT
Thanks, Harsh. I think I have resolved that issue. However, another problem has come up: after I add

fuse-dfs#dfs://localhost:8020 <mount point> fuse allow_other,usetrash,rw 2 0

to /etc/fstab and run mount <mount point>, I get: /bin/sh: fuse-dfs: not found

Any tips on that?
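The only thing I can think of trying next (the install path below is a guess for my setup, adjust as needed) is to make sure the fuse-dfs helper is on root's PATH, since mount seems to invoke it through /bin/sh:

```shell
# Find the fuse-dfs helper shipped with the Hadoop build
# (its name and location vary by install):
find / -name 'fuse_dfs*' 2>/dev/null

# Symlink it onto the PATH under the name used in fstab, so /bin/sh can
# find it when mount runs, e.g. (assuming it was found under contrib):
sudo ln -s /usr/local/hadoop/contrib/fuse-dfs/fuse_dfs_wrapper.sh /usr/local/bin/fuse-dfs
```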

-Rishabh

-----Original Message-----
From: Harsh J [mailto:harsh@cloudera.com]
Sent: Monday, August 13, 2012 2:47 PM
To: user@hadoop.apache.org
Subject: Re: Hadoop in Pseudo-Distributed

Subho,

Can you try tweaking "mapred.task.tracker.http.address" in
mapred-site.xml so that it always binds to localhost (i.e. set it to
"localhost:50060" instead of the default "0.0.0.0:50060"), and then see
if you still get this behavior?
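The mapred-site.xml entry would look something like this (restart the
TaskTracker after the change):

```xml
<property>
  <name>mapred.task.tracker.http.address</name>
  <value>localhost:50060</value>
</property>
```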

On Mon, Aug 13, 2012 at 12:37 PM, Subho Banerjee <subs.zero@gmail.com> wrote:
> Yes, I did that already, as I mentioned in my previous mail, yet I am still
> having the problem.
>
> I was just trying something and came across some really weird behaviour: the
> moment I disconnect from my local network (unplug my LAN cable) and try to
> run this, it works just fine. But when I am connected to my network, it
> gives me the error I listed above.
>
>
> On Mon, Aug 13, 2012 at 11:11 AM, Devaraj k <devaraj.k@huawei.com> wrote:
>>
>> Can you go through this issue:
>> https://issues.apache.org/jira/browse/HADOOP-7489? The problem is discussed
>> there, and some workarounds are provided.
>>
>> Thanks
>>
>> Devaraj
>>
>> ________________________________
>> From: Subho Banerjee [subs.zero@gmail.com]
>> Sent: Monday, August 13, 2012 10:47 AM
>> To: user@hadoop.apache.org
>> Subject: Hadoop in Pseudo-Distributed
>>
>> Hello,
>>
>> I am running Hadoop 1.0.3 on Mac OS X 10.8 with Java 1.6.0_33-b03-424.
>>
>>
>> When running Hadoop in pseudo-distributed mode, the map phase seems to work,
>> but the reduce phase never completes.
>>
>> 12/08/13 08:58:12 INFO mapred.JobClient: Running job:
>> job_201208130857_0001
>> 12/08/13 08:58:13 INFO mapred.JobClient: map 0% reduce 0%
>> 12/08/13 08:58:27 INFO mapred.JobClient: map 20% reduce 0%
>> 12/08/13 08:58:33 INFO mapred.JobClient: map 30% reduce 0%
>> 12/08/13 08:58:36 INFO mapred.JobClient: map 40% reduce 0%
>> 12/08/13 08:58:39 INFO mapred.JobClient: map 50% reduce 0%
>> 12/08/13 08:58:42 INFO mapred.JobClient: map 60% reduce 0%
>> 12/08/13 08:58:45 INFO mapred.JobClient: map 70% reduce 0%
>> 12/08/13 08:58:48 INFO mapred.JobClient: map 80% reduce 0%
>> 12/08/13 08:58:51 INFO mapred.JobClient: map 90% reduce 0%
>> 12/08/13 08:58:54 INFO mapred.JobClient: map 100% reduce 0%
>> 12/08/13 08:59:14 INFO mapred.JobClient: Task Id :
>> attempt_201208130857_0001_m_000000_0, Status : FAILED
>> Too many fetch-failures
>> 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer
>> returned HTTP response code: 403 for URL:
>> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stdout
>> 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer
>> returned HTTP response code: 403 for URL:
>> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stderr
>> 12/08/13 08:59:18 INFO mapred.JobClient: map 89% reduce 0%
>> 12/08/13 08:59:21 INFO mapred.JobClient: map 100% reduce 0%
>> 12/08/13 09:00:14 INFO mapred.JobClient: Task Id :
>> attempt_201208130857_0001_m_000001_0, Status : FAILED
>> Too many fetch-failures
>>
>> Here is what I get when I try to see the tasklog using the links given in
>> the output
>>
>>
>> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stderr
>> --->
>> 2012-08-13 08:58:39.189 java[74092:1203] Unable to load realm info from
>> SCDynamicStore
>>
>>
>> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stdout
>> --->
>>
>> I have changed my hadoop-env.sh according to Mathew Buckett's comment in
>> https://issues.apache.org/jira/browse/HADOOP-7489
>>
>> Also, this "Unable to load realm info from SCDynamicStore" error does not
>> show up when I run 'hadoop namenode -format' or 'start-all.sh'.
>>
>> I am also attaching a zipped copy of my logs.
>>
>>
>> Cheers,
>>
>> Subho.
>>
>>
>



--
Harsh J


