Please share your configuration files from the Hadoop home folder:
 
hadoop-1.0.3/conf/mapred-site.xml
hadoop-1.0.3/conf/core-site.xml
hadoop-1.0.3/conf/hdfs-site.xml
 
 
Also run the "jps" command to see which processes are running.
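For reference, on a healthy pseudo-distributed Hadoop 1.0.x node, jps usually lists all five daemons; the PIDs below are only illustrative and will differ on your machine:

$ jps
2481 NameNode
2603 DataNode
2734 SecondaryNameNode
2812 JobTracker
2937 TaskTracker
3050 Jps

If JobTracker is missing from that list, a connection failure on the job submission port like the one below is expected.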


 
On Tue, Mar 12, 2013 at 4:44 PM, Hemanth Yamijala <yhemanth@thoughtworks.com> wrote:
Hi,

This line in your exception message:
"Exception in thread "main" java.io.IOException: Call to localhost/127.0.0.1:54311 failed on local exception: java.io.IOException: Connection reset by peer"

indicates that the client is trying to submit a job on the IPC port of the JobTracker at 127.0.0.1:54311. Can you tell us what is configured for mapred.job.tracker (most likely in your mapred-site.xml)?
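For reference, in a typical pseudo-distributed setup that matches the address in your exception, mapred-site.xml would contain something like this (localhost:54311 here is taken from the error message, not from your actual files):

<?xml version="1.0"?>
<configuration>
  <!-- host:port of the JobTracker's IPC interface used for job submission -->
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>

Whatever value is set there must match the address the JobTracker actually started on; if no JobTracker is listening at that address, the client gets exactly this kind of connection failure.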


On Tue, Mar 12, 2013 at 7:37 AM, mallik arjun <mallik.cloud@gmail.com> wrote:
I have not configured it. Can you tell me how to configure it?


On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <yhemanth@thoughtworks.com> wrote:
Have you configured your JobTracker's IPC port as 54311? Sharing your configuration may be helpful.

Thanks
Hemanth


On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <mallik.cloud@gmail.com> wrote:
I have seen the logs, and the reason for the error is:
13/03/10 10:26:45 ERROR security.UserGroupInformation: PriviledgedActionException as:mallik cause:java.io.IOException: Call to localhost/127.0.0.1:54311 failed on local exception: java.io.IOException: Connection reset by peer
Exception in thread "main" java.io.IOException: Call to localhost/127.0.0.1:54311 failed on local exception: java.io.IOException: Connection reset by peer
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at MaxTemperature.main(MaxTemperature.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
at sun.nio.ch.IOUtil.read(IOUtil.java:191)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)


On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <mallik.cloud@gmail.com> wrote:
Both the NameNode and JobTracker are working well.



On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <jagatsingh@gmail.com> wrote:
What is coming up on:

localhost:50070
localhost:50030

Are you able to see the console pages?




On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <mallik.cloud@gmail.com> wrote:
I am not able to run that command, and the logs are empty.


On Sun, Mar 10, 2013 at 8:56 AM, feng lu <amuseme.lu@gmail.com> wrote:
Hi

Are you able to run the wordcount example in hadoop-*-examples.jar using this command?

bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r <#reducers>] <in-dir> <out-dir>

Check that your JobTracker and TaskTracker have started correctly, and see their logs.
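For example, from the Hadoop home directory (the jar name and the input/output paths below are illustrative; adjust them to your installation):

bin/hadoop fs -mkdir input
bin/hadoop fs -put conf/*.xml input
bin/hadoop jar hadoop-examples-1.0.3.jar wordcount input output
bin/hadoop fs -cat output/part-r-00000

If this job also fails at submission with the same connection error, the problem is in the cluster configuration rather than in MaxTemperature.jar.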


On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <mallik.cloud@gmail.com> wrote:
It is not a problem with MaxTemperature.jar; any command of the form >hadoop jar xxx.jar input output behaves the same way.

When I run the command, it looks like this: [inline screenshot attached]


On Sun, Mar 10, 2013 at 8:03 AM, feng lu <amuseme.lu@gmail.com> wrote:
Hi mallik

Do you submit the job to the JobTracker, for example with JobClient.runJob(conf), in your MaxTemperature.jar package?

Maybe you can refer to this tutorial [0].

[0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
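For reference, a minimal old-API (org.apache.hadoop.mapred) driver for Hadoop 1.x submits a job roughly like this. The class name and the commented-out mapper/reducer are placeholders, not taken from your jar; judging by your stack trace, your MaxTemperature class uses the new org.apache.hadoop.mapreduce API, where Job.waitForCompletion(true) plays the same role as JobClient.runJob(conf):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

// Illustrative driver class name.
public class MaxTemperatureDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(MaxTemperatureDriver.class);
    conf.setJobName("max temperature");

    FileInputFormat.addInputPath(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // Your own mapper/reducer and output types would be set here, e.g.:
    // conf.setMapperClass(MaxTemperatureMapper.class);
    // conf.setReducerClass(MaxTemperatureReducer.class);

    // This call reads mapred.job.tracker and contacts the JobTracker
    // at that address to submit the job.
    JobClient.runJob(conf);
  }
}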


On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <mallik.cloud@gmail.com> wrote:
Hi guys, I am using Hadoop version 1.0.3, and it ran well before. Even now, commands like >hadoop fs -ls work fine, but when I run a command like >hadoop jar /home/mallik/definite/MaxTemperature.jar input outputmap

the cluster does not process the job. What might be the problem? When I look at the logs, there is nothing in them. Please help me, it is very urgent.

Thanks in advance.



--
Don't Grow Old, Grow Up... :-)




--
Thanx and Regards
Vikas Jadhav