Date: Mon, 4 May 2015 05:52:29 +0000 (UTC)
From: Mahmood Naderan <nt_mahmood@yahoo.com>
To: User Hadoop <user@hadoop.apache.org>
Subject: Connection issues

Dear all,
My problem with "ipc.Client: Retrying connect to server" is still open!
To start a new and clean thread, here is the problem description.

[mahmood@tiger Index]$ which hadoop
~/bigdatabench/apache/hadoop-1.0.2/bin/hadoop
[mahmood@tiger Index]$ cat /etc/hosts
127.0.0.1       localhost.localdomain localhost
192.168.1.5     tiger
192.168.1.100   orca
192.168.1.6     zardalou

[mahmood@tiger Index]$ hadoop -jar indexdata.jar `pwd`/result hdfs://127.0.0.1:9000/data-Index
Warning: $HADOOP_HOME is deprecated.
15/05/04 10:14:07 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Exception in thread "main" java.net.ConnectException: Call to localhost.localdomain/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
        at org.apache.hadoop.ipc.Client.call(Client.java:1118)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
        at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
        at IndexHDFS.indexData(IndexHDFS.java:88)
        at IndexHDFS.main(IndexHDFS.java:72)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
        at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
        at org.apache.hadoop.ipc.Client.call(Client.java:1093)
        ... 20 more

As you can see, there are many "connection refused" errors, so you may suggest checking the firewall and network configs to make sure port 9000 is open. I found a good test method here: http://goo.gl/ZYjoSy
It is a simple socket test program where a client sends a text to the server via a port number. I have to say that I ran that program with port 9000 and **it did work successfully**.

So I am sure that there is no problem with the network configs.

Does anyone have an idea about hdfs://127.0.0.1:9000?

Regards,
Mahmood
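P.S. To make the test concrete, here is a minimal sketch of the kind of port check I mean: a plain TCP connect with a timeout. This is illustrative code (class and method names are my own), not the exact program from the link above.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {

    // Attempt a plain TCP connection to host:port.
    // Returns true if something accepted the connection,
    // false on "connection refused" or timeout.
    public static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 9000;
        System.out.println(host + ":" + port + " is "
                + (isOpen(host, port, 2000) ? "open" : "closed"));
    }
}
```

If nothing is bound to the port, the connect fails the same way as the ConnectException in the stack trace above; if a listener is bound, it succeeds regardless of firewall rules for remote hosts, since it connects over loopback.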