hadoop-mapreduce-user mailing list archives

From jingguo yao <yaojing...@gmail.com>
Subject Re: distcp fails with ConnectException
Date Tue, 07 Dec 2010 01:52:13 GMT
A few things worth checking.

1. Can hadoop-B connect to hadoop-A on port 8020? Check your firewall
settings: maybe they allow ssh and ping, but block port 8020 (see the quick
check after this list).
2. Which user are you using to run distcp? Does this user have access rights
to hadoop-A?
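
For example, assuming telnet or nc is available on the node where you run
distcp (a hadoop-B node in your setup), you could test the port with
something like:

  telnet hadoop-A 8020
  nc -zv hadoop-A 8020

If the port is reachable, you can also check namenode access as the distcp
user with a simple listing of the source path from your command, e.g.:

  ./hadoop fs -ls hdfs://hadoop-A:8020/dira

If that listing fails with the same ConnectException, the problem is
reaching the hadoop-A namenode rather than distcp itself.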

On Tue, Dec 7, 2010 at 1:50 AM, Deepika Khera <Deepika.Khera@avg.com> wrote:

> I am trying to run distcp between two HDFS clusters and I am getting a
> ConnectException.
>
> The command I am trying to run is of the form (considering clusters
> hadoop-A & hadoop-B):
>
> ./hadoop distcp -update hdfs://hadoop-A:8020/dira/part-r-00000
> hdfs://hadoop-B:8020/dirb/
>
> and I am running it on the destination cluster (hadoop-B).
>
> The stack trace for the exception is:
>
> Copy failed: java.net.ConnectException: Call to hadoop-A/10.0.173.11:8020 failed on connection exception: java.net.ConnectException: Connection refused
>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>         at org.apache.hadoop.ipc.Client.call(Client.java:743)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.getProtocolVersion(Unknown Source)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>         at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:113)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:214)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>
> I have tried ping and ssh between these clusters and both work fine. On
> both clusters the namenode is running as the same user.
>
> The strange part is that if I try the command on a single cluster (both
> source and destination are on the same DFS, so it is a simple copy), it
> still fails with the same exception. That is, I run the command below on
> cluster A.
>
> ./hadoop distcp -update hdfs://hadoop-A:8020/dir1/part-r-00000
> hdfs://hadoop-A:8020/dir2/
>
> Is there anything else I need to set up to get distcp working? Any hints
> on what I could check would be helpful.
>
> Thanks,
>
> Deepika



-- 
Jingguo
