hadoop-hdfs-user mailing list archives

From Igor Finkelshteyn <iefin...@gmail.com>
Subject Hadoop on EC2 Managing Internal/External IPs
Date Thu, 23 Aug 2012 19:34:44 GMT
Hi,
I'm currently setting up a Hadoop cluster on EC2, and everything works just fine when accessing
the cluster from inside EC2, but as soon as I try to do something like upload a file from
an external client, I get timeout errors like:

12/08/23 12:06:16 ERROR hdfs.DFSClient: Failed to close file /user/some_file._COPYING_
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.123.x.x:50010]
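
(For reference, the per-block addresses the NameNode hands out are visible with fsck, something like:

hadoop fsck /user/some_file -files -blocks -locations

and the locations it prints should be the same 10.x internal addresses showing up in the timeout above.)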

What's clearly happening is that my NameNode is resolving my DataNodes' addresses to their
internal EC2 IPs instead of their external ones, and then handing those internal IPs to my
external client, which obviously can't reach them. I'm thinking this must be a common problem.
How do other people deal with it? Is there a way to just force my NameNode to send along
my DataNodes' hostnames instead of IPs, so that the hostnames can be resolved properly from
whatever box is sending files?
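
Digging around, HDFS-3150 looks like it adds exactly this knob. A sketch of the client-side
hdfs-site.xml I'd try (unverified on my setup; the property name comes from that JIRA, so it
may depend on your Hadoop version):

<!-- client-side hdfs-site.xml: make the client connect to DataNodes
     by the hostname the NameNode reports, not the raw IP -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>

(There's apparently also a dfs.datanode.use.datanode.hostname for DataNode-to-DataNode
traffic.) For this to help on EC2, each DataNode would have to register under its public DNS
name (ec2-*.compute-1.amazonaws.com), since those names resolve to the internal IP from inside
EC2 and the external IP from outside.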

Eli