hadoop-hdfs-user mailing list archives

From "Thon, Ingo" <ingo.t...@siemens.com>
Subject Re: multihoming cluster
Date Wed, 21 Jan 2015 11:46:15 GMT

Which one is the slaves configuration file?
Do you mean /etc/hadoop/conf/slaves?
That one contains the hostnames.
I also checked the configuration options you pointed to via the links in your previous emails.
Everything is set accordingly there.


From: Arpit Agarwal [mailto:aagarwal@hortonworks.com]
Sent: Wednesday, January 21, 2015 05:22
To: user@hadoop.apache.org
Subject: Re: multihoming cluster

Also the log message you pointed out is somewhat misleading. The actual connection attempt
will respect dfs.client.use.datanode.hostname.

In createSocketForPipeline:
  static Socket createSocketForPipeline(final DatanodeInfo first,
      final int length, final DFSClient client) throws IOException {
    final String dnAddr = first.getXferAddr(
        client.getConf().connectToDnViaHostname);
    if (DFSClient.LOG.isDebugEnabled()) {
      DFSClient.LOG.debug("Connecting to datanode " + dnAddr);
    }
    final InetSocketAddress isa = NetUtils.createSocketAddr(dnAddr);
    // ...

The useful log message is this one:
15/01/19 13:51:11 DEBUG hdfs.DFSClient: Connecting to datanode 10.x.x.13:50010
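
For reference, the address selection inside DatanodeID.getXferAddr(boolean) works roughly like this (a paraphrased sketch, not the verbatim source):

  // Sketch: how DatanodeID picks the transfer address (paraphrased)
  public String getXferAddr(boolean useHostname) {
    // hostName and ipAddr are what the datanode reported at registration
    return useHostname ? hostName + ":" + xferPort  // dfs.client.use.datanode.hostname = true
                       : ipAddr + ":" + xferPort;   // default: raw IP
  }

So with the client-side flag set to true, the "Connecting to datanode" line should show a hostname rather than 10.x.x.13.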

A quick guess is that the slaves configuration file on your NN has 10.x IP addresses instead
of hostnames.
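If that is the case, listing hostnames that resolve on both networks should help, e.g. (host names below are made up):

  # /etc/hadoop/conf/slaves -- one datanode hostname per line
  datanode01.example.com
  datanode02.example.com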

On Tue, Jan 20, 2015 at 7:49 PM, Arpit Agarwal <aagarwal@hortonworks.com> wrote:
Hi Ingo,

HDFS requires some extra configuration for multihoming. These settings are documented in the HDFS guide "HDFS Support for Multihomed Networks":

I am not sure all these settings were supported prior to Apache Hadoop 2.4. I recommend using
2.6 if you can.
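
The core settings from that page, as a minimal hdfs-site.xml sketch (illustrative only; adjust the values to your cluster):

  <!-- hdfs-site.xml: multihoming-related settings (sketch) -->
  <property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value> <!-- NN listens on all interfaces -->
  </property>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value> <!-- clients reach datanodes by hostname -->
  </property>
  <property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>true</value> <!-- datanodes reach each other by hostname -->
  </property>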


On Mon, Jan 19, 2015 at 11:56 PM, Thon, Ingo <ingo.thon@siemens.com> wrote:

Dear List,

I'm using Hadoop in a multihomed environment. The namenode and datanodes are connected via a dedicated data-transfer network, 10.xxx.xxx.xxx.
I installed the Hadoop client tools on a computer which can reach the cluster nodes via a second network, 192.168.xxx.xxx.
I want to use this computer to copy data into HDFS. However, all operations that try to copy data directly onto the datanodes fail.
I can do ls, mkdir and even copy empty files, but commands like:
hadoop fs -put d:/temp/* hdfs://192.168.<namenode>/user/<me>/to_load/
fail.
As you can see in the client output below, the client tries to reach the datanodes via the IP addresses of the data-transfer network instead of the second, reachable network.
The strange thing is that in the configuration files on the namenode the parameter dfs.client.use.datanode.hostname is set to true. From my understanding I therefore shouldn't see the log line
15/01/19 13:51:11 DEBUG hdfs.DFSClient: pipeline = 10.x.x.13:50010
at all.
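
A minimal client-side sketch showing where this flag has to live: DFSClient reads it from the configuration on the machine running the command, so setting it only on the namenode does not affect a remote client (the namenode URI and paths below are made up):

  // Client-side sketch: enable hostname-based datanode connections
  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class PutExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Must be set in the client's configuration, not (only) the namenode's.
      conf.setBoolean("dfs.client.use.datanode.hostname", true);
      FileSystem fs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020"), conf);
      fs.copyFromLocalFile(new Path("d:/temp/file.txt"), new Path("/user/me/to_load/"));
      fs.close();
    }
  }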

thanks in advance,
Ingo Thon

Output from the hadoop command:
15/01/19 13:51:11 DEBUG ipc.Client: IPC Client (7749777) connection to /192.168.xxx.xxx:8020
from me sending #12
15/01/19 13:51:11 DEBUG ipc.Client: IPC Client (7749777) connection to /192.168.xxx.xxx:8020
from thon_i got value #12
15/01/19 13:51:11 DEBUG ipc.ProtobufRpcEngine: Call: addBlock took 0ms
15/01/19 13:51:11 DEBUG hdfs.DFSClient: pipeline = 10.x.x.13:50010
15/01/19 13:51:11 DEBUG hdfs.DFSClient: Connecting to datanode 10.x.x.13:50010
15/01/19 13:51:21 DEBUG ipc.Client: IPC Client (7749777) connection to /192.168.xxx.xxx:8020
from thon_i: closed
15/01/19 13:51:21 DEBUG ipc.Client: IPC Client (7749777) connection to /192.168.xxx.xxx:8020
from thon_i: stopped, remaining connections 0
15/01/19 13:51:32 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection timed out: no further information
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
15/01/19 13:51:32 INFO hdfs.DFSClient: Abandoning BP-20yyyyyyy26-10.x.x.x-1415yyyyy790:blk_1074387723_646941
15/01/19 13:51:32 DEBUG ipc.Client: The ping interval is 60000 ms.
