From: Ajith shetty
To: hdfs-dev@hadoop.apache.org
Subject: [Federation setup] Adding a new name node to federated cluster
Date: Wed, 1 Apr 2015 06:43:36 +0000

Hi all,

Use case: I am trying to add a new name node to an already running federated cluster, following
https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/Federation.html#Adding_a_new_Namenode_to_an_existing_HDFS_cluster

When I execute the refreshNamenodes step ($HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNamenodes <datanode_host:port>), I am getting the exception below.

HOST-10-19-92-85 is my new namenode to be added into the cluster.
host-10-19-92-100 is the datanode in the federated cluster.
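For reference, a rough sketch of the configuration and command shape I am working from, as I understand the federation docs (the nameservice IDs ns1/ns2 and the addresses/ports below are illustrative placeholders, not my exact values):

    <!-- hdfs-site.xml: declare the existing and the new nameservice, and point the
         new nameservice at the new namenode's RPC and HTTP addresses (values illustrative) -->
    <property>
      <name>dfs.nameservices</name>
      <value>ns1,ns2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.ns2</name>
      <value>HOST-10-19-92-85:8020</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.ns2</name>
      <value>HOST-10-19-92-85:50070</value>
    </property>

    # then, with the updated config distributed to the datanodes, refresh each datanode
    # so it picks up the new namenode; host:port here is the datanode and its RPC port
    $HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNamenodes host-10-19-92-100:<datanode_rpc_port>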
At the new name node:

java.io.EOFException: End of File Exception between local host is: "HOST-10-19-92-85/10.19.92.85"; destination host is: "host-10-19-92-100":50010; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
        at org.apache.hadoop.ipc.Client.call(Client.java:1480)
        at org.apache.hadoop.ipc.Client.call(Client.java:1407)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy13.refreshNamenodes(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.refreshNamenodes(ClientDatanodeProtocolTranslatorPB.java:195)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.refreshNamenodes(DFSAdmin.java:1919)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:1825)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:1959)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1079)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:974)

At the data node being asked to refresh:

2015-04-01 16:11:31,720 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: host-10-19-92-100:50010:DataXceiver error processing unknown operation src: /10.19.92.85:43802 dst: /10.19.92.100:50010
java.io.IOException: Version Mismatch (Expected: 28, Received: 26738 )
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:60)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
        at java.lang.Thread.run(Thread.java:745)

I have cross-verified the installation and the jar versions. So the refreshNamenodes command is not working in my setup, but as a workaround I found that restarting the datanode does register it with the newly added namenode.

Please help me out.

Regards,
Ajith