From: Harsh J <harsh@cloudera.com>
Date: Thu, 9 Jan 2014 16:56:14 +0530
Subject: Re: org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException
To: user@hadoop.apache.org

Your hdfs-site.xml on the NN defines an "includes" file (dfs.hosts), but that file does not list this connecting DN's proper hostname/IP, so the NN rejects the DN when it asks to register at startup. Add the DN's hostname (or IP) to the includes file and refresh the NN.
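A sketch of the fix; the path and hostnames below are placeholders, not values from this thread, so substitute whatever your hdfs-site.xml actually points at and the exact names/IPs your DNs register with:

  <!-- hdfs-site.xml on the NN: dfs.hosts names the includes file -->
  <property>
    <name>dfs.hosts</name>
    <value>/etc/hadoop/conf/dfs.hosts</value>
  </property>

  # /etc/hadoop/conf/dfs.hosts: one permitted DN hostname or IP per
  # line (placeholder entries shown here)
  dn1.example.com
  10.103.0.18

  # Make a running NN re-read the includes/excludes files:
  hdfs dfsadmin -refreshNodes

You can confirm which file the NN is reading with "hdfs getconf -confKey dfs.hosts". Alternatively, if you do not want an allowlist at all, remove the dfs.hosts property and refresh: with no includes file configured, the NN accepts registration from any DN.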
On Thu, Jan 9, 2014 at 4:05 PM, Pedro Sa da Costa wrote:
>
> When I try to launch the namenode and the datanode in MRv2, the datanode
> can't connect to the namenode, giving me the error below. I have also put
> the core-site.xml file that I use below.
>
> The firewall on the hosts is disabled, and I don't have any excluded
> nodes defined. Why can't the datanodes connect to the namenode? Any help
> with this problem?
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
> Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0,
> storageID=DS-1449645935-172.16.1.10-50010-1389224474955, infoPort=50075,
> ipcPort=50020,
> storageInfo=lv=-40;cid=CID-9a8571a3-17ae-49b2-b957-b009e88b9f9a;nsid=934416283;c=0)
>     at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:631)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3398)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>     at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>     at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:416)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1235)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>     at com.sun.proxy.$Proxy9.registerDatanode(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:622)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>     at com.sun.proxy.$Proxy9.registerDatanode(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:146)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:623)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:225)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
>     at java.lang.Thread.run(Thread.java:701)
>
> I set the core-site.xml:
>
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://10.103.0.17:9000</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/tmp/hadoop-temp</value>
>   </property>
>   <property>
>     <name>hadoop.proxyuser.root.hosts</name>
>     <value>*</value>
>   </property>
>   <property>
>     <name>hadoop.proxyuser.root.groups</name>
>     <value>*</value>
>   </property>
> </configuration>
>
> --
> Best regards,

--
Harsh J