From: Alexander Alten-Lorenz <wget.null@gmail.com>
Subject: Re: Data doesn't write in HDFS
Date: Fri, 27 Mar 2015 12:06:53 +0100
To: user@hadoop.apache.org

Hi,

Have a closer look at:

java.io.IOException: File /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

BR,
 AL

> On 27 Mar 2015, at 05:48, Ramesh Rocky <rmshkumar362@outlook.com> wrote:
>
> Hi,
>
> I am trying to write data into HDFS using Flume on a Windows machine. With Flume and Hadoop configured on the same machine, writing data into HDFS works perfectly.
>
> But with Hadoop and Flume configured on different machines (both Windows machines), trying to write data into HDFS shows the following error:
>
> 15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:36 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 28 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 46
> 15/03/27 09:46:37 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:39 INFO hdfs.StateChange: BLOCK* allocateBlock: /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp. BP-412829692-192.168.56.1-1427371070417 blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-57962794-b57c-476e-a811-ebcf871f4f12:NORMAL:192.168.56.1:50010|RBW]]}
> 15/03/27 09:46:42 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true)
> For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 15/03/27 09:46:42 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 15/03/27 09:46:42 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> 15/03/27 09:46:42 INFO ipc.Server: IPC Server handler 9 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.15.242:57416 Call#7 Retry#0
> java.io.IOException: File /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 15/03/27 09:46:46 WARN namenode.FSNamesystem: trying to get DT with no secret manager running
>
> Does anybody know about this issue?
> Thanks & Regards
> Ramesh
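[Archive note: a hedged aside on the log above, not part of the original thread. The NameNode log shows the only datanode registered at 192.168.56.1:50010, while the failing addBlock call came from a client at 192.168.15.242; "1 node(s) are excluded in this operation" typically means the client marked that datanode as unreachable for the block transfer. One plausible remedy in such multi-network setups, assuming the registered IP is the problem, is to have clients resolve datanodes by hostname instead of the registered IP, via the standard HDFS property `dfs.client.use.datanode.hostname` in the client-side hdfs-site.xml:]

```xml
<!-- hdfs-site.xml on the Flume (client) machine: a minimal sketch, assuming
     the datanode registered with an IP the remote client cannot reach.
     With this set, the client connects to datanodes by hostname, so DNS or
     the hosts file can map the datanode to a reachable address. -->
<configuration>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
  </property>
</configuration>
```

[Running `hdfs dfsadmin -report` on the Hadoop machine shows which address each datanode registered with, which helps confirm or rule out this reading.]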