hadoop-common-user mailing list archives

From Bogdan Raducanu <lrd...@gmail.com>
Subject Not able to place enough replicas
Date Mon, 14 Jul 2014 18:07:16 GMT
I'm getting this error while writing many files.
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not
able to place enough replicas, still in need of 4 to reach 4

I've set logging to DEBUG, but still no reason is printed. There should have
been a reason logged after this line, but instead there is just an empty
line. Has anyone seen something like this before? It happens on a 4-node
cluster running Hadoop 2.2.
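For reference, this is roughly how DEBUG was enabled on the placement policy
logger — a sketch of the NameNode-side log4j.properties entry (the file
location depends on the install, e.g. $HADOOP_CONF_DIR/log4j.properties):

```properties
# Sketch: raise only the block placement policy logger to DEBUG,
# so chooseTarget prints why each candidate datanode was rejected.
log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG
```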


org.apache.hadoop.hdfs.StateChange: DIR* NameNode.create: file /file_1002 for DFSClient_NONMAPREDUCE_839626346_1 at 192.168.180.1
org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: src=/file_1002, holder=DFSClient_NONMAPREDUCE_839626346_1, clientMachine=192.168.180.1, createParent=true, replication=4, createFlag=[CREATE, OVERWRITE]
org.apache.hadoop.hdfs.StateChange: DIR* addFile: /file_1002 is added
org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: add /file_1002 to namespace for DFSClient_NONMAPREDUCE_839
<< ... many other operations ... >>
8 seconds later:
org.apache.hadoop.hdfs.StateChange: BLOCK* NameNode.addBlock: file /file_1002 fileId=189252 for DFSClient_NONMAPREDUCE_839626346_1
org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getAdditionalBlock: file /file_1002 for DFSClient_NONMAPREDUCE_839626346_1
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not able to place enough replicas, still in need of 4 to reach 4
<< EMPTY LINE >>
org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:test (auth:SIMPLE) cause:java.io.IOException: File /file_1002 could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.180.1:49592 Call#1321 Retry#0: error: java.io.IOException: File /file_1002 could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
java.io.IOException: File /file_1002 could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
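
For context, the datanode state was checked along these lines — a diagnostic
sketch, not output from the cluster above; the NameNode hostname and HTTP
port (50070 is the 2.x default) are placeholders:

```shell
# Confirm all 4 datanodes are live and report non-zero remaining capacity
hdfs dfsadmin -report

# Raise the placement-policy logger to DEBUG at runtime, without a restart
hadoop daemonlog -setlevel <namenode-host>:50070 \
    org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy DEBUG
```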
