Date: Fri, 27 Apr 2012 05:18:01 +0000 (UTC)
From: "liaowenrui (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Created] (HDFS-3333) java.io.IOException: File /user/root/lwr/test31.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
liaowenrui created HDFS-3333:
--------------------------------

Summary: java.io.IOException: File /user/root/lwr/test31.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
Key: HDFS-3333
URL: https://issues.apache.org/jira/browse/HDFS-3333
Project: Hadoop HDFS
Issue Type: Bug
Components: name-node
Affects Versions: 0.23.1, 2.0.0

Environment:
namenode: 1 (IP: 10.18.40.154)
datanode: 3 (IP: 10.18.40.154, 10.18.40.102, 10.18.52.55)

HOST-10-18-40-154:/home/APril20/install/hadoop/namenode/bin # ./hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 129238446080 (120.36 GB)
Present Capacity: 51742765056 (48.19 GB)
DFS Remaining: 49548591104 (46.15 GB)
DFS Used: 2194173952 (2.04 GB)
DFS Used%: 4.24%
Under replicated blocks: 14831
Blocks with corrupt replicas: 1
Missing blocks: 100

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Live datanodes:
Name: 10.18.40.102:50010 (10.18.40.102)
Hostname: linux.site
Decommission Status : Normal
Configured Capacity: 22765834240 (21.2 GB)
DFS Used: 634748928 (605.34 MB)
Non DFS Used: 1762299904 (1.64 GB)
DFS Remaining: 20368785408 (18.97 GB)
DFS Used%: 2.79%
DFS Remaining%: 89.47%
Last contact: Fri Apr 27 10:35:57 IST 2012

Name: 10.18.40.154:50010 (HOST-10-18-40-154)
Hostname: HOST-10-18-40-154
Decommission Status : Normal
Configured Capacity: 23259897856 (21.66 GB)
DFS Used: 812396544 (774.76 MB)
Non DFS Used: 8297279488 (7.73 GB)
DFS Remaining: 14150221824 (13.18 GB)
DFS Used%: 3.49%
DFS Remaining%: 60.84%
Last contact: Fri Apr 27 10:35:58 IST 2012

Name: 10.18.52.55:50010 (10.18.52.55)
Hostname: HOST-10-18-52-55
Decommission Status : Normal
Configured Capacity: 83212713984 (77.5 GB)
DFS Used: 747028480 (712.42 MB)
Non DFS Used: 67436101632 (62.8 GB)
DFS Remaining: 15029583872 (14 GB)
DFS Used%: 0.9%
DFS Remaining%: 18.06%
Last contact: Fri Apr 27 10:35:58 IST 2012

Reporter: liaowenrui

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
java.io.IOException: File /user/root/lwr/test31.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1259)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1916)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:472)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:292)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42602)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:428)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:905)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1684)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1682)

i:4284
	at org.apache.hadoop.ipc.Client.call(Client.java:1159)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:185)
	at $Proxy9.addBlock(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:84)
	at $Proxy9.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:295)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1097)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:973)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)

testcase:

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class Write1 {
	/**
	 * @param args
	 * @throws Exception
	 */
	public static void main(String[] args) throws Exception {
		String hdfsFile = "/user/root/lwr/test31.txt";
		byte writeBuff[] = new byte[1024 * 1024];
		int i = 0;
		DistributedFileSystem dfs = new DistributedFileSystem();
		Configuration conf = new Configuration();
		//conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 512);
		//conf.setLong(DFSConfigKeys.DFS_REPLICATION_KEY, 2);
		//conf.setInt("dfs.replication", 3);
		conf.setLong("dfs.blocksize", 512);
		dfs.initialize(URI.create("hdfs://10.18.40.154:9000"), conf);
		//dfs.delete(new Path(hdfsFile));
		//appendFile(dfs, hdfsFile, 1024 * 1024, true);
		try {
			FSDataOutputStream out1 = dfs.create(new Path(hdfsFile));
			for (i = 0; i < 100000; i++) {
				out1.write(writeBuff, 0, 512);
			}
			out1.hsync();
			out1.close();
			/*
			FSDataOutputStream out = dfs.append(new Path(hdfsFile), 4096);
			out.write(writeBuff, 0, 512 * 1024);
			out.hsync();
			out.close();
			*/
		} catch (IOException e) {
			System.out.println("i:" + i);
			e.printStackTrace();
		} finally {
			System.out.println("i:" + i);
			System.out.println("end!");
		}
	}
}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
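For context on the reported exception: the NameNode-side failure corresponds to a target-selection step that skips datanodes the client has excluded (e.g. after pipeline failures) and throws once fewer than minReplication targets remain. The following is a simplified, self-contained sketch of that check; the method name chooseTargets and its parameters are illustrative, not the actual BlockManager internals.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Simplified sketch of the replication check behind the error above.
// Not the real org.apache.hadoop.hdfs BlockManager code.
public class ChooseTargetSketch {
    static final int MIN_REPLICATION = 1;

    // Pick up to `replication` targets from the live datanodes,
    // skipping any node in the excluded set.
    static List<String> chooseTargets(List<String> liveNodes,
                                      Set<String> excluded,
                                      int replication) throws IOException {
        List<String> targets = new ArrayList<>();
        for (String node : liveNodes) {
            if (targets.size() == replication) {
                break;
            }
            if (!excluded.contains(node)) {
                targets.add(node);
            }
        }
        // When every live node is excluded, zero targets remain and the
        // NameNode surfaces the IOException quoted in this report.
        if (targets.size() < MIN_REPLICATION) {
            throw new IOException("File could only be replicated to "
                + targets.size() + " nodes instead of minReplication (="
                + MIN_REPLICATION + "). There are " + liveNodes.size()
                + " datanode(s) running and " + excluded.size()
                + " node(s) are excluded in this operation.");
        }
        return targets;
    }

    public static void main(String[] args) {
        List<String> live =
            Arrays.asList("10.18.40.154", "10.18.40.102", "10.18.52.55");
        // All three datanodes excluded, as in the reported failure:
        Set<String> excluded = new HashSet<>(live);
        try {
            chooseTargets(live, excluded, 3);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This models why the message reads "3 datanode(s) running and 3 node(s) are excluded": the cluster is healthy, but the client's writer has marked every datanode as excluded for this block allocation.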