Date: Thu, 3 Apr 2014 02:28:17 +0000 (UTC)
From: "Tsz Wo Nicholas Sze (JIRA)"
To: hdfs-dev@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Subject: [jira] [Resolved] (HDFS-3333) java.io.IOException: File /user/root/lwr/test31.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

     [ https://issues.apache.org/jira/browse/HDFS-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo Nicholas Sze resolved HDFS-3333.
---------------------------------------
    Resolution: Not a Problem

I guess that this is not a problem anymore. Please feel free to reopen this if I am wrong. Resolving ...
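For readers hitting the same message: the "excluded" count in the subject line comes from the client/namenode retry protocol. When pipeline setup to a datanode fails, the client adds that node to an exclude list and asks the namenode for a fresh target via addBlock; once every datanode in the cluster is on that list, BlockManager.chooseTarget (visible in the stack trace below) has no candidates left and the write fails. Below is a minimal, simplified sketch of that behavior; Namenode, nextBlockTargets, and connectPipeline are hypothetical names for illustration, not Hadoop's actual API.

    // Simplified, hypothetical sketch of the exclude-and-retry loop behind
    // "3 node(s) are excluded in this operation"; not the real DFSOutputStream.
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    class PipelineSetupSketch {
        /** Stand-in for ClientProtocol.addBlock, which accepts nodes to exclude. */
        interface Namenode {
            String[] addBlock(String path, List<String> excludedNodes) throws IOException;
        }

        static String[] nextBlockTargets(Namenode nn, String path, int clusterSize)
                throws IOException {
            List<String> excluded = new ArrayList<>();
            while (excluded.size() < clusterSize) {
                // Ask the namenode for targets, skipping nodes that already failed.
                String[] targets = nn.addBlock(path, excluded);
                if (connectPipeline(targets)) {
                    return targets; // pipeline established
                }
                // Pipeline setup failed: exclude the bad node and retry.
                excluded.add(targets[0]);
            }
            // Once all datanodes are excluded, the namenode itself raises this error.
            throw new IOException("File " + path + " could only be replicated to 0 nodes: "
                    + clusterSize + " node(s) are excluded in this operation.");
        }

        private static boolean connectPipeline(String[] targets) {
            return false; // placeholder; the real client opens a socket pipeline here
        }
    }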
> java.io.IOException: File /user/root/lwr/test31.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3333
>                 URL: https://issues.apache.org/jira/browse/HDFS-3333
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.23.1, 2.0.0-alpha
>         Environment: namenode: 1 (IP: 10.18.40.154)
> datanode: 3 (IP: 10.18.40.154, 10.18.40.102, 10.18.52.55)
> HOST-10-18-40-154:/home/APril20/install/hadoop/namenode/bin # ./hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 129238446080 (120.36 GB)
> Present Capacity: 51742765056 (48.19 GB)
> DFS Remaining: 49548591104 (46.15 GB)
> DFS Used: 2194173952 (2.04 GB)
> DFS Used%: 4.24%
> Under replicated blocks: 14831
> Blocks with corrupt replicas: 1
> Missing blocks: 100
> -------------------------------------------------
> Datanodes available: 3 (3 total, 0 dead)
>
> Live datanodes:
> Name: 10.18.40.102:50010 (10.18.40.102)
> Hostname: linux.site
> Decommission Status : Normal
> Configured Capacity: 22765834240 (21.2 GB)
> DFS Used: 634748928 (605.34 MB)
> Non DFS Used: 1762299904 (1.64 GB)
> DFS Remaining: 20368785408 (18.97 GB)
> DFS Used%: 2.79%
> DFS Remaining%: 89.47%
> Last contact: Fri Apr 27 10:35:57 IST 2012
>
> Name: 10.18.40.154:50010 (HOST-10-18-40-154)
> Hostname: HOST-10-18-40-154
> Decommission Status : Normal
> Configured Capacity: 23259897856 (21.66 GB)
> DFS Used: 812396544 (774.76 MB)
> Non DFS Used: 8297279488 (7.73 GB)
> DFS Remaining: 14150221824 (13.18 GB)
> DFS Used%: 3.49%
> DFS Remaining%: 60.84%
> Last contact: Fri Apr 27 10:35:58 IST 2012
>
> Name: 10.18.52.55:50010 (10.18.52.55)
> Hostname: HOST-10-18-52-55
> Decommission Status : Normal
> Configured Capacity: 83212713984 (77.5 GB)
> DFS Used: 747028480 (712.42 MB)
> Non DFS Used: 67436101632 (62.8 GB)
> DFS Remaining: 15029583872 (14 GB)
> DFS Used%: 0.9%
> DFS Remaining%: 18.06%
> Last contact: Fri Apr 27 10:35:58 IST 2012
>
>            Reporter: liaowenrui
>   Original Estimate: 0.2h
>  Remaining Estimate: 0.2h
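The figures in the report above can also be read programmatically, which makes it easy to script the low-remaining-space check that usually accompanies this error. A small sketch using the standard FileSystem/FsStatus client API; the namenode URI is the one from the environment section, everything else is stock Hadoop:

    // Hedged sketch: query the cluster-wide capacity figures that
    // "dfsadmin -report" prints, via the public FileSystem API.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;

    public class ReportSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Namenode URI taken from the environment section above.
            FileSystem fs = FileSystem.get(URI.create("hdfs://10.18.40.154:9000"), conf);
            FsStatus status = fs.getStatus();
            System.out.println("Configured Capacity: " + status.getCapacity());
            System.out.println("DFS Used: " + status.getUsed());
            System.out.println("DFS Remaining: " + status.getRemaining());
            fs.close();
        }
    }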
> log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
> log4j:WARN Please initialize the log4j system properly.
> java.io.IOException: File /user/root/lwr/test31.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1259)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1916)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:472)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:292)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42602)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:428)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:905)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1684)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1682)
> i:4284
>     at org.apache.hadoop.ipc.Client.call(Client.java:1159)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:185)
>     at $Proxy9.addBlock(Unknown Source)
>     at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:165)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:84)
>     at $Proxy9.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:295)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1097)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:973)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
> testcase:
> import java.io.IOException;
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
>
> public class Write1 {
>     public static void main(String[] args) throws Exception {
>         String hdfsFile = "/user/root/lwr/test31.txt";
>         byte[] writeBuff = new byte[1024 * 1024];
>         int i = 0;
>         Configuration conf = new Configuration();
>         // Force an extremely small block size: every 512-byte write below
>         // becomes its own block allocation on the namenode.
>         conf.setLong("dfs.blocksize", 512);
>         DistributedFileSystem dfs = new DistributedFileSystem();
>         dfs.initialize(URI.create("hdfs://10.18.40.154:9000"), conf);
>         try {
>             FSDataOutputStream out1 = dfs.create(new Path(hdfsFile));
>             for (i = 0; i < 100000; i++) {
>                 out1.write(writeBuff, 0, 512);
>             }
>             out1.hsync();
>             out1.close();
>         } catch (IOException e) {
>             System.out.println("i:" + i);
>             e.printStackTrace();
>         } finally {
>             System.out.println("i:" + i);
>             System.out.println("end!");
>         }
>     }
> }
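Two details in the test case explain the failure mode: dfs.blocksize is forced down to 512 bytes, so each of the 100,000 writes allocates a new block (the "i:4284" in the log shows the run died after roughly 4,300 tiny block allocations), and the report above already shows 14,831 under-replicated blocks on a 3-node cluster. Current Hadoop releases reject such configurations up front via dfs.namenode.fs-limits.min-block-size (1 MB by default), which is likely why the issue was closed as "Not a Problem". Below is a hedged sketch of the same write with a realistic block size; the values are illustrative, not from this JIRA:

    // Same write path as Write1, but with a sane block size. With 128 MB
    // blocks the whole ~50 MB file fits in a single block allocation.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class Write2 {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Either leave dfs.blocksize at its default (128 MB on current
            // releases) or set it explicitly; it must stay at or above
            // dfs.namenode.fs-limits.min-block-size.
            conf.setLong("dfs.blocksize", 128L * 1024 * 1024);
            FileSystem fs = FileSystem.get(URI.create("hdfs://10.18.40.154:9000"), conf);
            byte[] writeBuff = new byte[512];
            try (FSDataOutputStream out = fs.create(new Path("/user/root/lwr/test31.txt"))) {
                for (int i = 0; i < 100000; i++) {
                    out.write(writeBuff, 0, 512);
                }
                out.hsync(); // flush to the datanodes before closing
            } finally {
                fs.close();
            }
        }
    }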
--
This message was sent by Atlassian JIRA
(v6.2#6252)