Date: Mon, 21 Jan 2013 07:10:13 +0000 (UTC)
From: "Li Junjun (JIRA)"
To: hdfs-issues@hadoop.apache.org
Reply-To: hdfs-issues@hadoop.apache.org
Mailing-List: contact hdfs-issues-help@hadoop.apache.org; run by ezmlm
Subject: [jira] [Updated] (HDFS-4424) fsdataset Mkdirs failed cause nullpointexception and other bad consequence
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394

     [ https://issues.apache.org/jira/browse/HDFS-4424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Li Junjun updated HDFS-4424:
----------------------------

    Description:
File: /hadoop-1.0.1/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java

From line 205:

    if (children == null || children.length == 0) {
      children = new FSDir[maxBlocksPerDir];
      for (int idx = 0; idx < maxBlocksPerDir; idx++) {
        children[idx] = new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx));
      }
    }

If the FSDir constructor fails (for example, the disk is full, so mkdir fails), the partially filled children array is still used!
Then, when the next write comes (after I run the balancer) and an FSDir is chosen at line 192:

    File file = children[idx].addBlock(b, src, false, resetIdx);

it causes exceptions like this:

    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:158)

------------------------------------------------

Should it be like this instead?

    if (children == null || children.length == 0) {
      List<FSDir> childrenList = new ArrayList<FSDir>();
      for (int idx = 0; idx < maxBlocksPerDir; idx++) {
        try {
          childrenList.add(new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx)));
        } catch (Exception e) {
          // skip the subdirectory whose mkdir failed
        }
      }
      children = childrenList.toArray(new FSDir[0]);
    }

----------------------------

Bad consequence: in my cluster, this datanode's block count became 0.


was:
From line 205:

    if (children == null || children.length == 0) {
      children = new FSDir[maxBlocksPerDir];
      for (int idx = 0; idx < maxBlocksPerDir; idx++) {
        children[idx] = new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx));
      }
    }

If the FSDir constructor fails (for example, the disk is full, so mkdir fails), the partially filled children array is still used!
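The null-slot failure mode and the list-based fix described above can be sketched as a self-contained demo. FSDir is replaced here by a hypothetical Child class whose constructor throws on a simulated mkdir failure; the class and failure pattern are illustrative, not the actual FSDataset code:

```java
import java.util.ArrayList;
import java.util.List;

public class SparseChildrenDemo {
    // Hypothetical stand-in for FSDir: construction throws when the
    // simulated mkdir fails (here: every third subdirectory).
    static class Child {
        final int idx;
        Child(int idx) {
            if (idx % 3 == 0) {
                throw new RuntimeException("Mkdirs failed for subdir" + idx);
            }
            this.idx = idx;
        }
    }

    // Buggy pattern from the report: a pre-sized array keeps a null slot
    // for every constructor that throws.
    static Child[] buildBuggy(int maxBlocksPerDir) {
        Child[] children = new Child[maxBlocksPerDir];
        for (int idx = 0; idx < maxBlocksPerDir; idx++) {
            try {
                children[idx] = new Child(idx);
            } catch (RuntimeException e) {
                // failure swallowed; the null slot later causes an NPE in addBlock
            }
        }
        return children;
    }

    // Proposed pattern: collect only the successfully created children,
    // then convert once, after the loop, with a typed toArray.
    static Child[] buildFixed(int maxBlocksPerDir) {
        List<Child> ok = new ArrayList<Child>();
        for (int idx = 0; idx < maxBlocksPerDir; idx++) {
            try {
                ok.add(new Child(idx));
            } catch (RuntimeException e) {
                // skip the subdirectory that could not be created
            }
        }
        return ok.toArray(new Child[0]);
    }

    public static void main(String[] args) {
        Child[] buggy = buildBuggy(10);
        Child[] fixed = buildFixed(10);
        int nulls = 0;
        for (Child c : buggy) {
            if (c == null) nulls++;
        }
        System.out.println("buggy null slots: " + nulls);        // idx 0, 3, 6, 9 fail
        System.out.println("fixed length, no nulls: " + fixed.length);
    }
}
```

The key difference is that the fixed array's length shrinks to the number of directories that were actually created, so later indexing never hits a null slot.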
Then, when the next write comes (after I run the balancer) and an FSDir is chosen at line 192:

    File file = children[idx].addBlock(b, src, false, resetIdx);

it causes exceptions like this:

    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:158)

------------------------------------------------

Should it be like this instead?

    if (children == null || children.length == 0) {
      List<FSDir> childrenList = new ArrayList<FSDir>();
      for (int idx = 0; idx < maxBlocksPerDir; idx++) {
        try {
          childrenList.add(new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx)));
        } catch (Exception e) {
          // skip the subdirectory whose mkdir failed
        }
      }
      children = childrenList.toArray(new FSDir[0]);
    }

----------------------------

Bad consequence: in my cluster, this datanode's block count became 0.


> fsdataset Mkdirs failed cause nullpointexception and other bad consequence
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-4424
>                 URL: https://issues.apache.org/jira/browse/HDFS-4424
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 1.0.1
>            Reporter: Li Junjun
>
> File: /hadoop-1.0.1/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
> From line 205:
>
>     if (children == null || children.length == 0) {
>       children = new FSDir[maxBlocksPerDir];
>       for (int idx = 0; idx < maxBlocksPerDir; idx++) {
>         children[idx] = new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx));
>       }
>     }
>
> If the FSDir constructor fails (for example, the disk is full, so mkdir fails), the partially filled children array is still used!
> Then, when the next write comes (after I run the balancer) and an FSDir is chosen at line 192:
>
>     File file = children[idx].addBlock(b, src, false, resetIdx);
>
> it causes exceptions like this:
>
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:158)
>
> ------------------------------------------------
>
> Should it be like this instead?
>
>     if (children == null || children.length == 0) {
>       List<FSDir> childrenList = new ArrayList<FSDir>();
>       for (int idx = 0; idx < maxBlocksPerDir; idx++) {
>         try {
>           childrenList.add(new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx)));
>         } catch (Exception e) {
>           // skip the subdirectory whose mkdir failed
>         }
>       }
>       children = childrenList.toArray(new FSDir[0]);
>     }
>
> ----------------------------
>
> Bad consequence: in my cluster, this datanode's block count became 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira