hadoop-hdfs-issues mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-4424) fsdataset Mkdirs failed cause nullpointexception and other bad consequence
Date Thu, 24 Jan 2013 23:31:12 GMT

     [ https://issues.apache.org/jira/browse/HDFS-4424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HDFS-4424:
-----------------------------------------

    Description: 
File: /hadoop-1.0.1/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java

from line 205:
{code}
      if (children == null || children.length == 0) {
        children = new FSDir[maxBlocksPerDir];
        for (int idx = 0; idx < maxBlocksPerDir; idx++) {
          children[idx] = new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX+idx));
        }
      }
{code}
If an FSDir constructor fails partway through this loop (for example the disk is full, so mkdirs fails), the children array has already been assigned, so it stays in use with null entries!


Then, when the next write comes (after I ran the balancer) and an FSDir is chosen at line 192:

    File file = children[idx].addBlock(b, src, false, resetIdx);

it causes exceptions like this:

	at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
	at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
	at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:158)

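The failure pattern above can be reproduced in a minimal standalone sketch (hypothetical names: SubDir stands in for FSDir, and the mkdirs failure is simulated rather than driven by a real full disk). The array field is assigned before the loop finishes, so a mid-loop failure leaves null slots that a later addBlock call dereferences:

```java
// Minimal sketch of the reported pattern (hypothetical names; SubDir
// stands in for FSDir, and the mkdirs failure is simulated).
public class PartialInitDemo {
    static class SubDir {
        SubDir(boolean mkdirFails) {
            if (mkdirFails) {
                // stands in for "Mkdirs failed" in the FSDir constructor
                throw new RuntimeException("Mkdirs failed");
            }
        }
        void addBlock() { /* would place a block file in this subdir */ }
    }

    static SubDir[] children;

    public static void main(String[] args) {
        children = new SubDir[4];  // field assigned BEFORE initialization completes
        try {
            for (int idx = 0; idx < children.length; idx++) {
                children[idx] = new SubDir(idx == 2);  // third "mkdir" fails (disk full)
            }
        } catch (RuntimeException e) {
            // failure swallowed; the half-initialized array stays published
        }
        // Later, a writer picks a subdir: children is non-null and non-empty,
        // so re-initialization is skipped and slot 2 is still null.
        try {
            children[2].addBlock();
        } catch (NullPointerException npe) {
            System.out.println("NullPointerException, matching the stack trace above");
        }
    }
}
```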



------------------------------------------------
Should it be something like this instead?

{code}
      if (children == null || children.length == 0) {
        List<FSDir> childrenList = new ArrayList<FSDir>();
        for (int idx = 0; idx < maxBlocksPerDir; idx++) {
          try {
            childrenList.add(new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx)));
          } catch (IOException e) {
            // mkdirs failed for this subdir (e.g. disk full); skip it
            // instead of leaving a null slot behind
          }
        }
        children = childrenList.toArray(new FSDir[0]);
      }
{code}
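One detail if a List is used this way: the no-argument List.toArray() always returns Object[], which cannot be assigned to an FSDir[] field, so the typed overload toArray(new FSDir[0]) is needed. A small illustration with standard types:

```java
import java.util.ArrayList;
import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        List<String> subdirs = new ArrayList<String>();
        subdirs.add("subdir0");
        subdirs.add("subdir1");

        Object[] raw = subdirs.toArray();  // no-arg form is always Object[]
        // String[] bad = (String[]) subdirs.toArray();  // compiles, but throws
        //                                               // ClassCastException at runtime

        String[] typed = subdirs.toArray(new String[0]);  // a real String[]
        System.out.println(raw.getClass().getSimpleName() + " vs "
            + typed.getClass().getSimpleName());  // Object[] vs String[]
    }
}
```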



----------------------------
Bad consequence: in my cluster, this datanode's block count became 0.

> fsdataset  Mkdirs failed  cause  nullpointexception and other bad  consequence 
> -------------------------------------------------------------------------------
>
>                 Key: HDFS-4424
>                 URL: https://issues.apache.org/jira/browse/HDFS-4424
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 1.0.1
>            Reporter: Li Junjun

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
