hadoop-hdfs-issues mailing list archives

From "ryan rawson (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1002) Secondary Name Node crash, NPE in edit log replay
Date Wed, 17 Mar 2010 18:51:27 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12846525#action_12846525 ]

ryan rawson commented on HDFS-1002:
-----------------------------------

We create the directory first; here is the relevant code:

    // Make sure the compaction output directory exists before writing.
    if (!fs.exists(dir)) {
      fs.mkdirs(dir);
    }
    // Pick a unique file name inside that directory; the HFile.Writer
    // constructor creates the file itself via fs.create(path).
    Path path = getUniqueFile(fs, dir);
    return new HFile.Writer(fs, path, blocksize,
      algorithm == null? HFile.DEFAULT_COMPRESSION_ALGORITHM: algorithm,
      c == null? KeyValue.KEY_COMPARATOR: c);

(StoreFile.java:398 in HBase)


In this case, getUniqueFile generates that "9207375265821366914" filename. The HFile.Writer constructor then creates the file using fs.create(path).

Also 'dir' == "/hbase/stumbles_by_userid/compaction.dir/378232123" in this context. 

So yes, we create the directory before creating the file.
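
For reference, here is a minimal standalone sketch of that call order against the Hadoop FileSystem API. This is my own illustration rather than HBase code: the class name, Configuration setup, and stream handling are assumptions, while the directory and file names are the ones quoted above. The point is only the ordering: mkdirs on the directory happens before create on the file inside it, so the corresponding records should appear in the same order in the edit log that the SNN is replaying.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MkdirsThenCreateSketch {
      public static void main(String[] args) throws Exception {
        // Picks up the cluster settings (core-site.xml etc.) from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Step 1: the compaction directory is created first.
        Path dir = new Path("/hbase/stumbles_by_userid/compaction.dir/378232123");
        if (!fs.exists(dir)) {
          fs.mkdirs(dir);
        }

        // Step 2: a uniquely named file is then created inside that directory,
        // which is what the HFile.Writer constructor does via fs.create(path).
        Path path = new Path(dir, "9207375265821366914");
        FSDataOutputStream out = fs.create(path);
        out.close();
      }
    }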

> Secondary Name Node crash, NPE in edit log replay
> -------------------------------------------------
>
>                 Key: HDFS-1002
>                 URL: https://issues.apache.org/jira/browse/HDFS-1002
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.21.0
>            Reporter: ryan rawson
>             Fix For: 0.21.0
>
>         Attachments: snn_crash.tar.gz, snn_log.txt
>
>
> An NPE in SNN; the core of the message looks like so:
> 2010-02-25 11:54:05,834 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.lang.NullPointerException
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1152)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1164)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addNode(FSDirectory.java:1067)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedAddFile(FSDirectory.java:213)
>         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadEditRecords(FSEditLog.java:511)
>         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:401)
>         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:368)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1172)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:594)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$000(SecondaryNameNode.java:476)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:353)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:317)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:219)
>         at java.lang.Thread.run(Thread.java:619)
> This happens even if I restart SNN over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

