hadoop-hdfs-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-1982) Null pointer exception is thrown when NN restarts with a block lesser in size than the block that is present in DN1 but the generation stamp is greater in the NN
Date Tue, 31 May 2011 08:25:47 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13041480#comment-13041480 ]

ramkrishna.s.vasudevan commented on HDFS-1982:

package org.apache.hadoop.hdfs.server.namenode;

import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.junit.Test;

public class TestAddStoredBlockForNPE {
  @Test
  public void testaddStoredBlockShouldInvalidateStaleBlock() throws Exception {
    MiniDFSCluster cluster = null;
    List<String> list = new ArrayList<String>();
    try {
      Configuration conf = new Configuration();
      conf.setInt("dfs.block.size", 1024);
      cluster = new MiniDFSCluster(conf, 2, true, null);
      FileSystem dfs = cluster.getFileSystem();
      String fname = "/test";
      FSDataOutputStream fsdataout = dfs.create(new Path(fname));
      int fileLen = 10 * 1024+94;
      write(fsdataout, 0, fileLen);
      // Close the stream so the last block is finalized before we query it.
      fsdataout.close();
      FSNamesystem namesystem = cluster.getNameNode().namesystem;

      LocatedBlocks blockLocations = cluster.getNameNode().getBlockLocations(
          fname, 0, fileLen);
      List<LocatedBlock> blockList = blockLocations.getLocatedBlocks();
      Block block = blockList.get(blockList.size() - 1).getBlock();
      // Simulate a stale replica report: same block id, an older generation
      // stamp, but a larger length than the block the NN currently tracks.
      Block block1 = new Block();
      block1.setBlockId(block.getBlockId());
      block1.setGenerationStamp(block.getGenerationStamp() - 10);
      block1.setNumBytes(block.getNumBytes() + 10);

      DatanodeDescriptor node = namesystem.getDatanode(cluster.getDataNodes()
          .get(0).dnRegistration);
      try {
        // Reconstructed call: addStoredBlock(block, node, delNodeHint) is
        // package-private in the 0.20-append FSNamesystem.
        namesystem.addStoredBlock(block1, node, null);
      } catch (NullPointerException npe) {
        list.add("NullPointerException thrown from addStoredBlock");
      }
    } finally {
      if (null != cluster) {
        cluster.shutdown();
      }
    }
    assertTrue("The flow should have executed without NullPointerException",
        list.size() == 0);
  }
  private static void write(OutputStream out, int offset, int length)
      throws IOException {
    final byte[] bytes = new byte[length];
    for (int i = 0; i < length; i++) {
      bytes[i] = (byte) (offset + i);
    }
    out.write(bytes);
  }
}
> Null pointer exception is thrown when NN restarts with a block lesser in size than the block that is present in DN1 but the generation stamp is greater in the NN
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------
>                 Key: HDFS-1982
>                 URL: https://issues.apache.org/jira/browse/HDFS-1982
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20-append
>         Environment: Linux
>            Reporter: ramkrishna.s.vasudevan
>             Fix For: 0.20-append
> Consider the following scenario:
> We have a cluster with one NN and 2 DNs.
> We write some file.
> One of the blocks is written on DN1 but is not yet completed on DN2's local disk.
> Now DN1 gets killed, so pipeline recovery happens for the block with the size as in DN2, and the generation stamp gets updated in the NN.
> DN2 also gets killed.
> Now restart the NN and DN1.
> When the NN restarts, it holds the block with a greater generation stamp but a smaller size than the replica on DN1.
> This leads to a NullPointerException in the addStoredBlock API.
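The invariant the scenario turns on can be sketched independently of HDFS: a replica reported with an older generation stamp than the NameNode's copy is stale and should be invalidated, regardless of its reported length, and the report-handling path must tolerate it without throwing. A minimal standalone sketch (hypothetical `StaleReplicaCheck`, not actual HDFS code):

```java
// Hypothetical sketch of the staleness decision, not HDFS source.
public class StaleReplicaCheck {

  // A reported replica is stale when its generation stamp is older than the
  // NameNode's; the reported length is deliberately ignored, since in the
  // HDFS-1982 scenario DN1's replica is longer yet still stale.
  static boolean isStale(long nnGenStamp, long nnNumBytes,
                         long reportedGenStamp, long reportedNumBytes) {
    return reportedGenStamp < nnGenStamp;
  }

  public static void main(String[] args) {
    // NN: genstamp 1010, length 1024; DN1 reports genstamp 1000, length 1034.
    System.out.println(isStale(1010, 1024, 1000, 1034)); // stale: invalidate, do not NPE
  }
}
```

The point of the sketch is that a length comparison alone would wrongly prefer DN1's longer replica; the generation stamp must be checked first, and the code handling the mismatch must not dereference state that only exists for matching replicas.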

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
