hadoop-hdfs-dev mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-13663) Should throw exception when incorrect block size is set
Date Fri, 08 Jun 2018 00:53:00 GMT
Yongjun Zhang created HDFS-13663:
------------------------------------

             Summary: Should throw exception when incorrect block size is set
                 Key: HDFS-13663
                 URL: https://issues.apache.org/jira/browse/HDFS-13663
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Yongjun Zhang


See

./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java

{code}
void syncBlock(List<BlockRecord> syncList) throws IOException {
      // ... (earlier part of the method elided in this excerpt)
        newBlock.setNumBytes(finalizedLength);
        break;
      case RBW:
      case RWR:
        long minLength = Long.MAX_VALUE;
        for(BlockRecord r : syncList) {
          ReplicaState rState = r.rInfo.getOriginalReplicaState();
          if(rState == bestState) {
            minLength = Math.min(minLength, r.rInfo.getNumBytes());
            participatingList.add(r);
          }
          if (LOG.isDebugEnabled()) {
            LOG.debug("syncBlock replicaInfo: block=" + block +
                ", from datanode " + r.id + ", receivedState=" + rState.name() +
                ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
                bestState.name());
          }
        }
        // recover() guarantees syncList will have at least one replica with RWR
        // or better state.
        assert minLength != Long.MAX_VALUE : "wrong minLength"; // <= should throw exception

        newBlock.setNumBytes(minLength);
        break;
      case RUR:
      case TEMPORARY:
        assert false : "bad replica state: " + bestState;
      default:
        break; // we have 'case' all enum values
      }
{code}

When minLength is still Long.MAX_VALUE after the loop, we should throw an exception instead of relying on an assert: assertions are disabled in a normal production JVM, so the assert never fires and the bogus value is used as the recovered block length.
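A minimal sketch of the kind of change being suggested (the exact exception type and message wording are assumptions here, not a final patch; syncBlock already declares throws IOException):
{code}
        // Fail recovery explicitly instead of relying on an assert, so the
        // bogus Long.MAX_VALUE length never reaches the NameNode.
        if (minLength == Long.MAX_VALUE) {
          throw new IOException("Incorrect block size computed for block " + block
              + ": no participating replica found in state " + bestState);
        }
        newBlock.setNumBytes(minLength);
        break;
{code}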

There might be other places like this (e.g. the assert false : "bad replica state" in the RUR/TEMPORARY branch above).

Otherwise, we see WARNs like the following in the DataNode log:
{code}
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block xyz because on-disk
length 11852203 is shorter than NameNode recorded length 9223372036854775807
{code}
where 9223372036854775807 is Long.MAX_VALUE.
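For context, Java assertions are skipped unless the JVM is started with -ea, so in a typical DataNode deployment the assert above is a no-op and Long.MAX_VALUE silently becomes the recorded block length. A tiny standalone illustration (not HDFS code):
{code}
public class AssertDemo {
  public static void main(String[] args) {
    long minLength = Long.MAX_VALUE;
    // Run as "java AssertDemo": the assert is skipped and execution falls through.
    // Run as "java -ea AssertDemo": the assert fires and throws AssertionError.
    assert minLength != Long.MAX_VALUE : "wrong minLength";
    System.out.println("proceeding with length " + minLength);
  }
}
{code}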





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
