hadoop-hdfs-issues mailing list archives

From "Tony Wu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9236) Missing sanity check for block size during block recovery
Date Thu, 15 Oct 2015 14:48:07 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959008#comment-14959008 ]

Tony Wu commented on HDFS-9236:
-------------------------------

Thanks to [~yzhangal] for the offline review and valuable comments! In summary:
* It is difficult to come up with a block size limit to enforce on the NN, especially since
HDFS allows each file to specify its own block size.
** I will remove the NN-side change in the next patch. I would still like to investigate
whether we can enforce a per-file block size check.
* The sanity check on the DN is useful even though the chance of hitting the error in a
production cluster is small. A sketch of the idea follows this list.
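
For illustration, here is a minimal sketch of what the DN-side check could look like in
DataNode#syncBlock (a sketch of the idea only, not necessarily what the attached patch
does). In the RBW/RWR case quoted in the description below, minLength still being
Long.MAX_VALUE after the loop means no replica matched bestState, so the recovery should
fail rather than report a bogus length to the NN:
{code:java}
// Hypothetical guard at the end of the RBW/RWR case in syncBlock,
// just before the existing newBlock.setNumBytes(minLength) call:
if (minLength == Long.MAX_VALUE) {
  // No replica's original state matched bestState; committing now
  // would report a block length of Long.MAX_VALUE to the NN.
  throw new IOException("Found no replicas in state " + bestState
      + " while recovering block " + rBlock.getBlock());
}
newBlock.setNumBytes(minLength);
{code}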




> Missing sanity check for block size during block recovery
> ---------------------------------------------------------
>
>                 Key: HDFS-9236
>                 URL: https://issues.apache.org/jira/browse/HDFS-9236
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: HDFS
>    Affects Versions: 2.7.1
>            Reporter: Tony Wu
>            Assignee: Tony Wu
>         Attachments: HDFS-9236.001.patch
>
>
> Ran into an issue while running tests against faulty DataNode code.
> Currently in DataNode.java:
> {code:java}
>   /** Block synchronization */
>   void syncBlock(RecoveringBlock rBlock,
>                          List<BlockRecord> syncList) throws IOException {
> …
>     // Calculate the best available replica state.
>     ReplicaState bestState = ReplicaState.RWR;
> …
>     // Calculate list of nodes that will participate in the recovery
>     // and the new block size
>     List<BlockRecord> participatingList = new ArrayList<BlockRecord>();
>     final ExtendedBlock newBlock = new ExtendedBlock(bpid, blockId,
>         -1, recoveryId);
>     switch(bestState) {
> …
>     case RBW:
>     case RWR:
>       long minLength = Long.MAX_VALUE;
>       for(BlockRecord r : syncList) {
>         ReplicaState rState = r.rInfo.getOriginalReplicaState();
>         if(rState == bestState) {
>           minLength = Math.min(minLength, r.rInfo.getNumBytes());
>           participatingList.add(r);
>         }
>       }
>       newBlock.setNumBytes(minLength);
>       break;
> …
>     }
> …
>     nn.commitBlockSynchronization(block,
>         newBlock.getGenerationStamp(), newBlock.getNumBytes(), true, false,
>         datanodes, storages);
>   }
> {code}
> This code is called by the DN coordinating the block recovery. In the above case, it is
> possible for none of the rState values (reported by the DNs holding copies of the replica
> being recovered) to match the bestState. This can be caused either by faulty DN code or by
> stale/modified/corrupted replica files on a DN. When this happens, the DN ends up reporting
> a minLength of Long.MAX_VALUE.
> Unfortunately there is no check on the NN for replica length. See FSNamesystem.java:
> {code:java}
>   void commitBlockSynchronization(ExtendedBlock oldBlock,
>       long newgenerationstamp, long newlength,
>       boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
>       String[] newtargetstorages) throws IOException {
> …
>       if (deleteblock) {
>         Block blockToDel = ExtendedBlock.getLocalBlock(oldBlock);
>         boolean remove = iFile.removeLastBlock(blockToDel) != null;
>         if (remove) {
>           blockManager.removeBlock(storedBlock);
>         }
>       } else {
>         // update last block
>         if(!copyTruncate) {
>           storedBlock.setGenerationStamp(newgenerationstamp);
>           
>           //>>>> XXX block length is updated without any check <<<<//
>           storedBlock.setNumBytes(newlength);
>         }
> …
>     if (closeFile) {
>       LOG.info("commitBlockSynchronization(oldBlock=" + oldBlock
>           + ", file=" + src
>           + (copyTruncate ? ", newBlock=" + truncatedBlock
>               : ", newgenerationstamp=" + newgenerationstamp)
>           + ", newlength=" + newlength
>           + ", newtargets=" + Arrays.asList(newtargets) + ") successful");
>     } else {
>       LOG.info("commitBlockSynchronization(" + oldBlock + ") successful");
>     }
>   }
> {code}
> After this point the block length becomes Long.MAX_VALUE. Any subsequent block report
> (even one with the correct length) will cause the block to be marked as corrupt. This
> block could also be the last block of the file. If that happens and the client goes away,
> the NN won't be able to recover the lease and close the file, because the last block is
> under-replicated.
> I believe we need a sanity check for the block size on both the DN and the NN to prevent
> such a case from happening.
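> For illustration, a minimal sketch of a possible NN-side check in
> FSNamesystem#commitBlockSynchronization (assuming iFile, the INodeFile already resolved
> in the method, exposes the file's block size via getPreferredBlockSize()):
> {code:java}
> // Hypothetical guard before the existing storedBlock.setNumBytes(newlength)
> // call: a replica can never legitimately be longer than the file's own
> // block size, so reject any length outside [0, preferred block size].
> if (newlength < 0 || newlength > iFile.getPreferredBlockSize()) {
>   throw new IOException("commitBlockSynchronization(" + oldBlock
>       + ") rejected: invalid new length " + newlength);
> }
> {code}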



