hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9236) Missing sanity check for block size during block recovery
Date Thu, 15 Oct 2015 21:37:05 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959676#comment-14959676 ]

Hadoop QA commented on HDFS-9236:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 43s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warning. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 1 new or modified test file. |
| {color:green}+1{color} | javac |   8m 12s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc |  10m 35s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 28s | The applied patch generated 2 new checkstyle issues (total was 142, now 142). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of the native portion succeeded. |
| {color:green}+1{color} | hdfs tests |  49m 37s | Tests passed in hadoop-hdfs. |
| | |  96m 54s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12766845/HDFS-9236.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 8d2d3eb |
| Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13008/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/13008/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13008/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13008/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13008/console |


This message was automatically generated.

> Missing sanity check for block size during block recovery
> ---------------------------------------------------------
>
>                 Key: HDFS-9236
>                 URL: https://issues.apache.org/jira/browse/HDFS-9236
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: HDFS
>    Affects Versions: 2.7.1
>            Reporter: Tony Wu
>            Assignee: Tony Wu
>         Attachments: HDFS-9236.001.patch, HDFS-9236.002.patch, HDFS-9236.003.patch
>
>
> Ran into an issue while running a test against faulty DataNode code.
> Currently in DataNode.java:
> {code:java}
>   /** Block synchronization */
>   void syncBlock(RecoveringBlock rBlock,
>                          List<BlockRecord> syncList) throws IOException {
> …
>     // Calculate the best available replica state.
>     ReplicaState bestState = ReplicaState.RWR;
> …
>     // Calculate list of nodes that will participate in the recovery
>     // and the new block size
>     List<BlockRecord> participatingList = new ArrayList<BlockRecord>();
>     final ExtendedBlock newBlock = new ExtendedBlock(bpid, blockId,
>         -1, recoveryId);
>     switch(bestState) {
> …
>     case RBW:
>     case RWR:
>       long minLength = Long.MAX_VALUE;
>       for(BlockRecord r : syncList) {
>         ReplicaState rState = r.rInfo.getOriginalReplicaState();
>         if(rState == bestState) {
>           minLength = Math.min(minLength, r.rInfo.getNumBytes());
>           participatingList.add(r);
>         }
>       }
>       newBlock.setNumBytes(minLength);
>       break;
> …
>     }
> …
>     nn.commitBlockSynchronization(block,
>         newBlock.getGenerationStamp(), newBlock.getNumBytes(), true, false,
>         datanodes, storages);
>   }
> {code}
> This code is called by the DN coordinating the block recovery. In the above case, it is
> possible for none of the rState values (reported by the DNs holding copies of the replica
> being recovered) to match bestState. This can be caused either by faulty DN code or by
> stale/modified/corrupted replica files on the DNs. When this happens, the DN ends up
> reporting a minLength of Long.MAX_VALUE.
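> As a minimal sketch (hypothetical, reusing only the names from the syncBlock snippet
> above; the actual fix may look different), the DN side could refuse to commit a length
> when no replica matched bestState:
> {code:java}
>       // If no replica matched bestState, participatingList is empty and
>       // minLength is still Long.MAX_VALUE; committing that value would
>       // corrupt the NN's record of the block length.
>       if (participatingList.isEmpty()) {
>         throw new IOException("No replica found in state " + bestState
>             + " for block " + block + "; refusing to commit a bogus length");
>       }
>       newBlock.setNumBytes(minLength);
> {code}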
> Unfortunately there is no check on the NN for replica length. See FSNamesystem.java:
> {code:java}
>   void commitBlockSynchronization(ExtendedBlock oldBlock,
>       long newgenerationstamp, long newlength,
>       boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
>       String[] newtargetstorages) throws IOException {
> …
>       if (deleteblock) {
>         Block blockToDel = ExtendedBlock.getLocalBlock(oldBlock);
>         boolean remove = iFile.removeLastBlock(blockToDel) != null;
>         if (remove) {
>           blockManager.removeBlock(storedBlock);
>         }
>       } else {
>         // update last block
>         if(!copyTruncate) {
>           storedBlock.setGenerationStamp(newgenerationstamp);
>           
>           //>>>> XXX block length is updated without any check <<<<//
>           storedBlock.setNumBytes(newlength);
>         }
> …
>     if (closeFile) {
>       LOG.info("commitBlockSynchronization(oldBlock=" + oldBlock
>           + ", file=" + src
>           + (copyTruncate ? ", newBlock=" + truncatedBlock
>               : ", newgenerationstamp=" + newgenerationstamp)
>           + ", newlength=" + newlength
>           + ", newtargets=" + Arrays.asList(newtargets) + ") successful");
>     } else {
>       LOG.info("commitBlockSynchronization(" + oldBlock + ") successful");
>     }
>   }
> {code}
> After this point the block length becomes Long.MAX_VALUE. Any subsequent block report
> (even one with the correct length) will cause the block to be marked as corrupt. This
> block could also be the last block of the file; if it is, and the client goes away, the
> NN won't be able to recover the lease and close the file because the last block is
> under-replicated.
> I believe we need a sanity check for the block size on both the DN and the NN to prevent
> such a case from happening; a sketch of what the NN-side check might look like follows.
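> A similarly minimal sketch of the NN-side check in commitBlockSynchronization, using only
> the parameters already shown in the snippet above (again hypothetical, not a committed fix):
> {code:java}
>       // Reject lengths that cannot be valid before mutating the stored
>       // block; Long.MAX_VALUE is exactly what a coordinating DN reports
>       // when it finds no usable replica.
>       if (newlength < 0 || newlength == Long.MAX_VALUE) {
>         throw new IOException("Invalid block length " + newlength
>             + " in commitBlockSynchronization for " + oldBlock);
>       }
> {code}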



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
