hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9236) Missing sanity check for block size during block recovery
Date Wed, 04 Nov 2015 04:37:27 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988859#comment-14988859 ]

Hadoop QA commented on HDFS-9236:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s {color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 59s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 46s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 43s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s {color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 162m 19s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
| JDK v1.7.0_79 Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.0 Server=1.7.0 Image:test-patch-base-hadoop-date2015-11-04 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12770451/HDFS-9236.006.patch |
| JIRA Issue | HDFS-9236 |
| Optional Tests | asflicense javac javadoc mvninstall unit findbugs checkstyle compile |
| uname | Linux 93973b5919f8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-d0f6847/precommit/personality/hadoop.sh |
| git revision | trunk / 194251c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/13370/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/13370/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/13370/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/13370/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13370/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt |
| JDK v1.7.0_79 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13370/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/13370/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Max memory used | 226MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13370/console |


This message was automatically generated.



> Missing sanity check for block size during block recovery
> ---------------------------------------------------------
>
>                 Key: HDFS-9236
>                 URL: https://issues.apache.org/jira/browse/HDFS-9236
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: HDFS
>    Affects Versions: 2.7.1
>            Reporter: Tony Wu
>            Assignee: Tony Wu
>         Attachments: HDFS-9236.001.patch, HDFS-9236.002.patch, HDFS-9236.003.patch, HDFS-9236.004.patch, HDFS-9236.005.patch, HDFS-9236.006.patch
>
>
> Ran into an issue while running tests against faulty DataNode code.
> Currently in DataNode.java:
> {code:java}
>   /** Block synchronization */
>   void syncBlock(RecoveringBlock rBlock,
>                          List<BlockRecord> syncList) throws IOException {
> …
>     // Calculate the best available replica state.
>     ReplicaState bestState = ReplicaState.RWR;
> …
>     // Calculate list of nodes that will participate in the recovery
>     // and the new block size
>     List<BlockRecord> participatingList = new ArrayList<BlockRecord>();
>     final ExtendedBlock newBlock = new ExtendedBlock(bpid, blockId,
>         -1, recoveryId);
>     switch(bestState) {
> …
>     case RBW:
>     case RWR:
>       long minLength = Long.MAX_VALUE;
>       for(BlockRecord r : syncList) {
>         ReplicaState rState = r.rInfo.getOriginalReplicaState();
>         if(rState == bestState) {
>           minLength = Math.min(minLength, r.rInfo.getNumBytes());
>           participatingList.add(r);
>         }
>       }
>       newBlock.setNumBytes(minLength);
>       break;
> …
>     }
> …
>     nn.commitBlockSynchronization(block,
>         newBlock.getGenerationStamp(), newBlock.getNumBytes(), true, false,
>         datanodes, storages);
>   }
> {code}
> This code is called by the DN coordinating the block recovery. In the above case, it
> is possible for none of the rStates (reported by the DNs holding copies of the replica
> being recovered) to match the bestState. This can be caused either by faulty DN code
> or by stale/modified/corrupted replica files on a DN. When this happens, the DN ends
> up reporting a minLength of Long.MAX_VALUE.
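> A minimal, hypothetical sketch of a DN-side guard (illustrative only, not taken from
> any attached patch): if no replica in syncList matched bestState, participatingList
> stays empty and minLength is never updated from its Long.MAX_VALUE sentinel, so the
> recovery should abort rather than commit a bogus length:
> {code:java}
>       // Hypothetical guard in syncBlock(), after the RBW/RWR loop:
>       // no replica matched bestState, so minLength is still the
>       // Long.MAX_VALUE sentinel and must not be committed.
>       if (participatingList.isEmpty()) {
>         throw new IOException("No replica matched best state " + bestState
>             + " during recovery of " + rBlock + "; aborting sync");
>       }
>       newBlock.setNumBytes(minLength);
> {code}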
> Unfortunately there is no check on the NN for replica length. See FSNamesystem.java:
> {code:java}
>   void commitBlockSynchronization(ExtendedBlock oldBlock,
>       long newgenerationstamp, long newlength,
>       boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
>       String[] newtargetstorages) throws IOException {
> …
>       if (deleteblock) {
>         Block blockToDel = ExtendedBlock.getLocalBlock(oldBlock);
>         boolean remove = iFile.removeLastBlock(blockToDel) != null;
>         if (remove) {
>           blockManager.removeBlock(storedBlock);
>         }
>       } else {
>         // update last block
>         if(!copyTruncate) {
>           storedBlock.setGenerationStamp(newgenerationstamp);
>           
>           //>>>> XXX block length is updated without any check <<<<//
>           storedBlock.setNumBytes(newlength);
>         }
> …
>     if (closeFile) {
>       LOG.info("commitBlockSynchronization(oldBlock=" + oldBlock
>           + ", file=" + src
>           + (copyTruncate ? ", newBlock=" + truncatedBlock
>               : ", newgenerationstamp=" + newgenerationstamp)
>           + ", newlength=" + newlength
>           + ", newtargets=" + Arrays.asList(newtargets) + ") successful");
>     } else {
>       LOG.info("commitBlockSynchronization(" + oldBlock + ") successful");
>     }
>   }
> {code}
> After this point the block length becomes Long.MAX_VALUE. Any subsequent block report
> (even one with the correct length) will cause the block to be marked as corrupt. Since
> this block could be the last block of the file, if the client goes away the NN won't
> be able to recover the lease and close the file, because the last block is
> under-replicated.
> I believe we need a sanity check for block size on both the DN and the NN to prevent
> such a case from happening.
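> As a rough sketch of the NN-side check (hypothetical; placement and message wording
> are illustrative), commitBlockSynchronization() could validate newlength before
> applying it to the stored block:
> {code:java}
>         // Hypothetical validation in commitBlockSynchronization():
>         // reject obviously bogus lengths before touching the stored block.
>         if (newlength < 0 || newlength == Long.MAX_VALUE) {
>           throw new IOException("commitBlockSynchronization(" + oldBlock
>               + ") rejected: unexpected new block length " + newlength);
>         }
>         storedBlock.setNumBytes(newlength);
> {code}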



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
