hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12612) DFSStripedOutputStream#close will throw if called a second time with a failed streamer
Date Fri, 13 Oct 2017 23:22:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16204340#comment-16204340 ]

Hadoop QA commented on HDFS-12612:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m  4s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 82 unchanged - 0 fixed = 83 total (was 82) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  9m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m  6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 17s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Class org.apache.hadoop.hdfs.DataStreamer$LastException is not derived from an Exception, even though it is named as such. At DataStreamer.java:[lines 288-314] |
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0de40f0 |
| JIRA Issue | HDFS-12612 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892139/HDFS-12612.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bbe909f42d85 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e163f41 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21694/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/21694/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21694/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21694/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21694/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DFSStripedOutputStream#close will throw if called a second time with a failed streamer
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-12612
>                 URL: https://issues.apache.org/jira/browse/HDFS-12612
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>    Affects Versions: 3.0.0-beta1
>            Reporter: Andrew Wang
>            Assignee: Lei (Eddy) Xu
>              Labels: hdfs-ec-3.0-must-do
>         Attachments: HDFS-12612.00.patch, HDFS-12612.01.patch
>
>
> Found while testing with Hive. We have a cluster with 2 DNs and the XOR-2-1 policy. If you write a file and call close() twice, it throws this exception:
> {noformat}
> 17/10/04 16:02:14 WARN hdfs.DFSOutputStream: Cannot allocate parity block(index=2, policy=XOR-2-1-1024k). Not enough datanodes? Exclude nodes=[]
> ...
> Caused by: java.io.IOException: Failed to get parity block, index=2
> at org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:500) ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> at org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:524) ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
> {noformat}
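> For reference, a minimal sketch of the repro (path and write size are hypothetical; it assumes the target directory has the XOR-2-1-1024k erasure coding policy set and only 2 DataNodes are live, so the parity streamer fails to allocate a block):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class DoubleCloseRepro {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     // Hypothetical path under a directory with the XOR-2-1-1024k policy.
>     FSDataOutputStream out = fs.create(new Path("/ec/double-close-test"));
>     out.write(new byte[64 * 1024]);
>     out.close(); // first close succeeds: one failed streamer is within
>                  // the policy's parity tolerance
>     out.close(); // second close throws, because closeImpl re-checks every
>                  // streamer's last exception on an already-closed stream
>   }
> }
> {code}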
> This is because in DFSStripedOutputStream#closeImpl, if the stream is already closed, we throw an exception if any of the striped streamers recorded an exception:
> {code}
>   protected synchronized void closeImpl() throws IOException {
>     if (isClosed()) {
>       final MultipleIOException.Builder b = new MultipleIOException.Builder();
>       for(int i = 0; i < streamers.size(); i++) {
>         final StripedDataStreamer si = getStripedDataStreamer(i);
>         try {
>           si.getLastException().check(true);
>         } catch (IOException e) {
>           b.add(e);
>         }
>       }
>       final IOException ioe = b.build();
>       if (ioe != null) {
>         throw ioe;
>       }
>       return;
>     }
> {code}
> I think this is incorrect: we only need to throw in this situation if there are too many failed streamers to write the file successfully. close() should also be idempotent, so if it is going to throw at all, it should throw the first time we call it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

