hadoop-hdfs-issues mailing list archives

From "Eric Sirianni (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-5792) DFSOutputStream.close() throws exceptions with unintuitive stacktraces
Date Thu, 16 Jan 2014 21:33:26 GMT
Eric Sirianni created HDFS-5792:
-----------------------------------

             Summary: DFSOutputStream.close() throws exceptions with unintuitive stacktraces
                 Key: HDFS-5792
                 URL: https://issues.apache.org/jira/browse/HDFS-5792
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs-client
            Reporter: Eric Sirianni
            Priority: Minor


Given the following client code:
{code}
class Foo {
  void test() {
    FSDataOutputStream out = ...;
    out.write(...);
    out.close();
  }
}
{code}

A programmer would expect an exception thrown from {{out.close()}} to include the stack trace
of the calling thread:
{noformat}
...
FSDataOutputStream.close()
Foo.test()
...
{noformat}

Instead, it includes the stack trace from the {{DataStreamer}} thread:
{noformat}
java.io.IOException: All datanodes 127.0.0.1:49331 are bad. Aborting...
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1023)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:838)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:483)
{noformat}

This makes it difficult to identify the _client_ call stack that was actually being unwound when
the exception was thrown.

A simple fix seems to be modifying {{DFSOutputStream.close()}} to wrap the {{lastException}}
from the {{DataStreamer}} thread in a new exception, thereby capturing both stack traces.

I can work on a patch for this.  Can someone confirm that my approach is acceptable?
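To illustrate the proposed wrapping, here is a minimal, standalone sketch. The class name, the {{lastException}} field, and the simplified {{close()}} are hypothetical stand-ins for the actual {{DFSOutputStream}}/{{DataStreamer}} internals; the point is only that rethrowing via a new exception with the original as its cause yields both stack traces:

```java
import java.io.IOException;

public class CloseWrapDemo {
    // Hypothetical stand-in for the exception the DataStreamer thread
    // recorded when the pipeline failed.
    static IOException lastException =
        new IOException("All datanodes 127.0.0.1:49331 are bad. Aborting...");

    // Stand-in for DFSOutputStream.close(): instead of rethrowing
    // lastException directly (which carries only the streamer thread's
    // trace), wrap it so the new exception is filled in with the
    // *calling* thread's stack trace, while getCause() retains the
    // streamer-side trace.
    static void close() throws IOException {
        if (lastException != null) {
            throw new IOException(lastException);
        }
    }

    public static void main(String[] args) {
        try {
            close();
        } catch (IOException e) {
            // The top-level trace now points at close()/main();
            // the original failure is preserved as the cause.
            e.printStackTrace();
        }
    }
}
```

Printed with {{printStackTrace()}}, the caller sees a `Caused by:` section containing the streamer-thread trace underneath the frames of the thread that invoked {{close()}}.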



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
