hadoop-hdfs-dev mailing list archives

From "Eric Sirianni (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-5792) DFSOutputStream.close() throws exceptions with unintuitive stacktraces
Date Thu, 16 Jan 2014 21:33:26 GMT
Eric Sirianni created HDFS-5792:

             Summary: DFSOutputStream.close() throws exceptions with unintuitive stacktraces
                 Key: HDFS-5792
                 URL: https://issues.apache.org/jira/browse/HDFS-5792
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs-client
            Reporter: Eric Sirianni
            Priority: Minor

Given the following client code:
class Foo {
  void test() throws IOException {
    FSDataOutputStream out = ...;
    out.close(); // exception thrown here
  }
}

A programmer would expect an exception thrown from {{out.close()}} to include the stack
trace of the calling thread (i.e. {{Foo.test()}}).

Instead, it includes the stack trace from the {{DataStreamer}} thread:
java.io.IOException: All datanodes are bad. Aborting...
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1023)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:838)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:483)

This makes it difficult to determine the _client_ call stack that was actually unwound when
the exception was thrown.

A simple fix seems to be modifying {{DFSOutputStream.close()}} to wrap the {{lastException}}
from the {{DataStreamer}} thread in a new {{IOException}}, thereby capturing both stack traces.
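
A minimal sketch of that wrapping, using a simplified stand-in class (the {{lastException}}
name mirrors the field in {{DFSOutputStream}}; the class and everything else here are
illustrative, not the actual patch):

import java.io.IOException;

// Illustrative stand-in for DFSOutputStream; only the wrapping logic is shown.
class StreamerBackedOutputStream {
  // Set by the background DataStreamer thread when the pipeline fails.
  private volatile IOException lastException;

  public void close() throws IOException {
    IOException streamerFailure = lastException;
    if (streamerFailure != null) {
      // The new IOException is constructed on the *calling* thread, so its
      // stack trace shows the client frames (e.g. Foo.test()); the original
      // DataStreamer trace is preserved as the cause.
      throw new IOException("DataStreamer reported: "
          + streamerFailure.getMessage(), streamerFailure);
    }
    // ... normal close path omitted from this sketch ...
  }
}

The rethrown exception would then print the client-side frames first, with the
{{DataStreamer}} frames following under a {{Caused by:}} section.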

I can work on a patch for this.  Can someone confirm that my approach is acceptable?
