hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-2926) Ignoring IOExceptions on close
Date Mon, 03 Mar 2008 18:14:51 GMT
Ignoring IOExceptions on close
------------------------------

                 Key: HADOOP-2926
                 URL: https://issues.apache.org/jira/browse/HADOOP-2926
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.16.0
            Reporter: Owen O'Malley
            Assignee: dhruba borthakur
            Priority: Critical
             Fix For: 0.16.1


Currently in HDFS there are many calls to IOUtils.closeStream made from finally blocks.
I'm worried that this can lead to data corruption in the file system. Take the first instance
in DataNode.copyBlock: it writes the block and then calls closeStream on the output stream.
If there is an error at the end of the file that is detected in the close, it will be *completely*
ignored. Note that logging the error is not enough; the error must be thrown so that the
client knows the failure happened.

{code}
try {
  file1.write(...);
  file2.write(...);
} finally {
  IOUtils.closeStream(file1);
  IOUtils.closeStream(file2);
}
{code}

is *bad*. It must be rewritten as:

{code}
try {
  file1.write(...);
  file2.write(...);
  file1.close();
  file2.close();
} catch (IOException ie) {
  IOUtils.closeStream(file1);
  IOUtils.closeStream(file2);
  throw ie;
}
{code}
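
To make the difference concrete, here is a minimal, self-contained sketch. It is not from the
Hadoop source; the FailingStream class and both helper methods are invented for illustration.
It uses a stream whose close() always throws, standing in for an error detected while flushing:

{code}
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CloseDemo {
  /** A stream whose close() always fails, standing in for a flush-time error. */
  static class FailingStream extends FilterOutputStream {
    FailingStream(OutputStream out) { super(out); }
    @Override public void close() throws IOException {
      throw new IOException("error detected while flushing on close");
    }
  }

  /** The finally pattern: the close() failure is silently swallowed. */
  static void finallyPattern(OutputStream file) throws IOException {
    try {
      file.write(1);
    } finally {
      // this is the closeStream idiom: catch and drop the close() error
      try { file.close(); } catch (IOException ignored) { }
    }
  }

  /** The catch pattern: close() is in the try block, so its failure propagates. */
  static void catchPattern(OutputStream file) throws IOException {
    try {
      file.write(1);
      file.close();
    } catch (IOException ie) {
      // best-effort cleanup, then rethrow the original failure
      try { file.close(); } catch (IOException ignored) { }
      throw ie;
    }
  }

  public static void main(String[] args) {
    try {
      finallyPattern(new FailingStream(new ByteArrayOutputStream()));
      System.out.println("finally pattern: no exception, the lost write goes unnoticed");
    } catch (IOException ie) {
      System.out.println("finally pattern threw: " + ie.getMessage());
    }
    try {
      catchPattern(new FailingStream(new ByteArrayOutputStream()));
    } catch (IOException ie) {
      System.out.println("catch pattern threw: " + ie.getMessage());
    }
  }
}
{code}

Running this prints the "goes unnoticed" line for the first pattern, while the second pattern
surfaces the IOException to the caller, which is exactly the behavior the client needs.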

I also think that IOUtils.closeStream should be renamed to IOUtils.cleanupFailedStream or
something similar, to make it clear that it is only safe to use after the write operation has
already failed and is being cleaned up.
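
For reference, the closeStream idiom under discussion presumably amounts to something like the
sketch below. This is an illustration of the swallowing behavior, not the actual Hadoop source:

{code}
// Sketch only: assumes closeStream swallows any IOException from close().
public static void closeStream(java.io.Closeable stream) {
  if (stream != null) {
    try {
      stream.close();
    } catch (java.io.IOException ignored) {
      // Swallowed: the caller never learns the write may have been lost.
      // A name like cleanupFailedStream would signal that this is only
      // appropriate once the operation has already failed.
    }
  }
}
{code}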

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

