hadoop-common-dev mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-3592) org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the destination can't be opened
Date Wed, 18 Jun 2008 16:04:45 GMT
org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the destination can't be opened
-----------------------------------------------------------------------------------------------

                 Key: HADOOP-3592
                 URL: https://issues.apache.org/jira/browse/HADOOP-3592
             Project: Hadoop Core
          Issue Type: Bug
          Components: fs
    Affects Versions: 0.18.0
            Reporter: Steve Loughran
            Priority: Minor


FileUtil.copy() relies on IOUtils.copyBytes() to close the incoming streams, which it normally does.

But if dstFS.create() raises any kind of IOException, then the input stream "in", which was
created on the line above, will never get closed, and is therefore leaked.

      InputStream in = srcFS.open(src);
      OutputStream out = dstFS.create(dst, overwrite);
      IOUtils.copyBytes(in, out, conf, true);

A try/catch wrapper around the open and create operations could close the streams if an
exception is thrown at that point in the copy process, as sketched below.
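
For illustration, a minimal sketch of such a wrapper. It assumes the same variables as the
snippet above (srcFS, dstFS, src, dst, overwrite, conf) and Hadoop's IOUtils.closeStream()
helper, which quietly closes a stream and ignores secondary exceptions; this is a hypothetical
sketch, not a committed patch:

      // Hypothetical fix sketch: open both streams inside a try block so that
      // a failure in dstFS.create() (or anywhere else) still closes "in".
      InputStream in = null;
      OutputStream out = null;
      try {
        in = srcFS.open(src);
        out = dstFS.create(dst, overwrite);
        IOUtils.copyBytes(in, out, conf, true);
      } catch (IOException e) {
        // closeStream() tolerates null and swallows close failures,
        // so the original IOException is the one propagated to the caller.
        IOUtils.closeStream(out);
        IOUtils.closeStream(in);
        throw e;
      }

Closing via closeStream() in the catch block means the caller still sees the original
IOException rather than any secondary failure from close().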

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

