hadoop-hdfs-dev mailing list archives

From "liaoyuxiangqin (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-13246) FileInputStream redundant closes in readReplicasFromCache
Date Thu, 08 Mar 2018 06:21:00 GMT
liaoyuxiangqin created HDFS-13246:

             Summary: FileInputStream redundant closes in readReplicasFromCache 
                 Key: HDFS-13246
                 URL: https://issues.apache.org/jira/browse/HDFS-13246
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: datanode
    Affects Versions: 3.2.0
            Reporter: liaoyuxiangqin

While reading readReplicasFromCache() in the BlockPoolSlice class of the datanode, I found that the following code closes the FileInputStream redundantly. The IOUtils.closeStream(inputStream) call in the finally block already guarantees that the inputStream is closed correctly, so the explicit inputStream.close() can be removed. Thanks.

    FileInputStream inputStream = null;
    try {
      inputStream = fileIoProvider.getFileInputStream(volume, replicaFile);
      BlockListAsLongs blocksList =
          BlockListAsLongs.readFrom(inputStream, maxDataLength);
      if (blocksList == null) {
        return false;
      }
      for (BlockReportReplica replica : blocksList) {
        switch (replica.getState()) {
        case FINALIZED:
          addReplicaToReplicasMap(replica, tmpReplicaMap, lazyWriteReplicaMap, true);
          break;
        case RUR:
        case RBW:
        case RWR:
          addReplicaToReplicasMap(replica, tmpReplicaMap, lazyWriteReplicaMap, false);
          break;
        default:
          break;
        }
      }
      inputStream.close();
      // Now it is safe to add the replica into volumeMap
      // In case of any exception during parsing this cache file, fall back
      // to scan all the files on disk.
      for (Iterator<ReplicaInfo> iter =
          tmpReplicaMap.replicas(bpid).iterator(); iter.hasNext(); ) {
        ReplicaInfo info = iter.next();
        // We use a lightweight GSet to store replicaInfo, we need to remove
        // it from one GSet before adding to another.
        iter.remove();
        volumeMap.add(bpid, info);
      }
      LOG.info("Successfully read replica from cache file : "
          + replicaFile.getPath());
      return true;
    } catch (Exception e) {
      // Any exception we need to revert back to read from disk
      // Log the error and return false
      LOG.info("Exception occurred while reading the replicas cache file: "
          + replicaFile.getPath(), e );
      return false;
    } finally {
      if (!fileIoProvider.delete(volume, replicaFile)) {
        LOG.info("Failed to delete replica cache file: " +
            replicaFile.getPath());
      }
      // close the inputStream
      IOUtils.closeStream(inputStream);
    }
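To illustrate the redundancy, here is a minimal self-contained sketch (not HDFS code; the class name, counter, and closeStream stand-in are hypothetical). It mimics the null-safe, exception-swallowing behavior of a helper like IOUtils.closeStream and counts how many times close() runs on the success path:

```java
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;

public class RedundantCloseDemo {
  static int closeCalls = 0;

  // Minimal stand-in for IOUtils.closeStream: null-safe, ignores exceptions.
  static void closeStream(Closeable c) {
    if (c != null) {
      try {
        c.close();
      } catch (Exception ignored) {
      }
    }
  }

  static boolean readWithRedundantClose() throws IOException {
    InputStream in = null;
    try {
      // Stream whose close() increments a counter so we can observe calls.
      in = new ByteArrayInputStream(new byte[]{1, 2, 3}) {
        @Override
        public void close() throws IOException {
          closeCalls++;
          super.close();
        }
      };
      in.read();
      in.close(); // redundant: the finally block below closes the stream again
      return true;
    } finally {
      closeStream(in); // guarantees the stream is closed on every code path
    }
  }

  public static void main(String[] args) throws IOException {
    readWithRedundantClose();
    // close() ran twice: once explicitly, once via the finally block.
    System.out.println("closeCalls = " + closeCalls);
  }
}
```

Since the finally block runs on both the success and exception paths, dropping the explicit in.close() leaves exactly one close on every path, which is the change this issue proposes.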

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org
