hadoop-hdfs-dev mailing list archives

From "Yiqun Lin (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-10448) CacheManager#checkLimit not correct
Date Mon, 23 May 2016 02:55:12 GMT
Yiqun Lin created HDFS-10448:

             Summary: CacheManager#checkLimit not correct
                 Key: HDFS-10448
                 URL: https://issues.apache.org/jira/browse/HDFS-10448
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: caching
    Affects Versions: 2.7.1
            Reporter: Yiqun Lin
            Assignee: Yiqun Lin

The logic in {{CacheManager#checkLimit}} is not correct. The method goes through three steps:

First, it computes the bytes needed for the given path:
{code}
CacheDirectiveStats stats = computeNeeded(path, replication);
{code}
However, the {{replication}} parameter is never used inside {{computeNeeded}}, so the {{bytesNeeded}} it returns covers only one replica; the stats are built straight from the per-file byte count:
{code}
return new CacheDirectiveStats.Builder()
{code}
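To make the single-replica point concrete, here is a minimal, self-contained analogue of what {{computeNeeded}} effectively computes. It uses plain {{java.io.File}} and invented names rather than the HDFS INode walk, so treat it as an illustration only:
{code}
import java.io.File;

// Analogy only, not HDFS code: sum each file's length once under a path.
// The replication argument is accepted but, as in computeNeeded, never read,
// so the result is a single replica's worth of bytes.
public class ComputeNeededAnalogy {
  static long bytesNeededOneReplica(File dir, short replication) {
    long requestedBytes = 0;
    File[] children = dir.listFiles();
    if (children != null) {
      for (File child : children) {
        if (child.isFile()) {
          requestedBytes += child.length(); // counted once per file, one replica
        }
      }
    }
    return requestedBytes;
  }

  public static void main(String[] args) {
    System.out.println(bytesNeededOneReplica(new File("."), (short) 3) + " bytes");
  }
}
{code}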

Second, the single-replica value therefore has to be multiplied by the replication when it is compared against the pool limit, because {{computeNeeded}} did not apply the replication:
{code}
pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > pool.getLimit()
{code}
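A quick worked example (hypothetical numbers, standing in for {{stats.getBytesNeeded()}}, {{pool.getLimit()}} and {{pool.getBytesNeeded()}}) shows that this comparison is consistent, since it accounts for all replicas:
{code}
// Hypothetical numbers, not HDFS code: the limit comparison itself is consistent.
public class LimitCheckExample {
  public static void main(String[] args) {
    long statsBytesNeeded = 100L * 1024 * 1024; // computeNeeded result: one replica, 100 MB
    short replication = 3;
    long poolLimit = 200L * 1024 * 1024;        // pool limit: 200 MB
    long poolBytesNeeded = 0;                   // nothing cached in the pool yet

    // 0 + 100 MB * 3 = 300 MB > 200 MB, so the request is rejected, as expected.
    boolean exceedsLimit =
        poolBytesNeeded + statsBytesNeeded * replication > poolLimit;
    System.out.println("exceeds limit: " + exceedsLimit);
  }
}
{code}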

Third, if the size exceeds the limit, a warning is raised via an exception. But here the size is divided by the replication, even though {{stats.getBytesNeeded()}} is already a single replica's value, so the reported size is smaller than what was actually requested:
{code}
      throw new InvalidRequestException("Caching path " + path + " of size "
          + stats.getBytesNeeded() / replication + " bytes at replication "
          + replication + " would exceed pool " + pool.getPoolName()
          + "'s remaining capacity of "
          + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
{code}
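The following self-contained snippet (same hypothetical numbers as above) shows how misleading the message becomes. A straightforward fix would be to drop the division and report either the single-replica size or the total across replicas, though the committed patch may differ; this is only a sketch of the direction:
{code}
// Hypothetical numbers, not HDFS code: the "/ replication" in the message
// shrinks the reported size below even the single-replica value.
public class ReportedSizeExample {
  public static void main(String[] args) {
    long statsBytesNeeded = 100L * 1024 * 1024; // computeNeeded result: one replica, 100 MB
    short replication = 3;

    System.out.println("message reports: "
        + statsBytesNeeded / replication + " bytes");  // ~33 MB, misleading
    System.out.println("one replica:     " + statsBytesNeeded + " bytes");
    System.out.println("all replicas:    "
        + statsBytesNeeded * replication + " bytes");  // what the check uses
  }
}
{code}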
