hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10448) CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with replication factors other than 1
Date Tue, 21 Jun 2016 01:24:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15340851#comment-15340851 ]

Colin Patrick McCabe commented on HDFS-10448:
---------------------------------------------

Hi [~linyiqun],

Sorry, I misread the patch the first time around.  You are indeed changing computeNeeded to
take the replication factor into account, which seems like a better way to go.

+1

> CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with replication factors other than 1
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10448
>                 URL: https://issues.apache.org/jira/browse/HDFS-10448
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: caching
>    Affects Versions: 2.7.1
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>         Attachments: HDFS-10448.001.patch
>
>
> The logic in {{CacheManager#checkLimit}} is not correct. The method performs three steps:
> First, it will compute needed bytes for the specific path.
> {code}
> CacheDirectiveStats stats = computeNeeded(path, replication);
> {code}
> But the parameter {{replication}} is not used here, so the bytesNeeded reflects only a single replica's value.
> {code}
> return new CacheDirectiveStats.Builder()
>         .setBytesNeeded(requestedBytes)
>         .setFilesCached(requestedFiles)
>         .build();
> {code}
> Second, the result is then multiplied by the replication factor when comparing against the pool's limit, because {{computeNeeded}} did not account for replication.
> {code}
> pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > pool.getLimit()
> {code}
> Third, if the size exceeds the limit, an exception with warning info is thrown. The value is divided by replication here, even though {{stats.getBytesNeeded()}} already holds just a single replica's value.
> {code}
>       throw new InvalidRequestException("Caching path " + path + " of size "
>           + stats.getBytesNeeded() / replication + " bytes at replication "
>           + replication + " would exceed pool " + pool.getPoolName()
>           + "'s remaining capacity of "
>           + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
> {code}
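For reference, a minimal sketch of the fix direction discussed above (scaling by replication once, inside computeNeeded itself, so callers neither multiply nor divide). This uses hypothetical simplified types, not the real CacheDirectiveStats/CachePool classes:

```java
// Simplified illustration only; the real CacheManager walks INodes and
// builds a CacheDirectiveStats via its Builder.
public class CacheLimitSketch {
    // Hypothetical stand-in for CacheDirectiveStats.
    static final class Stats {
        final long bytesNeeded;
        Stats(long bytesNeeded) { this.bytesNeeded = bytesNeeded; }
    }

    // Fixed version: bytesNeeded is scaled by replication here, once.
    static Stats computeNeeded(long fileBytes, short replication) {
        return new Stats(fileBytes * replication);
    }

    // checkLimit no longer multiplies or divides by replication.
    static void checkLimit(long poolBytesNeeded, long poolLimit,
                           long fileBytes, short replication) {
        Stats stats = computeNeeded(fileBytes, replication);
        if (poolBytesNeeded + stats.bytesNeeded > poolLimit) {
            throw new IllegalArgumentException("Caching path of size "
                + stats.bytesNeeded + " bytes at replication " + replication
                + " would exceed pool's remaining capacity of "
                + (poolLimit - poolBytesNeeded) + " bytes.");
        }
    }
}
```

With this shape, a 100-byte file at replication 3 needs 300 bytes against the pool limit, and the exception message reports the fully replicated size directly.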



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
