hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10448) CacheManager#checkLimit not correctly
Date Mon, 23 May 2016 20:57:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297068#comment-15297068 ]

Colin Patrick McCabe commented on HDFS-10448:
---------------------------------------------

This is a good find.  I think that {{computeNeeded}} should take replication into account;
the fact that it doesn't currently is a bug.  Then there would be no need to change the
callers of {{computeNeeded}}.
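
A minimal sketch of that direction (self-contained plain Java with hypothetical stand-in names, not the actual CacheManager code), assuming the Builder-style stats in the quoted snippets boil down to a byte count:

{code}
// Hypothetical stand-in for CacheDirectiveStats, just to keep the sketch runnable.
class Stats {
  final long bytesNeeded;
  Stats(long bytesNeeded) { this.bytesNeeded = bytesNeeded; }
}

class ComputeNeededSketch {
  // Proposed behavior: fold replication into computeNeeded itself, so the
  // multiply in checkLimit and the divide in the exception message both
  // become unnecessary and callers need no changes.
  static Stats computeNeeded(long bytesPerReplica, short replication) {
    return new Stats(bytesPerReplica * replication);
  }

  public static void main(String[] args) {
    Stats stats = computeNeeded(100L * 1024 * 1024, (short) 3);
    System.out.println(stats.bytesNeeded);  // 314572800 (300 MB), fully replicated
  }
}
{code}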

> CacheManager#checkLimit  not correctly
> --------------------------------------
>
>                 Key: HDFS-10448
>                 URL: https://issues.apache.org/jira/browse/HDFS-10448
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: caching
>    Affects Versions: 2.7.1
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>         Attachments: HDFS-10448.001.patch
>
>
> The logic in {{CacheManager#checkLimit}} is not correct. This method performs the
following three steps:
> First, it computes the bytes needed for the given path.
> {code}
> CacheDirectiveStats stats = computeNeeded(path, replication);
> {code}
> But the param {{replication}} is not used there, so the returned bytesNeeded is only
a single replica's value.
> {code}
> return new CacheDirectiveStats.Builder()
>         .setBytesNeeded(requestedBytes)
>         .setFilesCached(requestedFiles)
>         .build();
> {code}
> Second, the result then has to be multiplied by the replication factor before being
compared against the pool limit, because {{computeNeeded}} did not apply replication.
> {code}
> pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > pool.getLimit()
> {code}
> Third, if the size exceeds the limit, it throws an exception with a warning message.
But the message divides by replication here, even though {{stats.getBytesNeeded()}} was
already a single-replica value.
> {code}
>       throw new InvalidRequestException("Caching path " + path + " of size "
>           + stats.getBytesNeeded() / replication + " bytes at replication "
>           + replication + " would exceed pool " + pool.getPoolName()
>           + "'s remaining capacity of "
>           + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
> {code}
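> Continuing the same hypothetical 100 MB / replication 3 example, the reported size is
off by a factor of replication squared relative to what was actually checked:
> {code}
> long singleReplicaBytes = 100L * 1024 * 1024;           // stats.getBytesNeeded()
> short replication = 3;
> long reportedBytes = singleReplicaBytes / replication;  // ~33 MB printed in the message,
>                                                         // although 300 MB was checked
> {code}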



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


