hadoop-hdfs-issues mailing list archives

From "Lars Hofhansl (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5461) fallback to non-ssr(local short circuit reads) while oom detected
Date Wed, 13 Nov 2013 17:33:25 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821560#comment-13821560 ]

Lars Hofhansl commented on HDFS-5461:
-------------------------------------

The issue is that the JDK only collects direct byte buffers during a full GC, and direct
buffer memory is limited separately from the general heap. HBase keeps a reader open for
each store file, so we end up with a lot of direct memory in use.
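
As a rough, standalone illustration (not HDFS or HBase code): run something like the snippet
below with -XX:MaxDirectMemorySize=64m and it fails with "OutOfMemoryError: Direct buffer
memory" while the heap is still nearly empty, because the direct-memory limit is tracked
separately and the native memory is only released once a full GC collects the ByteBuffer
objects that reference it.

{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Standalone sketch: direct buffers are capped separately from the Java heap.
// Run with e.g. -XX:MaxDirectMemorySize=64m; the loop dies with
// "OutOfMemoryError: Direct buffer memory" long before the heap fills up.
public class DirectBufferLimitDemo {
  public static void main(String[] args) {
    List<ByteBuffer> held = new ArrayList<>(); // stands in for one open reader per store file
    try {
      while (true) {
        held.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MB each, like the SSR default
      }
    } catch (OutOfMemoryError oom) {
      System.err.println("direct memory exhausted after " + held.size() + " buffers: " + oom);
    }
  }
}
{code}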

I was actually curious about 1MB as the default size; it seems even as little as 8KB should
be OK.
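
If shrinking the buffer turns out to be enough of a workaround, something along these lines
should do it on the client side (just a sketch, assuming dfs.client.read.shortcircuit.buffer.size
is the key that sizes the per-reader buffer):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch: shrink the per-reader short-circuit buffer from the 1 MB default to 8 KB.
// Assumes dfs.client.read.shortcircuit.buffer.size is the client-side key that
// sizes the direct buffers used for short-circuit reads.
public class SmallSsrBuffers {
  public static Configuration withSmallSsrBuffers() {
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.client.read.shortcircuit", true);             // enable SSR
    conf.setInt("dfs.client.read.shortcircuit.buffer.size", 8 * 1024); // 8 KB per reader
    return conf;
  }
}
{code}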

> fallback to non-ssr(local short circuit reads) while oom detected
> -----------------------------------------------------------------
>
>                 Key: HDFS-5461
>                 URL: https://issues.apache.org/jira/browse/HDFS-5461
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Liang Xie
>            Assignee: Liang Xie
>         Attachments: HDFS-5461.txt
>
>
> Currently, the DirectBufferPool used by the SSR feature doesn't seem to have any upper bound
other than the -XX:MaxDirectMemorySize VM option, so there is a risk of running into a direct
memory OOM. See HBASE-8143 for an example.
> IMHO, maybe we could improve it a bit:
> 1) detect an OOM (or a configured upper limit being reached) in the caller, then fall back to non-SSR reads
> 2) add a new metric for the raw direct memory size currently consumed.
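
For reference, a minimal sketch of ideas (1) and (2) from the description above (this is not
the attached HDFS-5461.txt patch; allocateOrNull and consumedDirectBytes are made-up names):
catch the direct-memory OOM at allocation time and return null so the caller can fall back to
a non-SSR read, and keep a simple counter of the raw direct memory handed out.

{code:java}
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only -- not the attached patch. allocateOrNull() and
// consumedDirectBytes are hypothetical names used for illustration.
public final class SsrBufferFallback {

  // Idea (2): raw direct memory currently handed out for SSR buffers.
  private static final AtomicLong consumedDirectBytes = new AtomicLong();

  // Idea (1): return null instead of propagating the direct-memory OOM,
  // so the caller can fall back to an ordinary (non-SSR) read path.
  public static ByteBuffer allocateOrNull(int size) {
    try {
      ByteBuffer buf = ByteBuffer.allocateDirect(size);
      consumedDirectBytes.addAndGet(size);
      return buf;
    } catch (OutOfMemoryError oom) {
      return null;
    }
  }

  public static long getConsumedDirectBytes() {
    return consumedDirectBytes.get();
  }

  public static void main(String[] args) {
    ByteBuffer buf = allocateOrNull(1024 * 1024);
    System.out.println(buf == null
        ? "falling back to non-short-circuit read"
        : "SSR buffer allocated, direct bytes = " + getConsumedDirectBytes());
  }
}
{code}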



--
This message was sent by Atlassian JIRA
(v6.1#6144)
