hbase-dev mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-24) Scaling: Too many open file handles to datanodes
Date Sat, 12 Jul 2008 17:15:31 GMT

    [ https://issues.apache.org/jira/browse/HBASE-24?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613104#action_12613104 ]

stack commented on HBASE-24:
----------------------------

Thanks for doing the profiling, LN.  Do you think we should put an upper bound on the
number of regions a particular regionserver can carry at any one time, or an upper bound
on the number of open Readers?  I wonder, if you want to carry many regions, whether
lowering the compaction threshold from 3 to 2 -- or even to 1 -- would make any difference
in our memory profile (at a CPU cost)?  We load the index and keep it around to avoid
doing it on each random access -- maybe if we had a bounded pool of open MapFiles, we
could move files in and out of the pool on some kind of LRU basis?  A sketch of that idea
is below.
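
For illustration only: a minimal sketch of the bounded-LRU pool idea, not HBase code.
MapFileReader is a hypothetical stand-in for a MapFile.Reader wrapper, and MAX_OPEN is an
arbitrary bound.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for a MapFile.Reader wrapper; only what the pool
// needs is sketched here.
class MapFileReader {
  private final String path;
  MapFileReader(String path) { this.path = path; }  // real code would load the index here
  static MapFileReader open(String path) { return new MapFileReader(path); }
  void close() { /* real code would release streams and sockets here */ }
}

// Bounded pool: at most MAX_OPEN readers stay open.  The access-ordered
// LinkedHashMap acts as an LRU; when the bound is exceeded, the
// least-recently-used reader is closed and evicted.
public class BoundedReaderPool {
  private static final int MAX_OPEN = 1024;  // arbitrary bound, for illustration

  private final Map<String, MapFileReader> pool =
    new LinkedHashMap<String, MapFileReader>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String, MapFileReader> eldest) {
        if (size() > MAX_OPEN) {
          eldest.getValue().close();  // release the LRU reader's file handles
          return true;                // and drop it from the pool
        }
        return false;
      }
    };

  public synchronized MapFileReader get(String path) {
    MapFileReader reader = pool.get(path);  // marks this path most-recently-used
    if (reader == null) {
      reader = MapFileReader.open(path);    // reopen on demand (pays the index load)
      pool.put(path, reader);
    }
    return reader;
  }
}

The trade-off is the one named above: a pool miss pays the index load we currently avoid
by keeping every Reader open, so the bound trades random-read latency for file handles.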

> Scaling: Too many open file handles to datanodes
> ------------------------------------------------
>
>                 Key: HBASE-24
>                 URL: https://issues.apache.org/jira/browse/HBASE-24
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: stack
>            Priority: Critical
>
> We've been here before (HADOOP-2341).
> Today Rapleaf gave me an lsof listing from a regionserver.  It had thousands of open
> sockets to datanodes, all in ESTABLISHED and CLOSE_WAIT state.  On average they seem to
> have about ten file descriptors/sockets open per region (they have 3 column families,
> IIRC; per family there can be between 1 and 5 or so mapfiles open -- 3 is the usual max,
> but while compacting we open a new one, etc.).
> They have thousands of regions.  400 regions -- ~100G, which is not that much -- take
> about 4k open file handles (a worked version of this arithmetic follows the quoted
> description below).
> If they want a regionserver to serve a decent disk's worth -- 300-400G -- then that's
> maybe 1600 regions... 16k file handles.  With more than just 3 column families, we are
> in danger of blowing out limits if they are 32k.
> A dfsclient that used non-blocking i/o would help applications like hbase (the datanode
> doesn't have the problem as badly -- the CLOSE_WAIT sockets on the regionserver side,
> the bulk of the open fds in the Rapleaf listing, have no corresponding open resource on
> the datanode end).  A sketch of the non-blocking pattern also follows below.
> Could also just open mapfiles as needed, but that'd kill our random read performance,
> and it's bad enough already.
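
To make the handle arithmetic in the description concrete (the ~10 fds/region figure is
the observed average from the lsof listing; the per-family breakdown is my assumption):

    3 families/region x ~3 open mapfiles/family  ~=  ~10 fds per region
      400 regions x ~10 fds/region  =   ~4,000 open file handles  (~100G of data)
    1,600 regions x ~10 fds/region  =  ~16,000 open file handles  (300-400G of data)

Against a 32k per-process fd limit, one more column family or a burst of compactions gets
uncomfortably close to the ceiling.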
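
On the non-blocking i/o point, here is a minimal, self-contained sketch of the pattern a
dfsclient could adopt -- one selector thread servicing many datanode sockets instead of
one blocking reader per socket.  The hostnames and port are placeholders, and no real
HDFS wire protocol is spoken here.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioFetchSketch {
  public static void main(String[] args) throws IOException {
    Selector selector = Selector.open();

    // Placeholder datanode addresses -- illustration only.
    String[] datanodes = { "dn1.example.com", "dn2.example.com" };
    for (String host : datanodes) {
      SocketChannel ch = SocketChannel.open();
      ch.configureBlocking(false);
      ch.connect(new InetSocketAddress(host, 50010));
      ch.register(selector, SelectionKey.OP_CONNECT);
    }

    ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
    while (selector.select() > 0) {
      Iterator<SelectionKey> it = selector.selectedKeys().iterator();
      while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        SocketChannel ch = (SocketChannel) key.channel();
        if (key.isConnectable() && ch.finishConnect()) {
          key.interestOps(SelectionKey.OP_READ);  // connected; now watch for data
        } else if (key.isReadable()) {
          buf.clear();
          if (ch.read(buf) < 0) {
            ch.close();  // peer closed: release the fd right away
          }
          // else: hand buf off to whichever read request this socket serves
        }
      }
    }
  }
}

The last branch is the point of the pattern: a half-closed connection gets noticed and
closed promptly, instead of its fd lingering in CLOSE_WAIT behind a blocked read.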

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

