hadoop-common-user mailing list archives

From Marc Harris <mhar...@jumptap.com>
Subject Re: What should the open file limit be for hbase
Date Mon, 28 Jan 2008 16:57:33 GMT
My schema is very simple: 4 regions in one table.

create table pagefetch (
    info MAX_VERSIONS=1,
    data MAX_VERSIONS=1,
    headers MAX_VERSIONS=1,
    redirects MAX_VERSIONS=1
);
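
For what it is worth, a back-of-the-envelope count (assuming one store per
column family per region, a few MapFiles per store, and a data file plus an
index file per MapFile) comes out to roughly

    4 regions x 4 column families x ~3 MapFiles x 2 files  ≈  100

open mapfile-related descriptors on the regionserver side.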

I am running hadoop in distributed configuration, but with only one data
node. I am running hbase with two region servers (one of which is on the
same machine as the hadoop datanode). By the way, I am seeing the
exceptions in my datanode log file, not my regionserver log file.
"lsof -p REGIONSERVER_PID | wc -l" gave 479
"lsof -p DATANODE_PID | wc -l" gave 10287

- Marc




On Mon, 2008-01-28 at 08:13 -0800, stack wrote:

> Hey Marc:
> 
> You are still seeing 'too many open files'?  What does your schema look
> like?  I added to http://wiki.apache.org/hadoop/Hbase/FAQ#5 a rough
> formula for counting how many mapfiles are open in a running regionserver.
> 
> Currently, your only recourse is upping the ulimit.  Addressing this
> scaling barrier will be a focus of the next hbase release.
> 
> St.Ack
> 
> 
> 
> Marc Harris wrote:
> > I have seen that hbase can cause "too many open files" errors. I increased
> > my limit to 10240 (10 times the previous limit) but still get errors.
> >
> > Is there a recommended value that I should set my open files limit to?
> > Is there something else I can do to reduce the number of files, perhaps
> > with some other trade-off?
> >
> > Thanks
> > - Marc
> >
> >
> >   
> 
