hbase-user mailing list archives

From Michael Dagaev <michael.dag...@gmail.com>
Subject Re: Too many open files
Date Tue, 20 Jan 2009 16:26:37 GMT
Hi, Stack

> The 'Getting Started' for hbase advises upping file descriptors.

Yes, I should have done that.
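For reference, raising the descriptor limit usually looks something like the following; the values and paths here are illustrative, not taken from the HBase docs:

```shell
# Show the current per-process open-file limit for this shell.
ulimit -n

# To raise it permanently for the user running the region server,
# add lines like these to /etc/security/limits.conf
# (illustrative user name and values):
#   hbase  soft  nofile  32768
#   hbase  hard  nofile  32768
# then log in again so pam_limits picks them up.
```

The soft limit can also be raised for a single session with `ulimit -n <value>`, as long as it stays at or below the hard limit.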

> The 'files' are remote, over a TCP socket, on hdfs datanodes.

I see.

>> As I see, a region server holds: ~150 open epolls, ~300 open pipes,
>> ~150 open TCP connections to itself (port 50010).
>> Is it ok? Why does a region server need so many IPCs?
>> Why does it use TCP connections as local IPC? Isn't it too expensive?
> It has a socket per open file.  Its how hdfs works currently.
> The local connections are probably the regionserver talking to the local datanode.


>> Now let's say that the region server runs out of file descriptors and
>> cannot open
>> a new IPC. Can it continue working using the ~600 IPCs it opened before?
> No.  It will fail.  Up your FDs.

Anyway, it is strange that a region server needs so many FDs when we have
only a few column families. I will try to monitor FDs and gather more
info about it.
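A quick way to do that monitoring from the shell, assuming a Linux box with /proc; the pid here is this shell's own, as a stand-in for the region server's:

```shell
# Stand-in pid; in practice substitute the region server's pid
# (e.g. from `jps` or `ps`).
PID=$$

# Count open file descriptors for the process.
ls /proc/$PID/fd | wc -l

# Count established TCP connections to the local datanode
# data-transfer port (50010 is the HDFS default).
netstat -tn 2>/dev/null | grep -c ':50010'
```

Running the fd count periodically (or `lsof -p $PID`, where available, for a breakdown into sockets, pipes, and epolls) should show which kind of descriptor is growing.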

Thank you for your cooperation,
