hadoop-common-user mailing list archives

From Raghu Angadi <rang...@yahoo-inc.com>
Subject Re: "Too many open files" error, which gets resolved after some time
Date Tue, 23 Jun 2009 15:41:58 GMT
Stas Oskin wrote:
> Hi.
> 
> Any idea if calling System.gc() periodically will help reduce the number
> of pipes / epolls?

Since you have HADOOP-4346, you should not have excessive epoll/pipe fds 
open. First of all, do you still have the problem? If yes, how many 
Hadoop streams do you have open at a time?

System.gc() won't help if you have HADOOP-4346.

Raghu.
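
One way to check whether the epoll/pipe descriptors are actually piling
up is to sample the JVM's open-fd count over time. Below is a minimal
sketch, assuming a Linux host where /proc/self/fd holds one entry per
open descriptor of the current process; the class name and sampling
interval are illustrative, not part of Hadoop:

    import java.io.File;

    public class FdWatcher {
        public static void main(String[] args) throws InterruptedException {
            // On Linux, /proc/self/fd lists every descriptor the current
            // process holds: files, sockets, pipes, and epoll fds alike.
            File fdDir = new File("/proc/self/fd");
            while (true) {
                String[] entries = fdDir.list();
                int count = (entries == null) ? 0 : entries.length;
                System.out.println("open fds: " + count);
                Thread.sleep(10000); // sample every 10 seconds
            }
        }
    }

The same count is available from the shell with ls /proc/<pid>/fd | wc -l,
which is handy for watching an already-running DataNode or client without
modifying it.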

> Thanks for your opinion!
> 
> 2009/6/22 Stas Oskin <stas.oskin@gmail.com>
> 
>> Ok, seems this issue is already patched in the Hadoop distro I'm using
>> (Cloudera).
>>
>> Any idea if I should still call GC manually/periodically to clean out all
>> the stale pipes / epolls?
>>
>> 2009/6/22 Steve Loughran <stevel@apache.org>
>>
>>> Stas Oskin wrote:
>>>
>>>> Hi.
>>>> So what would be the recommended approach for the pre-0.20.x series?
>>>>
>>>> To ensure each file is used by only one thread, and that it is then safe
>>>> to close the handle in that thread?
>>>>
>>>> Regards.
>>>>
>>> Good question - I'm not sure. For anything you get with FileSystem.get(),
>>> it's now dangerous to close, so try just setting the reference to null and
>>> hoping that GC will do the finalize() when needed (see the sketch after
>>> this quoted thread).
>>>
> 
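
Steve's warning follows from the caching in FileSystem.get(): repeated
calls with the same configuration hand back the same shared instance, so
a close() in one thread pulls the object out from under every other
thread still holding it. A minimal sketch of the hazard, with the unsafe
calls left commented out (the class name is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class SharedFsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Both calls return the same cached FileSystem instance.
            FileSystem fs1 = FileSystem.get(conf);
            FileSystem fs2 = FileSystem.get(conf);
            System.out.println("same instance: " + (fs1 == fs2)); // true

            // fs1.close(); // closes the shared object out from under fs2;
            //              // on HDFS, later use of fs2 fails with an IOException

            // Safer on pre-0.20 releases: close the individual streams you
            // opened, drop the FileSystem reference, and let the JVM clean
            // up on exit.
            fs1 = null;
            fs2 = null;
        }
    }

Closing the streams you open yourself still matters; it is only the shared
FileSystem handle itself that is risky to close.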

