hadoop-common-user mailing list archives

From Stas Oskin <stas.os...@gmail.com>
Subject Re: "Too many open files" error, which gets resolved after some time
Date Mon, 22 Jun 2009 20:58:46 GMT
Ok, seems this issue is already patched in the Hadoop distro I'm using
(Cloudera).

Any idea if I should still call GC manually/periodically to clean out all
the stale pipes / epolls?

2009/6/22 Steve Loughran <stevel@apache.org>

> Stas Oskin wrote:
>
>> Hi.
>>
>> So what would be the recommended approach to pre-0.20.x series?
>>
>> To ensure each file is used by only one thread, and that it is then safe to
>> close the handle in that thread?
>>
>> Regards.
>>
>
> good question -I'm not sure. For anything you get with FileSystem.get(),
> it's now dangerous to close, so try just setting the reference to null and
> hoping that GC will do the finalize() when needed
>
>
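For what it's worth, a minimal sketch of the "null the reference and let GC
finalize" pattern Steve describes. Note this is illustrative only: the
FakeFileSystem class below is a hypothetical stand-in for
org.apache.hadoop.fs.FileSystem (whose instances are cached and shared, which
is why calling close() directly is dangerous), and System.gc() is only a hint,
so the JVM may delay finalization arbitrarily.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class NullAndGcSketch {
    // Tracks how many simulated file handles are currently open.
    static final AtomicInteger openHandles = new AtomicInteger(0);

    // Hypothetical stand-in for Hadoop's FileSystem; NOT the real class.
    static class FakeFileSystem {
        FakeFileSystem() {
            openHandles.incrementAndGet(); // "open" a handle on creation
        }

        // Finalizer releases the handle when GC collects the object,
        // mirroring the finalizer-based cleanup relied on here.
        @Override
        protected void finalize() {
            openHandles.decrementAndGet();
        }
    }

    public static void main(String[] args) throws Exception {
        FakeFileSystem fs = new FakeFileSystem();
        // ... use fs for reads/writes ...

        fs = null;                 // drop the only reference instead of close()
        System.gc();               // hint only; collection is not guaranteed
        System.runFinalization();  // encourage pending finalizers to run
        Thread.sleep(100);         // give the finalizer thread a moment

        System.out.println("open handles: " + openHandles.get());
    }
}
```

The point of the pattern is that dropping the reference lets the (possibly
shared) object be cleaned up only once nothing else holds it, whereas an
explicit close() would tear it down under any other thread still using it.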
