hadoop-mapreduce-user mailing list archives

From sudhakara st <sudhakara...@gmail.com>
Subject Re: HDFS open file limit
Date Mon, 27 Jan 2014 11:34:55 GMT
There is no open file limit in HDFS itself. The 'Too many open files' error
comes from the operating system. Increase the system-wide maximum number of
open files and the per-user/group/process file descriptor limits.
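For reference, a minimal sketch of how those limits are typically raised on a
Linux node (the 'hdfs' user name, file paths and values below are illustrative
assumptions, not Hadoop defaults):

    # check the current per-process limit for the user running the daemon/client
    ulimit -n

    # per-user limits, e.g. in /etc/security/limits.conf (example values)
    hdfs  soft  nofile  32768
    hdfs  hard  nofile  65536

    # system-wide limit, e.g. in /etc/sysctl.conf (example value)
    fs.file-max = 2097152

    # reload sysctl settings without a reboot
    sysctl -p

Note that changes in limits.conf only take effect on the next login session,
so the affected user has to log in again (or the daemon be restarted) before
the new descriptor limits apply.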


On Mon, Jan 27, 2014 at 1:52 AM, Bertrand Dechoux <dechouxb@gmail.com> wrote:

> At least for each machine, there is the *ulimit* that needs to be verified.
>
> Regards
>
> Bertrand
>
> Bertrand Dechoux
>
>
> On Sun, Jan 26, 2014 at 6:32 PM, John Lilley <john.lilley@redpoint.net> wrote:
>
>>  I have an application that wants to open a large set of files in HDFS
>> simultaneously.  Are there hard or practical limits to what can be opened
>> at once by a single process?  By the entire cluster in aggregate?
>>
>> Thanks
>>
>> John
>>
>
>


-- 

Regards,
...Sudhakara.st
