hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: HDFS open file limit
Date Mon, 27 Jan 2014 15:11:51 GMT
Hi John,

There is a concurrent-connections limit on the DNs, which caps the number of
parallel threaded connections for reading or writing blocks; it defaults to
4096 (dfs.datanode.max.transfer.threads). This is expandable via
configuration, but the default usually suffices even for fairly large
operations, since replicas help spread the read load around.
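
For reference, a sketch of how that limit could be raised in hdfs-site.xml on each DataNode (the value of 8192 below is just an illustrative choice; very old releases used the misspelled property dfs.datanode.max.xcievers instead):

```xml
<!-- hdfs-site.xml on each DataNode -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <!-- Max parallel threads serving block reads/writes; default is 4096 -->
  <value>8192</value>
</property>
```

A DataNode restart is needed for the change to take effect.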

Beyond this, you will mostly run into OS-level limits (such as the
per-process open file descriptor limit), which are also configurable.
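
Those OS limits can be inspected like so (Linux paths assumed; raising them is done via limits.conf or sysctl, depending on the distribution):

```shell
# Per-process open file descriptor limit for the current shell (soft limit)
ulimit -n

# System-wide open file handle maximum on Linux
cat /proc/sys/fs/file-max
```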
On Jan 26, 2014 11:03 PM, "John Lilley" <john.lilley@redpoint.net> wrote:

>  I have an application that wants to open a large set of files in HDFS
> simultaneously.  Are there hard or practical limits to what can be opened
> at once by a single process?  By the entire cluster in aggregate?
>
> Thanks
>
> John
>
>
>
>
>
