hadoop-mapreduce-user mailing list archives

From John Lilley <john.lil...@redpoint.net>
Subject RE: HDFS open file limit
Date Mon, 27 Jan 2014 19:01:48 GMT
What exception would I expect to get if this limit were exceeded?
john

From: Harsh J [mailto:harsh@cloudera.com]
Sent: Monday, January 27, 2014 8:12 AM
To: <user@hadoop.apache.org>
Subject: Re: HDFS open file limit


Hi John,

There is a concurrent-connection limit on the DNs, set by default to a maximum of
4,096 parallel threaded connections for reading or writing blocks. It is expandable
via configuration, but the default usually suffices even for fairly large workloads,
since replicas help spread the read load across DataNodes.

Beyond this you will mostly just run into configurable OS limitations, such as the
per-process open file descriptor limit (ulimit -n).
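
For reference, the limit in question is governed by dfs.datanode.max.transfer.threads
in hdfs-site.xml on each DataNode (known as dfs.datanode.max.xcievers in older
releases), with a default of 4096. A minimal sketch of raising it; the value 8192 is
only illustrative, and DataNodes must be restarted to pick up the change:

<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <!-- Default is 4096; 8192 here is only an example value. -->
  <value>8192</value>
</property>

When the limit is hit, the DataNode refuses the new block transfer and the client
generally surfaces that as an IOException; the exact message varies by release.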
On Jan 26, 2014 11:03 PM, "John Lilley" <john.lilley@redpoint.net> wrote:
I have an application that wants to open a large set of files in HDFS simultaneously.  Are
there hard or practical limits to what can be opened at once by a single process?  By the
entire cluster in aggregate?
Thanks
John
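
To make the scenario concrete, here is a minimal sketch of a single client process
opening many files through the standard FileSystem API; the path pattern and count
are hypothetical. Each stream you actively read from ties up a DataNode transfer
thread and a client-side socket (an OS file descriptor), which is where the limits
discussed above come in.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ManyOpenFiles {
  public static void main(String[] args) throws IOException {
    // Picks up core-site.xml / hdfs-site.xml from the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    List<FSDataInputStream> streams = new ArrayList<FSDataInputStream>();
    try {
      for (int i = 0; i < 10000; i++) {   // count is illustrative
        // open() fetches block locations from the NameNode; DataNode
        // connections (and their transfer threads) are only consumed
        // once reads actually occur.
        streams.add(fs.open(new Path("/data/part-" + i)));
      }
      // ... read from the streams here ...
    } finally {
      // Close promptly to release sockets and file descriptors.
      for (FSDataInputStream in : streams) {
        in.close();
      }
    }
  }
}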


