hadoop-general mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: problem to write on HDFS
Date Tue, 15 Mar 2011 14:19:13 GMT
On 14/03/11 17:48, Alessandro Binhara wrote:
> Hello ...
>
> I have a servlet on Tomcat that opens HDFS and writes a simple file with
> the content of each POST request.
> Well, in our first test we had 14,000 requests per second.
> My servlet starts many threads to write to the filesystem.
>
> I got this message on Tomcat:
>
> Mar 11, 2011 6:00:20 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
> SEVERE: Socket accept failed
> java.net.SocketException: Too many open files
>
>
> Is HDFS slow at writing files?
> What would be a better strategy for writing to HDFS?
>
> In the real application we will have 100,000 requests per second to save
> to HDFS.
>
> thanks..
>

Aaron is right: your expectations of Hadoop HDFS are probably wrong, but 
you haven't got that far yet. You are running out of socket handles, 
which means you need to increase your ulimits so that more sockets can 
be open. Search for the error string you've seen and you'll find 
details on the problem and various solutions; it's a problem that has 
nothing to do with Hadoop at all.
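
To make the handle pressure concrete, here is a minimal sketch of a 
per-request HDFS write that keeps descriptor usage bounded. This is an 
assumption about how the servlet might be structured, not code from this 
thread: the class name, method, and /posts/ path layout are all 
hypothetical. The two points that matter are that FileSystem.get() hands 
back a cached, shared instance, and that the output stream is closed as 
soon as the write completes.

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HdfsPostWriter {

      private final Configuration conf = new Configuration();

      // Hypothetical per-request entry point; the target path layout is
      // illustrative only.
      public void writePost(String requestId, byte[] body) throws IOException {
          // FileSystem.get() returns a cached instance per (scheme,
          // authority, user), so calling it per request is cheap; do not
          // close it between requests.
          FileSystem fs = FileSystem.get(conf);
          FSDataOutputStream out = fs.create(new Path("/posts/" + requestId));
          try {
              out.write(body);
          } finally {
              // Closing promptly releases the DataNode sockets and local
              // descriptors behind the stream; leaking these is a classic
              // way to hit "Too many open files".
              out.close();
          }
      }
  }

Even with prompt closes, thousands of concurrent writes still mean 
thousands of concurrently open sockets, which is why the ulimit also 
has to go up.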

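If you want to confirm the diagnosis from inside the JVM before and 
after raising the limit, the Sun/OpenJDK management beans expose the 
descriptor counts. A small sketch, assuming a Unix-like host and a JDK 
that ships com.sun.management (the FdWatch class name is made up here):

  import java.lang.management.ManagementFactory;
  import java.lang.management.OperatingSystemMXBean;
  import com.sun.management.UnixOperatingSystemMXBean;

  public class FdWatch {
      public static void main(String[] args) {
          OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
          // The Unix-specific bean is only present on Unix-like JVMs.
          if (os instanceof UnixOperatingSystemMXBean) {
              UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
              System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                      + " / max: " + unix.getMaxFileDescriptorCount());
          } else {
              System.out.println("descriptor counts not exposed on this JVM");
          }
      }
  }

When the first number approaches the second under load, you are about 
to see the SocketException again.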
