hadoop-hdfs-user mailing list archives

From Harsh J <qwertyman...@gmail.com>
Subject Re: problem to write on HDFS
Date Mon, 14 Mar 2011 18:02:05 GMT
You're running into ulimit issues (you have hit the limit on the number
of open files allowed for the user). It is not an uncommon problem, and
some web-searching should help you solve it :)

http://www.google.com/search?q=Too+many+open+files

Also, do remember to close your open files and file-system connections
when you're done with them for a task.
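
For instance, something along these lines (a minimal sketch against the
stock FileSystem API; the HdfsWriter class and the idea of sharing one
handle per webapp are just illustrative, not your actual code) keeps
descriptor usage bounded:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriter {
  // Share one FileSystem handle for the whole webapp instead of
  // opening one per request; each open handle holds sockets that
  // count against the process's file-descriptor limit.
  private final FileSystem fs;

  public HdfsWriter(Configuration conf) throws IOException {
    this.fs = FileSystem.get(conf);
  }

  public void write(Path path, byte[] data) throws IOException {
    FSDataOutputStream out = fs.create(path);
    try {
      out.write(data);
    } finally {
      out.close(); // release the stream even if the write fails
    }
  }

  public void shutdown() throws IOException {
    fs.close(); // close the shared handle once, at webapp shutdown
  }
}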

On Mon, Mar 14, 2011 at 11:18 PM, Alessandro Binhara <binhara@gmail.com> wrote:
> Hello ...
> I have a servlet on Tomcat, and it opens HDFS and writes a simple file
> with the content of the POST information.
> Well, in the first test we had 14,000 requests per second.
> My servlet starts many threads to write to the filesystem.
> I got this message on Tomcat:
> Mar 11, 2011 6:00:20 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
> SEVERE: Socket accept failed
> java.net.SocketException: Too many open files
>
> Is HDFS slow to write a file?
> What is a better strategy to write to HDFS?
> In the real application we will have 100,000 requests per second to save
> in HDFS.
> thanks..



-- 
Harsh J
http://harshj.com
