hadoop-common-user mailing list archives

From Amogh Vasekar <am...@yahoo-inc.com>
Subject RE: Program crashed when volume of data getting large
Date Wed, 23 Sep 2009 13:25:27 GMT
Please check the namenode heap usage. Your cluster may have too many files to handle,
or too little free space. Heap usage is generally shown in the namenode web UI. This is
one of the causes I have seen for that timeout.
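A minimal sketch of one way to raise the daemon heap on a cluster of that era; the 2000 MB value is an illustrative assumption, not a recommendation, and the right size depends on how many files and blocks the namenode tracks:

```shell
# conf/hadoop-env.sh -- HADOOP_HEAPSIZE sets the maximum JVM heap
# (in MB) for the Hadoop daemons, including the namenode; the stock
# default is 1000. Restart the namenode after changing it.
export HADOOP_HEAPSIZE=2000
```

The namenode web UI (normally on port 50070) reports heap used vs. total on its front page, and `hadoop fs -count /` prints the directory and file counts that drive that usage, since the namenode keeps an in-memory object for every file and block.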

-----Original Message-----
From: Kunsheng Chen [mailto:keyek@yahoo.com] 
Sent: Wednesday, September 23, 2009 6:21 PM
To: common-user@hadoop.apache.org
Subject: Program crashed when volume of data getting large

Hi everyone,

I am running two map-reduce programs. They were working fine, but when the data grew to
around 900MB (50,000+ files), strange things started happening, with messages like the one below:

'Communication problem with server: java.net.SocketTimeoutException: timed out waiting for
rpc response'

There are also other messages such as "fail to allocate memory".

The strange thing is that the program keeps running and shows map and reduce percentages after
those errors... it seems to still be progressing, at a slow pace.

Does anyone have any idea?



