hadoop-common-user mailing list archives

From "Christophe Taton" <ta...@apache.org>
Subject Task failing, cause FileSystem close?
Date Tue, 17 Jun 2008 11:18:45 GMT
Hi all,

I am experiencing (through my students) the following error on a
28-node cluster running Hadoop 0.16.4.
Some jobs fail with many map tasks aborting with this error message:

2008-06-17 12:25:01,512 WARN org.apache.hadoop.mapred.TaskTracker:
Error running child
java.io.IOException: Filesystem closed
	at org.apache.hadoop.dfs.DFSClient.checkOpen(DFSClient.java:166)
	at org.apache.hadoop.dfs.DFSClient.access$500(DFSClient.java:58)
	at org.apache.hadoop.dfs.DFSClient$DFSInputStream.close(DFSClient.java:1103)
	at java.io.FilterInputStream.close(FilterInputStream.java:155)
	at org.apache.hadoop.io.SequenceFile$Reader.close(SequenceFile.java:1541)
	at org.apache.hadoop.mapred.SequenceFileRecordReader.close(SequenceFileRecordReader.java:125)
	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:155)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:212)
	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2084)
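One hypothesis we are considering: since FileSystem.get() can hand back a
cached instance that the framework itself is using, a close() somewhere in
user map code would invalidate the record reader's handle too, producing
exactly this "Filesystem closed" at task cleanup. Here is a self-contained
sketch of that sharing hazard (illustrative names only, not Hadoop's actual
classes):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, NOT Hadoop's real FileSystem: a get()-style cache
// hands the SAME instance to every caller, so one caller's close()
// invalidates the handle for everyone else sharing it.
class SharedFs {
    private static final Map<String, SharedFs> CACHE = new HashMap<>();
    private boolean open = true;

    // Returns the cached instance for this URI, creating it on first use.
    static synchronized SharedFs get(String uri) {
        return CACHE.computeIfAbsent(uri, u -> new SharedFs());
    }

    void close() {
        open = false;
    }

    String read() throws IOException {
        if (!open) {
            throw new IOException("Filesystem closed");
        }
        return "data";
    }
}

public class FsCacheDemo {
    public static void main(String[] args) {
        SharedFs userHandle = SharedFs.get("hdfs://nn:9000");   // e.g. user map code
        SharedFs readerHandle = SharedFs.get("hdfs://nn:9000"); // e.g. the record reader

        userHandle.close(); // user code "cleans up" its handle...
        try {
            readerHandle.read(); // ...and the shared handle is now dead
        } catch (IOException e) {
            System.out.println(e.getMessage()); // prints "Filesystem closed"
        }
    }
}
```

If that is the mechanism, the fix on the user side would simply be to stop
closing the FileSystem in map code and let the framework manage its lifetime.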

Any clue why this would happen?

Thanks in advance,
