hadoop-common-user mailing list archives

From ql yan <yql...@gmail.com>
Subject about hadoop:reduce could not read map's output
Date Thu, 10 Sep 2009 12:29:50 GMT
Hi everyone!
I set up a Hadoop cluster on 4 PCs and ran into a problem with
hadoop-common. When I ran the command 'bin/hadoop jar hadoop-*-examples.jar
wordcount input output', the map tasks completed quickly, but the reduce
phase took very long to complete. I thought it was caused by my configuration,
so I changed hadoop-site.xml many times, but that didn't help. Here is my
hadoop-site.xml:


name                                      value
fs.default.name                           hdfs://192.*.*.*:9000
mapred.job.tracker                        192.*.*.*:9001
hadoop.tmp.dir                            /home/*/hadoop/tmp
dfs.replication                           2
dfs.name.dir                              /home/*/hadoop/file/name
dfs.data.dir                              /home/*/hadoop/file/data
mapred.system.dir                         /home/*/hadoop/file/mapred/system
mapred.local.dir                          /home/*/hadoop/file/mapred/local
mapred.temp.dir                           /home/*/hadoop/file/mapred/tmp
mapred.tasktracker.map.tasks.maximum      2
mapred.tasktracker.reduce.tasks.maximum   2
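(The listing above appears to be a flattened rendering of the XML file. Assuming the standard Hadoop property syntax of that era, the same settings in hadoop-site.xml would look roughly like this; the '*' placeholders are left exactly as in the original message, and the remaining properties follow the same pattern:)

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.*.*.*:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.*.*.*:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/*/hadoop/file/mapred/local</value>
  </property>
  <!-- ...the other properties from the table above, in the same form -->
</configuration>
```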

Here is the error reported in one of the tasktracker logs:

2009-09-10 20:08:35,169 INFO org.apache.hadoop.mapred.TaskTracker:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
taskTracker/jobcache/job_200909102001_0001/attempt_200909102001_0001_m_000005_0/output/file.out
in any of the configured local directories

I am looking forward to your reply! Thanks!

Yan
